History log of /src/lib/libpthread/arch/arm |
Revision | Date | Author | Comments |
1.5 | 16-May-2009 |
ad | Remove unused code that's confusing when using cscope/opengrok.
|
1.4 | 21-Aug-2004 |
rearnsha | Use RET macro for returning.
|
1.3 | 05-Apr-2003 |
bjh21 | NetBSD/acorn26 has used APCS-32 for years, so unifdef -U__APCS_26__.
|
1.2 | 18-Jan-2003 |
thorpej | Merge the nathanw_sa branch.
|
1.1 | 16-Nov-2001 |
thorpej | branches: 1.1.2; file _context_u.S was initially added on branch nathanw_sa.
|
1.1.2.8 | 27-Nov-2001 |
bjh21 | In GETC(), rather than faking a CPSR, save the real one if we can, and zero it otherwise (i.e. if we're on a 26-bit CPU). I suspect this isn't actually necessary, but it's cheap, and it should make it easier for debuggers and suchlike to work out what's going on.
|
1.1.2.7 | 22-Nov-2001 |
bjh21 | _UC_USER | _UC_CPU happens to be a valid ARM constant, so we can load it in a single cycle.
|
1.1.2.6 | 22-Nov-2001 |
thorpej | In the GETC() macro, don't actually save r14 (lr) and r15 (pc). r14 is caller-save, so we don't need to save it in the _REG_LR slot. Instead, we save it in the _REG_PC slot so that when the context is resumed, we end up at the insn after the call site.
Fixes the "cu6" test in Nathan's test suite. "cu1" - "cu5" also function correctly (as they did before this change).
|
1.1.2.5 | 21-Nov-2001 |
thorpej | Some improvements, based on suggestions from Ben Harris:
* Don't need to save CPSR in GETC(), since it's a caller-saved register. However, stuff a fake one (that indicates USR32 mode) into the ucontext in case it is used later in a setcontext(2) system call (as is done in one of Nathan's regression tests; this last bit from me).
* Only restore CPSR if ! _UC_USER (i.e. context was created by getcontext(2)).
* Determine at run-time if we're in USR26 or USR32 by forcing the Z flag to be set and then comparing the PC portion of R15 to all of R15 (if they're not equal, we're in 26-bit mode).
* Restore banked regs properly in 26-bit mode.
Plus one from me: Don't or _UC_USER | _UC_CPU into uc_flags; set uc_flags to that value instead.
|
1.1.2.4 | 20-Nov-2001 |
thorpej | Clean up SAVE_FLAGS/RESTORE_FLAGS, and make sure to fetch the flags from the correct place in the ucontext (not past the END of the ucontext) in the SETC() macro.
|
1.1.2.3 | 20-Nov-2001 |
thorpej | Make sure to set _UC_CPU in _getcontext_u().
|
1.1.2.2 | 18-Nov-2001 |
thorpej | Note that we need to think about what to do regarding hard and soft FP context.
|
1.1.2.1 | 16-Nov-2001 |
thorpej | First cut at machine-dependent pthread bits for ARM.
|
1.8 | 16-May-2009 |
ad | Remove unused code that's confusing when using cscope/opengrok.
|
1.7 | 02-Mar-2007 |
ad | Remove the PTHREAD_SA option. If M:N threads is reimplemented it's better off done with a separate library.
|
1.6 | 07-Sep-2003 |
cl | Remove possible race condition in upcall recycling.
|
1.5 | 17-Jul-2003 |
nathanw | Adapt to structure name changes.
|
1.4 | 26-Jun-2003 |
nathanw | Remove PT_SLEEPUC and add PT_TRAPUC.
|
1.3 | 13-Mar-2003 |
thorpej | Include <sys/types.h> before <sys/lock.h>, shuffle <ucontext.h>.
|
1.2 | 18-Jan-2003 |
thorpej | Merge the nathanw_sa branch.
|
1.1 | 16-Nov-2001 |
thorpej | branches: 1.1.2; file genassym.cf was initially added on branch nathanw_sa.
|
1.1.2.2 | 22-Nov-2001 |
thorpej | Move the _REG_PC definition up with the other _REG_* definitions.
|
1.1.2.1 | 16-Nov-2001 |
thorpej | First cut at machine-dependent pthread bits for ARM.
|
1.13 | 25-May-2023 |
riastradh | libpthread: New pthread__smt_wait to put CPU in low power for spin.
This is now distinct from pthread__smt_pause, which is for spin lock backoff with no paired wakeup.
On Arm, there is a single-bit event register per CPU, and there are two instructions to manage it:
- wfe, wait for event -- if event register is clear, enter low power mode and wait until event register is set; then exit low power mode and clear event register
- sev, signal event -- sets event register on all CPUs (other circumstances like interrupts also set the event register and cause wfe to wake)
These can be used to reduce the power consumption of spinning for a lock, but only if they are actually paired -- if there's no sev, wfe might hang indefinitely. Currently only pthread_spin(3) actually pairs them; the other lock primitives (internal lock, mutex, rwlock) do not -- they have spin lock backoff loops, but no corresponding wakeup to cancel a wfe.
It may be worthwhile to teach the other lock primitives to pair wfe/sev, but that requires some performance measurement to verify it's actually worthwhile. So for now, we just make sure not to use wfe when there's no sev, and keep everything else the same -- this should fix severe performance degradation in libpthread on Arm without hurting anything else.
No change in the generated code on amd64 and i386. No change in the generated code for pthread_spin.c on arm and aarch64 -- changes only the generated code for pthread_lock.c, pthread_mutex.c, and pthread_rwlock.c, as intended.
PR port-arm/57437
XXX pullup-10
|
1.12 | 25-May-2023 |
riastradh | libpthread: Use __nothing, not /* nothing */, for empty macros.
No functional change intended -- just safer to do it this way in case the macros are used in if branches or comma expressions.
PR port-arm/57437 (pthread__smt_pause/wake issue)
XXX pullup-10
|
1.11 | 22-Nov-2018 |
skrll | branches: 1.11.2; 1.11.10; G/C __APCS_26__ support
|
1.10 | 17-Jul-2017 |
skrll | branches: 1.10.2; 1.10.4; 1.10.6; Typo in comment
|
1.9 | 15-Aug-2013 |
matt | branches: 1.9.18; Use the thumb1 versions of sev/wfe for thumb && armv6+. If using armv5t, don't do anything for thumb.
|
1.8 | 19-Sep-2012 |
matt | Use .inst instead of wfe/sev to shut up gas.
|
1.7 | 16-Aug-2012 |
matt | branches: 1.7.2; Add a pthread__smt_wake and add support for it on arm along with pthread__smt_pause. These are implemented using the ARM instructions SEV (wake) and WFE (pause). These are treated as NOPs on ARM CPUs that don't support them.
|
1.6 | 25-Jan-2011 |
christos | branches: 1.6.4; make pthread__sp unsigned long.
|
1.5 | 16-May-2009 |
ad | branches: 1.5.2; Remove unused code that's confusing when using cscope/opengrok.
|
1.4 | 24-Dec-2005 |
perry | Remove leading __ from __(const|inline|signed|volatile) -- it is obsolete.
|
1.3 | 18-Jan-2003 |
christos | add missing backslash
|
1.2 | 18-Jan-2003 |
thorpej | Merge the nathanw_sa branch.
|
1.1 | 16-Nov-2001 |
thorpej | branches: 1.1.2; file pthread_md.h was initially added on branch nathanw_sa.
|
1.1.2.4 | 17-Jan-2003 |
nathanw | Add _INITCONTEXT_U_MD() code that sets up the PC or CPSR.
Adjust PTHREAD_UCONTEXT_TO_REG() to set a plausible value in reg->r_cpsr when _UC_USER is set in the ucontext; otherwise, GDB gets very confused and thinks it's dealing with 26-bit ARM state.
|
1.1.2.3 | 20-Dec-2002 |
thorpej | Update for mcontext_t changes.
|
1.1.2.2 | 06-Aug-2002 |
thorpej | Add glue for libpthread_dbg.
|
1.1.2.1 | 16-Nov-2001 |
thorpej | First cut at machine-dependent pthread bits for ARM.
|
1.5.2.1 | 08-Feb-2011 |
bouyer | Sync with HEAD
|
1.6.4.2 | 22-May-2014 |
yamt | sync with head.
for a reference, the tree before this commit was tagged as yamt-pagecache-tag8.
this commit was split into small chunks to avoid a limitation of cvs. ("Protocol error: too many arguments")
|
1.6.4.1 | 30-Oct-2012 |
yamt | sync with head
|
1.7.2.2 | 20-Aug-2014 |
tls | Rebase to HEAD as of a few days ago.
|
1.7.2.1 | 20-Nov-2012 |
tls | Resync to 2012-11-19 00:00:00 UTC
|
1.9.18.1 | 04-Aug-2023 |
martin | Pull up following revision(s) (requested by riastradh in ticket #1878):
lib/libpthread/arch/x86_64/pthread_md.h: revision 1.13
lib/libpthread/pthread_int.h: revision 1.110
lib/libpthread/pthread_int.h: revision 1.111
lib/libpthread/arch/i386/pthread_md.h: revision 1.21
lib/libpthread/arch/arm/pthread_md.h: revision 1.12
lib/libpthread/arch/arm/pthread_md.h: revision 1.13
lib/libpthread/pthread_spin.c: revision 1.11
lib/libpthread/arch/aarch64/pthread_md.h: revision 1.2
libpthread: Use __nothing, not /* nothing */, for empty macros.
No functional change intended -- just safer to do it this way in case the macros are used in if branches or comma expressions.
PR port-arm/57437 (pthread__smt_pause/wake issue)
libpthread: New pthread__smt_wait to put CPU in low power for spin.
This is now distinct from pthread__smt_pause, which is for spin lock backoff with no paired wakeup.
On Arm, there is a single-bit event register per CPU, and there are two instructions to manage it:
- wfe, wait for event -- if event register is clear, enter low power mode and wait until event register is set; then exit low power mode and clear event register
- sev, signal event -- sets event register on all CPUs (other circumstances like interrupts also set the event register and cause wfe to wake)
These can be used to reduce the power consumption of spinning for a lock, but only if they are actually paired -- if there's no sev, wfe might hang indefinitely. Currently only pthread_spin(3) actually pairs them; the other lock primitives (internal lock, mutex, rwlock) do not -- they have spin lock backoff loops, but no corresponding wakeup to cancel a wfe.
It may be worthwhile to teach the other lock primitives to pair wfe/sev, but that requires some performance measurement to verify it's actually worthwhile. So for now, we just make sure not to use wfe when there's no sev, and keep everything else the same -- this should fix severe performance degradation in libpthread on Arm without hurting anything else.
No change in the generated code on amd64 and i386. No change in the generated code for pthread_spin.c on arm and aarch64 -- changes only the generated code for pthread_lock.c, pthread_mutex.c, and pthread_rwlock.c, as intended.
PR port-arm/57437
|
1.10.6.1 | 10-Jun-2019 |
christos | Sync with HEAD
|
1.10.4.1 | 26-Nov-2018 |
pgoyette | Sync with HEAD, resolve a couple of conflicts
|
1.10.2.2 | 17-Jul-2017 |
skrll | 2167115
|
1.10.2.1 | 17-Jul-2017 |
skrll | file pthread_md.h was added on branch perseant-stdc-iso10646 on 2017-07-17 20:24:08 +0000
|
1.11.10.1 | 01-Aug-2023 |
martin | Pull up following revision(s) (requested by riastradh in ticket #296):
lib/libpthread/arch/x86_64/pthread_md.h: revision 1.13
lib/libpthread/pthread_int.h: revision 1.110
lib/libpthread/pthread_int.h: revision 1.111
lib/libpthread/arch/i386/pthread_md.h: revision 1.21
lib/libpthread/arch/arm/pthread_md.h: revision 1.12
lib/libpthread/arch/arm/pthread_md.h: revision 1.13
lib/libpthread/pthread_spin.c: revision 1.11
lib/libpthread/arch/aarch64/pthread_md.h: revision 1.2
libpthread: Use __nothing, not /* nothing */, for empty macros.
No functional change intended -- just safer to do it this way in case the macros are used in if branches or comma expressions.
PR port-arm/57437 (pthread__smt_pause/wake issue)
libpthread: New pthread__smt_wait to put CPU in low power for spin.
This is now distinct from pthread__smt_pause, which is for spin lock backoff with no paired wakeup.
On Arm, there is a single-bit event register per CPU, and there are two instructions to manage it:
- wfe, wait for event -- if event register is clear, enter low power mode and wait until event register is set; then exit low power mode and clear event register
- sev, signal event -- sets event register on all CPUs (other circumstances like interrupts also set the event register and cause wfe to wake)
These can be used to reduce the power consumption of spinning for a lock, but only if they are actually paired -- if there's no sev, wfe might hang indefinitely. Currently only pthread_spin(3) actually pairs them; the other lock primitives (internal lock, mutex, rwlock) do not -- they have spin lock backoff loops, but no corresponding wakeup to cancel a wfe.
It may be worthwhile to teach the other lock primitives to pair wfe/sev, but that requires some performance measurement to verify it's actually worthwhile. So for now, we just make sure not to use wfe when there's no sev, and keep everything else the same -- this should fix severe performance degradation in libpthread on Arm without hurting anything else.
No change in the generated code on amd64 and i386. No change in the generated code for pthread_spin.c on arm and aarch64 -- changes only the generated code for pthread_lock.c, pthread_mutex.c, and pthread_rwlock.c, as intended.
PR port-arm/57437
|
1.11.2.1 | 04-Aug-2023 |
martin | Pull up following revision(s) (requested by riastradh in ticket #1700):
lib/libpthread/arch/x86_64/pthread_md.h: revision 1.13
lib/libpthread/pthread_int.h: revision 1.110
lib/libpthread/pthread_int.h: revision 1.111
lib/libpthread/arch/i386/pthread_md.h: revision 1.21
lib/libpthread/arch/arm/pthread_md.h: revision 1.12
lib/libpthread/arch/arm/pthread_md.h: revision 1.13
lib/libpthread/pthread_spin.c: revision 1.11
lib/libpthread/arch/aarch64/pthread_md.h: revision 1.2
libpthread: Use __nothing, not /* nothing */, for empty macros.
No functional change intended -- just safer to do it this way in case the macros are used in if branches or comma expressions.
PR port-arm/57437 (pthread__smt_pause/wake issue)
libpthread: New pthread__smt_wait to put CPU in low power for spin.
This is now distinct from pthread__smt_pause, which is for spin lock backoff with no paired wakeup.
On Arm, there is a single-bit event register per CPU, and there are two instructions to manage it:
- wfe, wait for event -- if event register is clear, enter low power mode and wait until event register is set; then exit low power mode and clear event register
- sev, signal event -- sets event register on all CPUs (other circumstances like interrupts also set the event register and cause wfe to wake)
These can be used to reduce the power consumption of spinning for a lock, but only if they are actually paired -- if there's no sev, wfe might hang indefinitely. Currently only pthread_spin(3) actually pairs them; the other lock primitives (internal lock, mutex, rwlock) do not -- they have spin lock backoff loops, but no corresponding wakeup to cancel a wfe.
It may be worthwhile to teach the other lock primitives to pair wfe/sev, but that requires some performance measurement to verify it's actually worthwhile. So for now, we just make sure not to use wfe when there's no sev, and keep everything else the same -- this should fix severe performance degradation in libpthread on Arm without hurting anything else.
No change in the generated code on amd64 and i386. No change in the generated code for pthread_spin.c on arm and aarch64 -- changes only the generated code for pthread_lock.c, pthread_mutex.c, and pthread_rwlock.c, as intended.
PR port-arm/57437
|
1.9 | 02-Mar-2007 |
ad | Remove the PTHREAD_SA option. If M:N threads is reimplemented it's better off done with a separate library.
|
1.8 | 21-Aug-2004 |
rearnsha | Use RET macro for returning.
|
1.7 | 07-Sep-2003 |
cl | Remove possible race condition in upcall recycling.
|
1.6 | 24-Jul-2003 |
skrll | Typo in comment.
|
1.5 | 26-Jun-2003 |
nathanw | Adapt to pt_trapuc: change STACK_SWITCH to check for a value in pt_trapuc and use it preferentially to a value in pt_uc, clearing it once on the new stack. Move stores into pt_uc back to before the stack switch; storing after the stack switch opened a one-instruction race condition where an upcall that had just started a chain could be preempted again, and would bomb when restarted due to its pt_uc not yet having been updated. Now that pt_trapuc is what the upcall code writes to, it is safe to store to pt_uc before switching stacks.
Remove obsolete pt_sleepuc code.
|
1.4 | 12-Jun-2003 |
nathanw | Two fixes: * In switch-away cases, write PT_SWITCHTO last (after PT_SWITCHTOUC), so that pthread__resolve_locks() doesn't see an empty SWITCHTOUC value. This also permits pthread__resolve_locks() to use the presence of PT_SWITCHTO as a sign that the thread has done all of its necessary chain work.
* Make the return-point of pthread__switch global and visible, so that its address can be compared to the PC of a thread, again as a sign that its chain-work is done.
(other architectures in progress, after they get the *previous* asm fix...)
|
1.3 | 05-Apr-2003 |
bjh21 | NetBSD/acorn26 has used APCS-32 for years, so unifdef -U__APCS_26__.
|
1.2 | 18-Jan-2003 |
thorpej | Merge the nathanw_sa branch.
|
1.1 | 16-Nov-2001 |
thorpej | branches: 1.1.2; file pthread_switch.S was initially added on branch nathanw_sa.
|
1.1.2.3 | 14-Jan-2003 |
nathanw | Rewrite pthread__switch() to avoid storing the new saved-context pointer while still using the old stack. This avoids a race condition with pthread__find_interrupted() where a thread could lose its old state if it was interrupted in a certain window in pthread__switch().
|
1.1.2.2 | 22-Nov-2001 |
bjh21 | Correct a typo in STACK_SWITCH(), where it used the wrong register to compute the stack pointer. This patch gets test "cond2" working on ARM.
|
1.1.2.1 | 16-Nov-2001 |
thorpej | First cut at machine-dependent pthread bits for ARM.
|