History log of /src/lib/libpthread/pthread_spin.c
Revision | Date | Author | Comments

1.11 | 25-May-2023 | riastradh | libpthread: New pthread__smt_wait to put CPU in low power for spin.
This is now distinct from pthread__smt_pause, which is for spin lock backoff with no paired wakeup.
On Arm, there is a single-bit event register per CPU, and there are two instructions to manage it:
- wfe, wait for event -- if event register is clear, enter low power mode and wait until event register is set; then exit low power mode and clear event register
- sev, signal event -- sets event register on all CPUs (other circumstances like interrupts also set the event register and cause wfe to wake)
These can be used to reduce the power consumption of spinning for a lock, but only if they are actually paired -- if there's no sev, wfe might hang indefinitely. Currently only pthread_spin(3) actually pairs them; the other lock primitives (internal lock, mutex, rwlock) do not -- they have spin lock backoff loops, but no corresponding wakeup to cancel a wfe.
It may be worthwhile to teach the other lock primitives to pair wfe/sev, but that requires performance measurement to verify it actually helps. So for now, we just make sure not to use wfe when there's no sev, and keep everything else the same -- this should fix severe performance degradation in libpthread on Arm without hurting anything else.
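To make the pairing concrete, here is a minimal spin-lock sketch in C (hypothetical names and structure -- this is not the actual pthread_spin(3) code). It assumes an Arm target, since wfe/sev are Arm instructions, and GCC/Clang __atomic builtins:

    /*
     * Sketch only: the waiter's wfe is always matched by the
     * releaser's sev, so the low-power wait cannot hang.
     */
    static volatile unsigned int lock_word;	/* 0 = unlocked, 1 = locked */

    static void
    spin_lock_sketch(void)
    {

            while (__atomic_exchange_n(&lock_word, 1, __ATOMIC_ACQUIRE) != 0) {
                    /* Low-power wait; any sev (or interrupt) wakes the wfe. */
                    while (lock_word != 0)
                            __asm __volatile("wfe" ::: "memory");
            }
    }

    static void
    spin_unlock_sketch(void)
    {

            __atomic_store_n(&lock_word, 0, __ATOMIC_RELEASE);
            /* Paired wakeup: set the event register on all CPUs. */
            __asm __volatile("sev" ::: "memory");
    }

The event register makes the pairing race-free: if the unlocker's sev fires between the waiter's load of lock_word and its wfe, the wfe consumes the pending event and returns immediately instead of sleeping.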
No change in the generated code on amd64 and i386. No change in the generated code for pthread_spin.c on arm and aarch64 -- changes only the generated code for pthread_lock.c, pthread_mutex.c, and pthread_rwlock.c, as intended.
PR port-arm/57437
XXX pullup-10

1.10 | 10-Apr-2022 | riastradh | branches: 1.10.2; pthread: Nix trailing whitespace.

1.9 | 12-Feb-2022 | riastradh | libpthread: Move namespacing include to top of .c files.
Stuff like libc's namespace.h, or atomic_op_namespace.h, which does namespacing tricks like `#define atomic_cas_uint _atomic_cas_uint', has to go at the top of each .c file. If it goes in the middle, it may be too late to affect the declarations, resulting in compile errors.
I tripped over this by including <sys/atomic.h> in mips <machine/lock.h>.
(Maybe we should create a new pthread_namespace.h file for the purpose, but this'll do for now.)
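A schematic of the ordering hazard (the header and symbol names are real, but the fragment is illustrative rather than the actual atomic_op_namespace.h):

    /*
     * Correct: the rename is in effect before any declaration, so the
     * header's declaration and every later use agree on _atomic_cas_uint.
     */
    #define atomic_cas_uint _atomic_cas_uint
    #include <sys/atomic.h>

    /*
     * Broken ordering: if <sys/atomic.h> were pulled in first (e.g.
     * indirectly via <machine/lock.h>), atomic_cas_uint would be declared
     * under its public name, and a later #define would rewrite only the
     * subsequent uses to _atomic_cas_uint -- a symbol with no declaration,
     * hence the compile errors.
     */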

1.8 | 05-Feb-2020 | kamil | Retire ifdef ERRORCHECK in pthread(3)
It has been enabled unconditionally since 2003 and is used only for rwlocks and spinlocks.
LLVM sanitizers assume that these checks are always enabled.

1.7 | 31-Jan-2020 | kamil | Refactor libpthread checks for invalid arguments
Switch from manual functions to pthread__error().

1.6 | 16-Aug-2012 | matt | branches: 1.6.24; 1.6.32; 1.6.34; Add pthread__smt_wake, and add support for it on arm along with pthread__smt_pause. These are implemented using the ARM instructions SEV (wake) and WFE (pause), which are treated as NOPs on ARM CPUs that don't support them.
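A sketch of what the Arm hooks amount to as of this revision (the exact definitions live in arch/arm/pthread_md.h and may be spelled differently):

    /*
     * WFE/SEV are hint instructions, so -- as the commit notes -- they
     * execute as NOPs on cores that lack the event mechanism.
     */
    #define pthread__smt_pause()	__asm __volatile("wfe" ::: "memory")
    #define pthread__smt_wake()	__asm __volatile("sev" ::: "memory")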

1.5 | 28-Apr-2008 | martin | branches: 1.5.4; 1.5.8; Remove clauses 3 and 4 from TNF licenses

1.4 | 05-Jan-2008 | ad | branches: 1.4.4; machine/lock.h, not sys/lock.h

1.3 | 13-Nov-2007 | ad | For PR bin/37347:
- Override __libc_thr_init() instead of using our own constructor.
- Add pthread__getenv() and use it instead of getenv(). This is used before we are up and running, and unfortunately getenv() takes locks (see the sketch after this entry).
Other changes:
- Cache the spinlock vectors in pthread__st. Internal spinlock operations now take 1 function call instead of 3 (i386).
- Use pthread__self() internally, not pthread_self().
- Use __attribute__ ((visibility("hidden"))) in some places.
- Kill PTHREAD_MAIN_DEBUG.
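A minimal sketch of such a lock-free lookup (hypothetical implementation -- the real pthread__getenv() may differ): it walks environ by hand instead of calling getenv(), so it is safe to use before the library's locks are usable.

    #include <stddef.h>
    #include <string.h>

    extern char **environ;

    /* Hypothetical lock-free getenv() replacement for early startup. */
    static char *
    pthread__getenv_sketch(const char *name)
    {
            const size_t namelen = strlen(name);
            char **p;

            for (p = environ; *p != NULL; p++) {
                    /* Match "NAME=" at the start of each environment entry. */
                    if (strncmp(*p, name, namelen) == 0 && (*p)[namelen] == '=')
                            return &(*p)[namelen + 1];
            }
            return NULL;
    }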

1.2 | 10-Sep-2007 | skrll | Merge nick-csl-alignment.

1.1 | 16-Aug-2007 | ad | branches: 1.1.2; 1.1.4; Trim fat off libpthread internal spinlock operations. Makes a measurable improvement across the board.

1.1.4.3 | 10-Sep-2007 | skrll | Fix inverted test.

1.1.4.2 | 03-Sep-2007 | skrll | Sync with HEAD.

1.1.4.1 | 16-Aug-2007 | skrll | file pthread_spin.c was added on branch nick-csl-alignment on 2007-09-03 10:14:16 +0000

1.1.2.2 | 09-Jan-2008 | matt | sync with HEAD

1.1.2.1 | 06-Nov-2007 | matt | sync with HEAD

1.4.4.1 | 18-May-2008 | yamt | sync with head.

1.5.8.2 | 28-Apr-2008 | martin | Remove clauses 3 and 4 from TNF licenses

1.5.8.1 | 28-Apr-2008 | martin | file pthread_spin.c was added on branch christos-time_t on 2008-04-28 20:23:02 +0000

1.5.4.1 | 30-Oct-2012 | yamt | sync with head

1.6.34.1 | 04-Aug-2023 | martin | Pull up following revision(s) (requested by riastradh in ticket #1700):
lib/libpthread/arch/x86_64/pthread_md.h: revision 1.13
lib/libpthread/pthread_int.h: revision 1.110
lib/libpthread/pthread_int.h: revision 1.111
lib/libpthread/arch/i386/pthread_md.h: revision 1.21
lib/libpthread/arch/arm/pthread_md.h: revision 1.12
lib/libpthread/arch/arm/pthread_md.h: revision 1.13
lib/libpthread/pthread_spin.c: revision 1.11
lib/libpthread/arch/aarch64/pthread_md.h: revision 1.2
libpthread: Use __nothing, not /* nothing */, for empty macros.
No functional change intended -- just safer to do it this way in case the macros are used in if branches or comma expressions.
PR port-arm/57437 (pthread__smt_pause/wake issue)
libpthread: New pthread__smt_wait to put CPU in low power for spin.
This is now distinct from pthread__smt_pause, which is for spin lock backoff with no paired wakeup.
On Arm, there is a single-bit event register per CPU, and there are two instructions to manage it:
- wfe, wait for event -- if event register is clear, enter low power mode and wait until event register is set; then exit low power mode and clear event register
- sev, signal event -- sets event register on all CPUs (other circumstances like interrupts also set the event register and cause wfe to wake)
These can be used to reduce the power consumption of spinning for a lock, but only if they are actually paired -- if there's no sev, wfe might hang indefinitely. Currently only pthread_spin(3) actually pairs them; the other lock primitives (internal lock, mutex, rwlock) do not -- they have spin lock backoff loops, but no corresponding wakeup to cancel a wfe.
It may be worthwhile to teach the other lock primitives to pair wfe/sev, but that requires performance measurement to verify it actually helps. So for now, we just make sure not to use wfe when there's no sev, and keep everything else the same -- this should fix severe performance degradation in libpthread on Arm without hurting anything else.
No change in the generated code on amd64 and i386. No change in the generated code for pthread_spin.c on arm and aarch64 -- changes only the generated code for pthread_lock.c, pthread_mutex.c, and pthread_rwlock.c, as intended.
PR port-arm/57437

1.6.32.1 | 08-Apr-2020 | martin | Merge changes from current as of 20200406

1.6.24.1 | 04-Aug-2023 | martin | Pull up following revision(s) (requested by riastradh in ticket #1878):
lib/libpthread/arch/x86_64/pthread_md.h: revision 1.13
lib/libpthread/pthread_int.h: revision 1.110
lib/libpthread/pthread_int.h: revision 1.111
lib/libpthread/arch/i386/pthread_md.h: revision 1.21
lib/libpthread/arch/arm/pthread_md.h: revision 1.12
lib/libpthread/arch/arm/pthread_md.h: revision 1.13
lib/libpthread/pthread_spin.c: revision 1.11
lib/libpthread/arch/aarch64/pthread_md.h: revision 1.2
libpthread: Use __nothing, not /* nothing */, for empty macros.
No functional change intended -- just safer to do it this way in case the macros are used in if branches or comma expressions.
PR port-arm/57437 (pthread__smt_pause/wake issue)
libpthread: New pthread__smt_wait to put CPU in low power for spin.
This is now distinct from pthread__smt_pause, which is for spin lock backoff with no paired wakeup.
On Arm, there is a single-bit event register per CPU, and there are two instructions to manage it:
- wfe, wait for event -- if event register is clear, enter low power mode and wait until event register is set; then exit low power mode and clear event register
- sev, signal event -- sets event register on all CPUs (other circumstances like interrupts also set the event register and cause wfe to wake)
These can be used to reduce the power consumption of spinning for a lock, but only if they are actually paired -- if there's no sev, wfe might hang indefinitely. Currently only pthread_spin(3) actually pairs them; the other lock primitives (internal lock, mutex, rwlock) do not -- they have spin lock backoff loops, but no corresponding wakeup to cancel a wfe.
It may be worthwhile to teach the other lock primitives to pair wfe/sev, but that requires performance measurement to verify it actually helps. So for now, we just make sure not to use wfe when there's no sev, and keep everything else the same -- this should fix severe performance degradation in libpthread on Arm without hurting anything else.
No change in the generated code on amd64 and i386. No change in the generated code for pthread_spin.c on arm and aarch64 -- changes only the generated code for pthread_lock.c, pthread_mutex.c, and pthread_rwlock.c, as intended.
PR port-arm/57437

1.10.2.1 | 01-Aug-2023 | martin | Pull up following revision(s) (requested by riastradh in ticket #296):
lib/libpthread/arch/x86_64/pthread_md.h: revision 1.13
lib/libpthread/pthread_int.h: revision 1.110
lib/libpthread/pthread_int.h: revision 1.111
lib/libpthread/arch/i386/pthread_md.h: revision 1.21
lib/libpthread/arch/arm/pthread_md.h: revision 1.12
lib/libpthread/arch/arm/pthread_md.h: revision 1.13
lib/libpthread/pthread_spin.c: revision 1.11
lib/libpthread/arch/aarch64/pthread_md.h: revision 1.2
libpthread: Use __nothing, not /* nothing */, for empty macros.
No functional change intended -- just safer to do it this way in case the macros are used in if branches or comma expressions.
PR port-arm/57437 (pthread__smt_pause/wake issue)
libpthread: New pthread__smt_wait to put CPU in low power for spin.
This is now distinct from pthread__smt_pause, which is for spin lock backoff with no paired wakeup.
On Arm, there is a single-bit event register per CPU, and there are two instructions to manage it:
- wfe, wait for event -- if event register is clear, enter low power mode and wait until event register is set; then exit low power mode and clear event register
- sev, signal event -- sets event register on all CPUs (other circumstances like interrupts also set the event register and cause wfe to wake)
These can be used to reduce the power consumption of spinning for a lock, but only if they are actually paired -- if there's no sev, wfe might hang indefinitely. Currently only pthread_spin(3) actually pairs them; the other lock primitives (internal lock, mutex, rwlock) do not -- they have spin lock backoff loops, but no corresponding wakeup to cancel a wfe.
It may be worthwhile to teach the other lock primitives to pair wfe/sev, but that requires performance measurement to verify it actually helps. So for now, we just make sure not to use wfe when there's no sev, and keep everything else the same -- this should fix severe performance degradation in libpthread on Arm without hurting anything else.
No change in the generated code on amd64 and i386. No change in the generated code for pthread_spin.c on arm and aarch64 -- changes only the generated code for pthread_lock.c, pthread_mutex.c, and pthread_rwlock.c, as intended.
PR port-arm/57437