History log of /src/sys/kern/subr_workqueue.c
Revision  Date  Author  Comments
 1.48  01-Mar-2024  mrg check that l_nopreempt (preemption count) doesn't change after callbacks

check that the idle loop, soft interrupt handlers, workqueue, and xcall
callbacks do not modify the preemption count, in most cases, knowing it
should be 0 currently.

this work was originally done by simonb. cleaned up slightly and some
minor enhancements made by myself, with discussion with riastradh@.

other callback call sites could check this as well (such as MD interrupt
handlers, or really anything that includes a callback registration). x86
version to be committed separately.
 1.47  09-Aug-2023  riastradh workqueue(9): Factor out wq->wq_flags & WQ_FPU in workqueue_worker.

No functional change intended. Makes it clearer that s is
initialized when used.
 1.46  09-Aug-2023  riastradh workqueue(9): Sort includes.

No functional change intended.
 1.45  09-Aug-2023  riastradh workqueue(9): Avoid unnecessary mutex_exit/enter cycle each loop.
 1.44  09-Aug-2023  riastradh workqueue(9): Stop violating queue(3) internals.
 1.43  09-Aug-2023  riastradh workqueue(9): Sprinkle dtrace probes for workqueue_wait edge cases.

Let's make it easy to find out whether these are hit.
 1.42  09-Aug-2023  riastradh workqueue(9): Avoid touching running work items in workqueue_wait.

As soon as the workqueue function has been called, it is forbidden to
touch the struct work passed to it -- the function might free or
reuse the data structure it is embedded in.

So workqueue_wait is forbidden to search the queue for the batch of
running work items. Instead, use a generation number which is odd
while the thread is processing a batch of work and even when not.

There's still a small optimization available with the struct work
pointer to wait for: if we find the work item in one of the per-CPU
_pending_ queues, then after we wait for a batch of work to complete
on that CPU, we don't need to wait for work on any other CPUs.

PR kern/57574

XXX pullup-10
XXX pullup-9
XXX pullup-8
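The odd/even generation scheme described in rev 1.42 can be modeled as a small userland sketch. All names here (`queue_model`, `batch_done`, and so on) are invented for illustration and are not the kernel's; the real code synchronizes with a mutex and condvar.

```c
#include <stdbool.h>

/*
 * Userland model of the rev 1.42 technique: the worker bumps a
 * generation counter to an odd value when it starts a batch and to an
 * even value when the batch is done, so a waiter can detect completion
 * without ever touching a struct work that may already have been freed
 * or reused by its owner.
 */
struct queue_model {
	unsigned gen;	/* odd: worker is running a batch; even: idle */
};

static void
worker_start_batch(struct queue_model *q)
{
	q->gen++;	/* even -> odd */
}

static void
worker_finish_batch(struct queue_model *q)
{
	q->gen++;	/* odd -> even */
}

/*
 * A waiter that sampled generation g while it was odd knows its batch
 * has drained once the counter is even again or has advanced past g.
 */
static bool
batch_done(const struct queue_model *q, unsigned g)
{
	return (q->gen & 1) == 0 || q->gen != g;
}
```

A real waiter would sleep on a condvar under the queue mutex and re-check `batch_done` on each wakeup; the model only captures the counter discipline.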
 1.41  29-Oct-2022  riastradh branches: 1.41.2;
workqueue(9): Sprinkle dtrace probes.
 1.40  15-Aug-2022  riastradh workqueue(9): workqueue_wait and workqueue_destroy may sleep.

But might not, so assert sleepable up front.
 1.39  08-Sep-2020  riastradh workqueue: Lift unnecessary restriction on workqueue_wait.

Allow multiple concurrent waits at a time, and allow enqueueing work
at the same time (as long as it's not the work we're waiting for).
This way multiple users can use a shared global workqueue and safely
wait for individual work items concurrently, while the workqueue is
still in use for other items (e.g., wg(4) peers).

This has the side effect of taking away a diagnostic measure, but I
think allowing the diagnostic's false positives instead of rejecting
them is worth it. We could cheaply add it back with some false
negatives if it's important.
 1.38  01-Aug-2020  riastradh New workqueue flag WQ_FPU.

Arranges kthread_fpu_enter/exit around calls to the worker. Saves
cost over explicit calls to kthread_fpu_enter/exit in the worker by
only doing it once, since there's often a high cost to flushing the
icache and zeroing the fpu registers.

As proposed on tech-kern:
https://mail-index.netbsd.org/tech-kern/2020/06/20/msg026524.html
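The saving WQ_FPU buys amounts to hoisting an expensive enter/exit pair out of the per-item loop. A hypothetical userland model (a counter stands in for the real cost of kthread_fpu_enter/exit; all names are invented for illustration):

```c
/*
 * Model of what the WQ_FPU flag arranges: the costly
 * kthread_fpu_enter/exit bracket runs once per batch of work items
 * rather than once per item.
 */
static int fpu_enter_calls;	/* stand-in for the flush/zero cost */

static void
fpu_enter(void)
{
	fpu_enter_calls++;
}

static void
fpu_exit(void)
{
}

static void
do_work_item(void)
{
	/* worker function body */
}

/* Per-item bracketing: explicit enter/exit in the worker itself. */
static void
run_batch_per_item(int nitems)
{
	for (int i = 0; i < nitems; i++) {
		fpu_enter();
		do_work_item();
		fpu_exit();
	}
}

/* Per-batch bracketing: what WQ_FPU does around the worker calls. */
static void
run_batch_wq_fpu(int nitems)
{
	fpu_enter();
	for (int i = 0; i < nitems; i++)
		do_work_item();
	fpu_exit();
}
```

For a batch of 8 items the per-item variant pays the enter cost 8 times, the WQ_FPU variant once.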
 1.37  13-Jun-2018  ozaki-r branches: 1.37.6;
Don't wait on workqueue_wait if called from worker itself

Otherwise workqueue_wait never returns in such a case. This treatment
is the same as callout_halt.
 1.36  06-Feb-2018  ozaki-r branches: 1.36.2;
Check the length of a passed name to avoid silent truncation
 1.35  30-Jan-2018  ozaki-r Check if a queued work is tried to be enqueued again, which is not allowed
 1.34  28-Dec-2017  ozaki-r Add workqueue_wait that waits for a specific work to finish

The caller must ensure that no new work is enqueued before calling
workqueue_wait. Note that if the workqueue is WQ_PERCPU, the caller
can enqueue new work to a queue other than the waiting queue.

Discussed on tech-kern@
 1.33  07-Oct-2012  matt branches: 1.33.30;
If the workqueue is using a prio less than PRI_KERNEL, make sure KTHREAD_TS
is used when creating the kthread.
 1.32  23-Oct-2011  jym branches: 1.32.2; 1.32.12;
Turn a workqueue(9) name into an array in the struct workqueue, rather
than a const char *. This avoids keeping a reference to a string
owned by caller (string could be allocated on stack).
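The hazard fixed in rev 1.32 (keeping a pointer to a caller-owned, possibly stack-allocated string) and the truncation check added later in rev 1.36 can be sketched together in userland C. `wq_model`, `wq_set_name`, and the 16-byte size are invented for illustration, not the kernel's:

```c
#include <string.h>

/*
 * Sketch: the name is copied into an array embedded in the workqueue
 * structure, so no reference to caller-owned storage survives the
 * call, and the length is checked up front instead of truncating
 * silently.
 */
#define MODEL_WQ_NAMELEN 16	/* invented size */

struct wq_model {
	char wq_name[MODEL_WQ_NAMELEN];
};

/* Returns 0 on success, -1 if the name would not fit. */
static int
wq_set_name(struct wq_model *wq, const char *name)
{
	if (strlen(name) >= sizeof(wq->wq_name))
		return -1;		/* reject instead of truncating */
	strcpy(wq->wq_name, name);	/* safe: length checked above */
	return 0;
}
```

Once the call returns, the caller's string may go out of scope; the embedded copy remains valid for the workqueue's lifetime.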
 1.31  27-Jul-2011  uebayasi These don't need uvm/uvm_extern.h.
 1.30  11-Nov-2009  rmind workqueue_finiqueue: remove unused variable.
 1.29  21-Oct-2009  rmind Remove uarea swap-out functionality:

- Addresses the issue described in PR/38828.
- Some simplification in threading and sleepq subsystems.
- Eliminates pmap_collect() and, as a side note, allows pmap optimisations.
- Eliminates XS_CTL_DATA_ONSTACK in scsipi code.
- Avoids few scans on LWP list and thus potentially long holds of proc_lock.
- Cuts ~1.5k lines of code. Reduces amd64 kernel size by ~4k.
- Removes __SWAP_BROKEN cases.

Tested on x86, mips, acorn32 (thanks <mpumford>) and partly tested on
acorn26 (thanks to <bjh21>).

Discussed on <tech-kern>, reviewed by <ad>.
 1.28  16-Aug-2009  yamt struct lwp -> lwp_t for consistency
 1.27  03-Apr-2009  ad workqueue_finiqueue: our stack could be swapped out while enqueued to
a worker thread.
 1.26  15-Sep-2008  rmind branches: 1.26.2; 1.26.4; 1.26.8;
Replace intptr_t with uintptr_t in a few places.
 1.25  02-Jul-2008  matt branches: 1.25.2;
Switch from KASSERT to CTASSERT for those asserts testing sizes of types.
 1.24  27-Mar-2008  ad branches: 1.24.4; 1.24.6; 1.24.8;
Replace use of CACHE_LINE_SIZE in some obvious places.
 1.23  10-Mar-2008  martin Use cpu index instead of the machine dependend, not very expressive
cpuid when naming user-visible kernel entities.
 1.22  05-Dec-2007  ad branches: 1.22.8; 1.22.12;
Match the docs: MUTEX_DRIVER/SPIN are now only for porting code written
for Solaris.
 1.21  07-Aug-2007  yamt branches: 1.21.2; 1.21.8; 1.21.10;
don't bother to set thread's priority by ourselves,
as kthread_create does it for us now. from Andrew Doran.
 1.20  07-Aug-2007  yamt - don't assume the order of cpus in a CPU_INFO_FOREACH loop.
- remove unused structure members.
- simplify.
 1.19  05-Aug-2007  ad branches: 1.19.2;
Current convention is to name/number objects after ci->ci_cpuid, so do
that when creating the kthreads. We may want to change this.
 1.18  05-Aug-2007  rmind Improve per-CPU support for the workqueue(9):
- Make structures CPU-cache friendly, as suggested and explained
by Andrew Doran. A CACHE_LINE_SIZE definition is introduced.
- Use the current CPU if NULL is passed to workqueue_enqueue().
- Implemented an MI CPU index, which can be used as an array index.
Removed linked-list usage for work queues.

The roundup2() function avoids division, but works only with powers of 2.

Reviewed by: <ad>, <yamt>, <tech-kern>
 1.17  20-Jul-2007  yamt branches: 1.17.4;
hide internals of struct work.
 1.16  18-Jul-2007  ad workqueue_destroy: fix a use-after-free.
 1.15  13-Jul-2007  rmind branches: 1.15.2;
Make MP parts friendly with various ports (especially UP):
- #ifdef code parts, which uses CPU_INFO_FOREACH/CPU_INFO_ITERATOR
- use ci_cpuid only in MP case
- include machine/cpu.h
 1.14  12-Jul-2007  rmind Implementation of per-CPU work-queues support for workqueue(9) interface.
A WQ_PERCPU flag for the workqueue and an additional argument for
workqueue_enqueue() to assign a CPU may be used. Notes:
- For now, a list is used for workqueue_queue, which is non-optimal
and will be replaced with an array, where the index would be the CPU ID.
- The data structures should be changed to be cache-friendly.

Reviewed by: <yamt>, <tech-kern>
 1.13  09-Jul-2007  ad Merge some of the less invasive changes from the vmlocking branch:

- kthread, callout, devsw API changes
- select()/poll() improvements
- miscellaneous MT safety improvements
 1.12  27-Feb-2007  yamt branches: 1.12.2; 1.12.4;
typedef pri_t and use it instead of int and u_char.
 1.11  11-Feb-2007  yamt branches: 1.11.2;
workqueue_exit: update a comment.
 1.10  11-Feb-2007  yamt use cv_signal rather than cv_broadcast where appropriate.
 1.9  09-Feb-2007  ad Merge newlock2 to head.
 1.8  21-Dec-2006  yamt merge yamt-splraiseipl branch.

- finish implementing splraiseipl (and makeiplcookie).
http://mail-index.NetBSD.org/tech-kern/2006/07/01/0000.html
- complete workqueue(9) and fix its ipl problem, which is reported
to cause audio skipping.
- fix netbt (at least compilation problems) for some ports.
- fix PR/33218.
 1.7  01-Nov-2006  yamt remove some __unused from function parameters.
 1.6  12-Oct-2006  christos - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
 1.5  16-Sep-2006  yamt branches: 1.5.2;
add workqueue_destroy().
 1.4  16-Sep-2006  yamt workqueue_create: use kmem_alloc rather than malloc.
 1.3  02-May-2006  rpaulo branches: 1.3.6; 1.3.10;
Use for (;;) in a forever loop as per KNF.
 1.2  11-Dec-2005  christos branches: 1.2.4; 1.2.6; 1.2.8; 1.2.10; 1.2.12;
merge ktrace-lwp.
 1.1  29-Oct-2005  yamt branches: 1.1.2; 1.1.4;
add a simple "do it in thread context" framework.
 1.1.4.2  10-Nov-2005  skrll Sync with HEAD. Here we go again...
 1.1.4.1  29-Oct-2005  skrll file subr_workqueue.c was added on branch ktrace-lwp on 2005-11-10 14:09:45 +0000
 1.1.2.2  02-Nov-2005  yamt sync with head.
 1.1.2.1  29-Oct-2005  yamt file subr_workqueue.c was added on branch yamt-vop on 2005-11-02 11:58:11 +0000
 1.2.12.1  24-May-2006  tron Merge 2006-05-24 NetBSD-current into the "peter-altq" branch.
 1.2.10.1  11-May-2006  elad sync with head
 1.2.8.1  24-May-2006  yamt sync with head.
 1.2.6.1  01-Jun-2006  kardel Sync with head.
 1.2.4.1  09-Sep-2006  rpaulo sync with head
 1.3.10.5  03-Feb-2007  ad - Require that cv_signal/cv_broadcast be called with the interlock held.
- Provide 'async' versions that don't need the interlock.
 1.3.10.4  25-Jan-2007  ad Eliminate some uses of mtsleep().
 1.3.10.3  12-Jan-2007  ad Sync with head.
 1.3.10.2  18-Nov-2006  ad Sync with head.
 1.3.10.1  11-Sep-2006  ad Use mutexes to protect workqueues.
 1.3.6.7  17-Mar-2008  yamt sync with head.
 1.3.6.6  07-Dec-2007  yamt sync with head
 1.3.6.5  03-Sep-2007  yamt sync with head.
 1.3.6.4  26-Feb-2007  yamt sync with head.
 1.3.6.3  30-Dec-2006  yamt sync with head.
 1.3.6.2  21-Jun-2006  yamt sync with head.
 1.3.6.1  02-May-2006  yamt file subr_workqueue.c was added on branch yamt-lazymbuf on 2006-06-21 15:09:38 +0000
 1.5.2.4  10-Dec-2006  yamt sync with head.
 1.5.2.3  22-Oct-2006  yamt sync with head
 1.5.2.2  18-Sep-2006  yamt don't bother to wrap splraiseipl and IPL_xxx with #if notyet.
 1.5.2.1  18-Sep-2006  yamt adapt to new api. (#if 0'ed code)
 1.11.2.1  27-Feb-2007  yamt - sync with head.
- move sched_changepri back to kern_synch.c as it doesn't know PPQ anymore.
 1.12.4.1  11-Jul-2007  mjf Sync with head.
 1.12.2.7  20-Aug-2007  ad Sync with HEAD.
 1.12.2.6  15-Jul-2007  ad Sync with head.
 1.12.2.5  01-Jul-2007  ad s/MUTEX_SPIN/MUTEX_DRIVER/ as it's called with IPL_NONE now.
 1.12.2.4  13-May-2007  ad - Pass the error number and residual count to biodone(), and let it handle
setting error indicators. Prepare to eliminate B_ERROR.
- Add a flag argument to brelse() to be set into the buf's flags, instead
of doing it directly. Typically used to set B_INVAL.
- Add a "struct cpu_info *" argument to kthread_create(), to be used to
create bound threads. Change "bool mpsafe" to "int flags".
- Allow exit of LWPs in the IDL state when (l != curlwp).
- More locking fixes & conversion to the new API.
 1.12.2.3  10-Apr-2007  ad Allow workqueues to be marked as MP safe.
 1.12.2.2  10-Apr-2007  ad - Add two new arguments to kthread_create1: pri_t pri, bool mpsafe.
- Fork kthreads off proc0 as new LWPs, not new processes.
 1.12.2.1  21-Mar-2007  ad Eliminate a needless test.
 1.15.2.1  15-Aug-2007  skrll Sync with HEAD.
 1.17.4.2  09-Dec-2007  jmcneill Sync with HEAD.
 1.17.4.1  09-Aug-2007  jmcneill Sync with HEAD.
 1.19.2.2  05-Aug-2007  ad Current convention is to name/number objects after ci->ci_cpuid, so do
that when creating the kthreads. We may want to change this.
 1.19.2.1  05-Aug-2007  ad file subr_workqueue.c was added on branch matt-mips64 on 2007-08-05 13:47:26 +0000
 1.21.10.1  08-Dec-2007  ad Sync with head.
 1.21.8.1  08-Dec-2007  mjf Sync with HEAD.
 1.21.2.2  23-Mar-2008  matt sync with HEAD
 1.21.2.1  09-Jan-2008  matt sync with HEAD
 1.22.12.3  28-Sep-2008  mjf Sync with HEAD.
 1.22.12.2  02-Jul-2008  mjf Sync with HEAD.
 1.22.12.1  03-Apr-2008  mjf Sync with HEAD.
 1.22.8.1  24-Mar-2008  keiichi sync with head.
 1.24.8.1  03-Jul-2008  simonb Sync with head.
 1.24.6.2  24-Sep-2008  wrstuden Merge in changes between wrstuden-revivesa-base-2 and
wrstuden-revivesa-base-3.
 1.24.6.1  18-Sep-2008  wrstuden Sync with wrstuden-revivesa-base-2.
 1.24.4.3  11-Mar-2010  yamt sync with head
 1.24.4.2  19-Aug-2009  yamt sync with head.
 1.24.4.1  04-May-2009  yamt sync with head.
 1.25.2.1  19-Oct-2008  haad Sync with HEAD.
 1.26.8.1  13-May-2009  jym Sync with HEAD.

Commit is split, to avoid a "too many arguments" protocol error.
 1.26.4.1  04-Apr-2009  snj Pull up following revision(s) (requested by ad in ticket #651):
sys/kern/subr_workqueue.c: revision 1.27
workqueue_finiqueue: our stack could be swapped out while enqueued to
a worker thread.
 1.26.2.1  28-Apr-2009  skrll Sync with HEAD.
 1.32.12.1  20-Nov-2012  tls Resync to 2012-11-19 00:00:00 UTC
 1.32.2.1  30-Oct-2012  yamt sync with head
 1.33.30.3  14-Jun-2018  martin Pull up following revision(s) (requested by ozaki-r in ticket #879):

sys/kern/subr_workqueue.c: revision 1.37

Don't wait on workqueue_wait if called from worker itself

Otherwise workqueue_wait never returns in such a case. This treatment
is the same as callout_halt.
 1.33.30.2  05-Feb-2018  martin Pull up following revision(s) (requested by ozaki-r in ticket #528):
sys/net/agr/if_agr.c: revision 1.42
sys/netinet6/nd6_rtr.c: revision 1.137
sys/netinet6/nd6_rtr.c: revision 1.138
sys/net/agr/if_agr.c: revision 1.46
sys/net/route.c: revision 1.206
sys/net/if.c: revision 1.419
sys/net/agr/if_agrether.c: revision 1.10
sys/netinet6/nd6.c: revision 1.241
sys/netinet6/nd6.c: revision 1.242
sys/netinet6/nd6.c: revision 1.243
sys/netinet6/nd6.c: revision 1.244
sys/netinet6/nd6.c: revision 1.245
sys/netipsec/ipsec_input.c: revision 1.52
sys/netipsec/ipsec_input.c: revision 1.53
sys/net/agr/if_agrsubr.h: revision 1.5
sys/kern/subr_workqueue.c: revision 1.35
sys/netipsec/ipsec.c: revision 1.124
sys/net/agr/if_agrsubr.c: revision 1.11
sys/net/agr/if_agrsubr.c: revision 1.12
Simplify; share agr_vlan_add and agr_vlan_del (NFCI)
Fix late NULL-checking (CID 1427782: Null pointer dereferences (REVERSE_INULL))
KNF: replace soft tabs with hard tabs
Add missing NULL-checking for m_pullup (CID 1427770: Null pointer dereferences (NULL_RETURNS))
Add locking.
Revert "Get rid of unnecessary splsoftnet" (v1.133)
It's not always true that softnet_lock is held these places.
See PR kern/52947.
Get rid of unnecessary splsoftnet (redo)
Unless NET_MPSAFE, splsoftnet is still needed for rt_* functions.
Use existing fill_[pd]rlist() functions to calculate size of buffer to
allocate, rather than relying on an arbitrary length passed in from
userland.
Allow copyout() of partial results if the user buffer is too small, to
be consistent with the way sysctl(3) is documented.
Garbage-collect now-unused third parameter in the fill_[pd]rlist()
functions.
As discussed on IRC.
OK kamil@ and christos@
XXX Needs pull-up to netbsd-8 branch.
Simplify, from christos@
More simplification, this time from ozaki-r@
No need to break after return.
One more from christos@
No need to initialize fill_func
more cleanup (don't allow oldlenp == NULL)
Destroy ifq_lock at the end of if_detach
It still can be used in if_detach.
Prevent rt_free_global.wk from being enqueued to workqueue doubly
Check if an already-queued work is being enqueued again, which is not allowed
 1.33.30.1  16-Jan-2018  martin Pull up following revision(s) (requested by ozaki-r in ticket #497):
tests/rump/rumpkern/Makefile: revision 1.16
tests/rump/kernspace/Makefile: revision 1.6
tests/rump/kernspace/workqueue.c: revision 1.1
tests/rump/kernspace/workqueue.c: revision 1.2
tests/rump/kernspace/workqueue.c: revision 1.3
tests/rump/kernspace/workqueue.c: revision 1.4
tests/rump/kernspace/workqueue.c: revision 1.5
tests/rump/kernspace/workqueue.c: revision 1.6
tests/rump/rumpkern/t_workqueue.c: revision 1.1
sys/sys/workqueue.h: revision 1.10
tests/rump/rumpkern/t_workqueue.c: revision 1.2
tests/rump/kernspace/kernspace.h: revision 1.5
tests/rump/kernspace/kernspace.h: revision 1.6
sys/net/if_bridge.c: revision 1.147
distrib/sets/lists/debug/mi: revision 1.225
sys/kern/subr_workqueue.c: revision 1.34
share/man/man9/workqueue.9: revision 1.12
sys/net/if_spppsubr.c: revision 1.178
distrib/sets/lists/tests/mi: revision 1.763
Add simple test for workqueue(9)
Add declaration. build fix
sorry, I forgot to commit this file.
Tweak use of cv_timedwait
- Handle its return value
- Specify more appropriate time-out periods (2 ticks is too short)
Fix a race condition on taking the mutex
The workqueue worker can take the mutex before the tester tries to take it after
calling workqueue_enqueue. If that happens, the worker calls cv_broadcast before
the tester calls cv_timedwait and the tester will wait until the cv times out.
Take the mutex before calling workqueue_enqueue so that the tester surely calls
cv_timedwait before the worker calls cv_broadcast.
The fix stabilizes the test, t_workqueue/workqueue1.
Add workqueue_wait that waits for a specific work to finish
The caller must ensure that no new work is enqueued before calling
workqueue_wait. Note that if the workqueue is WQ_PERCPU, the caller
can enqueue new work to a queue other than the waiting queue.
Discussed on tech-kern@
Ensure the timer isn't running by using workqueue_wait
Functionalize some routines to add new tests easily (NFC)
Add a test case for workqueue_wait
Fix build
 1.36.2.1  25-Jun-2018  pgoyette Sync with HEAD
 1.37.6.1  18-Apr-2024  martin Pull up following revision(s) (requested by riastradh in ticket #1830):

sys/kern/subr_workqueue.c: revision 1.40
sys/kern/subr_workqueue.c: revision 1.41
sys/kern/subr_workqueue.c: revision 1.42
sys/kern/subr_workqueue.c: revision 1.43
sys/kern/subr_workqueue.c: revision 1.44
sys/kern/subr_workqueue.c: revision 1.45
sys/kern/subr_workqueue.c: revision 1.46
tests/rump/kernspace/workqueue.c: revision 1.7
sys/kern/subr_workqueue.c: revision 1.47
tests/rump/kernspace/workqueue.c: revision 1.8
tests/rump/kernspace/workqueue.c: revision 1.9
tests/rump/rumpkern/t_workqueue.c: revision 1.3
tests/rump/rumpkern/t_workqueue.c: revision 1.4
tests/rump/kernspace/kernspace.h: revision 1.9
tests/rump/rumpkern/Makefile: revision 1.20
sys/kern/subr_workqueue.c: revision 1.39
share/man/man9/workqueue.9: revision 1.15
(all via patch)

workqueue: Lift unnecessary restriction on workqueue_wait.

Allow multiple concurrent waits at a time, and allow enqueueing work
at the same time (as long as it's not the work we're waiting for).

This way multiple users can use a shared global workqueue and safely
wait for individual work items concurrently, while the workqueue is
still in use for other items (e.g., wg(4) peers).

This has the side effect of taking away a diagnostic measure, but I
think allowing the diagnostic's false positives instead of rejecting
them is worth it. We could cheaply add it back with some false
negatives if it's important.
workqueue(9): workqueue_wait and workqueue_destroy may sleep.

But might not, so assert sleepable up front.
workqueue(9): Sprinkle dtrace probes.
tests/rump/rumpkern: Use PROGDPLIBS, not explicit -L/-l.

This way we relink the t_* test programs whenever changes under
tests/rump/kernspace change libkernspace.a.

workqueue(9) tests: Nix trailing whitespace.

workqueue(9) tests: Destroy struct work immediately on entry.

workqueue(9) tests: Add test for PR kern/57574.

workqueue(9): Avoid touching running work items in workqueue_wait.

As soon as the workqueue function has been called, it is forbidden to
touch the struct work passed to it -- the function might free or
reuse the data structure it is embedded in.

So workqueue_wait is forbidden to search the queue for the batch of
running work items. Instead, use a generation number which is odd
while the thread is processing a batch of work and even when not.
There's still a small optimization available with the struct work
pointer to wait for: if we find the work item in one of the per-CPU
_pending_ queues, then after we wait for a batch of work to complete
on that CPU, we don't need to wait for work on any other CPUs.
PR kern/57574

workqueue(9): Sprinkle dtrace probes for workqueue_wait edge cases.

Let's make it easy to find out whether these are hit.

workqueue(9): Stop violating queue(3) internals.

workqueue(9): Avoid unnecessary mutex_exit/enter cycle each loop.

workqueue(9): Sort includes.
No functional change intended.

workqueue(9): Factor out wq->wq_flags & WQ_FPU in workqueue_worker.
No functional change intended. Makes it clearer that s is
initialized when used.
 1.41.2.2  11-Sep-2024  martin Pull up following revision(s) (requested by rin in ticket #821):

sys/arch/x86/x86/intr.c: revision 1.169
sys/kern/kern_softint.c: revision 1.76
sys/kern/subr_workqueue.c: revision 1.48
sys/kern/kern_idle.c: revision 1.36
sys/kern/subr_xcall.c: revision 1.38

check that l_nopreempt (preemption count) doesn't change after callbacks

check that the idle loop, soft interrupt handlers, workqueue, and xcall
callbacks do not modify the preemption count, in most cases, knowing it
should be 0 currently.

this work was originally done by simonb. cleaned up slightly and some
minor enhancements made by myself, with discussion with riastradh@.
other callback call sites could check this as well (such as MD interrupt
handlers, or really anything that includes a callback registration). x86
version to be committed separately.

apply some more diagnostic checks for x86 interrupts
convert intr_biglock_wrapper() into a slightly less complete
intr_wrapper(), and move the kernel lock/unlock points into
the new intr_biglock_wrapper().
add curlwp->l_nopreempt checking for interrupt handlers,
including the dtrace wrapper.

XXX: has to copy the i8254_clockintr hack.

tested for a few months by myself, and recently by rin@ on both
current and netbsd-10. thanks!
 1.41.2.1  04-Sep-2023  martin Pull up following revision(s) (requested by riastradh in ticket #342):

sys/kern/subr_workqueue.c: revision 1.42
sys/kern/subr_workqueue.c: revision 1.43
sys/kern/subr_workqueue.c: revision 1.44
sys/kern/subr_workqueue.c: revision 1.45
sys/kern/subr_workqueue.c: revision 1.46
tests/rump/kernspace/workqueue.c: revision 1.7
sys/kern/subr_workqueue.c: revision 1.47
tests/rump/kernspace/workqueue.c: revision 1.8
tests/rump/kernspace/workqueue.c: revision 1.9
tests/rump/rumpkern/t_workqueue.c: revision 1.3
tests/rump/rumpkern/t_workqueue.c: revision 1.4
tests/rump/kernspace/kernspace.h: revision 1.9
tests/rump/rumpkern/Makefile: revision 1.20

tests/rump/rumpkern: Use PROGDPLIBS, not explicit -L/-l.

This way we relink the t_* test programs whenever changes under
tests/rump/kernspace change libkernspace.a.

workqueue(9) tests: Nix trailing whitespace.

workqueue(9) tests: Destroy struct work immediately on entry.

workqueue(9) tests: Add test for PR kern/57574.

workqueue(9): Avoid touching running work items in workqueue_wait.

As soon as the workqueue function has been called, it is forbidden to
touch the struct work passed to it -- the function might free or
reuse the data structure it is embedded in.

So workqueue_wait is forbidden to search the queue for the batch of
running work items. Instead, use a generation number which is odd
while the thread is processing a batch of work and even when not.

There's still a small optimization available with the struct work
pointer to wait for: if we find the work item in one of the per-CPU
_pending_ queues, then after we wait for a batch of work to complete
on that CPU, we don't need to wait for work on any other CPUs.
PR kern/57574

workqueue(9): Sprinkle dtrace probes for workqueue_wait edge cases.

Let's make it easy to find out whether these are hit.

workqueue(9): Stop violating queue(3) internals.

workqueue(9): Avoid unnecessary mutex_exit/enter cycle each loop.

workqueue(9): Sort includes.
No functional change intended.

workqueue(9): Factor out wq->wq_flags & WQ_FPU in workqueue_worker.
No functional change intended. Makes it clearer that s is
initialized when used.
