History log of /src/sys/kern/kern_softint.c |
Revision | | Date | Author | Comments |
1.76 |
| 01-Mar-2024 |
mrg | check that l_nopreempt (preemption count) doesn't change after callbacks
check that the idle loop, soft interrupt handlers, workqueue, and xcall callbacks do not modify the preemption count; in most cases it is known that the count should currently be 0.
this work was originally done by simonb. It was cleaned up slightly, with some minor enhancements by myself, and discussed with riastradh@.
other callback call sites could check this as well (such as MD interrupt handlers, or really anything that includes a callback registration). The x86 version is to be committed separately.
|
1.75 |
| 04-Aug-2023 |
riastradh | Revert "softint(9): Sprinkle KASSERT(!cold)."
Temporary workaround for PR kern/57563 -- to be fixed properly after analysis.
|
1.74 |
| 04-Aug-2023 |
riastradh | softint(9): Sprinkle KASSERT(!cold).
Softints are forbidden to run while cold. So let's make sure nobody even tries it -- if they do, they might be delayed indefinitely, which is going to be much harder to diagnose.
|
1.73 |
| 09-Apr-2023 |
riastradh | kern: KASSERT(A && B) -> KASSERT(A); KASSERT(B)
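The point of the split is diagnosability: a compound assertion cannot say which conjunct failed. A minimal sketch of the pattern, using plain assert() in place of the kernel's KASSERT() and hypothetical flag arguments:

```c
#include <assert.h>

/* Illustrative only: assert() standing in for KASSERT(), with made-up
 * "pending"/"active" conditions.  Splitting the compound assertion
 * means a failure pinpoints which condition was actually violated. */
static int
check_state(int pending, int active)
{
	/* Before: assert(pending && !active); -- on failure, the panic
	 * message cannot tell you which of the two conditions failed. */
	assert(pending);	/* after: each condition asserted separately */
	assert(!active);
	return 1;
}
```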
|
1.72 |
| 28-Oct-2022 |
riastradh | softint(9): Sprinkle dtrace probes.
|
1.71 |
| 03-Sep-2022 |
thorpej | Garbage-collect the remaining vestiges of netisr.
|
1.70 |
| 30-Mar-2022 |
riastradh | kern: Assert softint does not net acquire kernel locks.
This redoes the previous change, where I mistakenly used the CPU's biglock count, which is not necessarily stable -- the softint lwp may sleep on a mutex, and the lwp it interrupted may start up again and release the kernel lock, so by the time the softint lwp wakes up again and the softint function returns, the CPU may not be holding any kernel locks. But the softint lwp should never hold kernel locks except when it's in a (usually non-MPSAFE) softint function.
Same with callout.
|
1.69 |
| 30-Mar-2022 |
riastradh | Revert "kern: Sprinkle biglock-slippage assertions."
Got the diagnostic information I needed from this, and it's holding up releng tests of everything else, so let's back this out until I need more diagnostics or track down the original source of the problem.
|
1.68 |
| 30-Mar-2022 |
riastradh | kern: Sprinkle biglock-slippage assertions.
We seem to have a poltergeist that occasionally messes with the biglock depth, but it's very hard to reproduce and only manifests as some other CPU spinning out on the kernel lock which is no good for diagnostics.
|
1.67 |
| 05-Dec-2021 |
msaitoh | s/futher/further/ in comment.
|
1.66 |
| 17-May-2020 |
ad | softint_disestablish(): use a high priority xcall to determine that the handler is no longer running.
|
1.65 |
| 30-Apr-2020 |
skrll | Trailing whitespace
|
1.64 |
| 27-Mar-2020 |
ad | Comments
|
1.63 |
| 26-Mar-2020 |
ad | softint_overlay() (slow case) gains ~nothing but creates potential headaches. In the interests of simplicity remove it and always use the kthreads.
|
1.62 |
| 08-Mar-2020 |
ad | Kill off kernel_lock_plug_leak(), and go back to dropping kernel_lock in exit1(), since there seems little hope of finding the leaking code any time soon. Can still be caught with LOCKDEBUG.
|
1.61 |
| 17-Feb-2020 |
ad | softint_dispatch(): Temporarily call kernel_lock_plug_leak() since there is incontinent code somewhere.
|
1.60 |
| 15-Feb-2020 |
ad | - Move the LW_RUNNING flag back into l_pflag: updating l_flag without lock in softint_dispatch() is risky. May help with the "softint screwup" panic.
- Correct the memory barriers around zombies switching into oblivion.
|
1.59 |
| 26-Jan-2020 |
ad | softint_dispatch(): replace the KASSERT for LW_RUNNING with a big printf() plus panic() that dumps lots of info.
|
1.58 |
| 25-Jan-2020 |
ad | softint_execute(): don't hang onto the kernel_lock hold longer than needed.
|
1.57 |
| 08-Jan-2020 |
ad | Hopefully fix some problems seen with MP support on non-x86, in particular where curcpu() is defined as curlwp->l_cpu:
- mi_switch(): undo the ~2007ish optimisation to unlock curlwp before calling cpu_switchto(). It's not safe to let other actors mess with the LWP (in particular l->l_cpu) while it's still context switching. This removes l->l_ctxswtch.
- Move the LP_RUNNING flag into l->l_flag and rename to LW_RUNNING since it's now covered by the LWP's lock.
- Ditch lwp_exit_switchaway() and just call mi_switch() instead. Everything is in cache anyway so it wasn't buying much by trying to avoid saving old state. This means cpu_switchto() will never be called with prevlwp == NULL.
- Remove some KERNEL_LOCK handling which hasn't been needed for years.
|
1.56 |
| 16-Dec-2019 |
ad | branches: 1.56.2; - Extend the per-CPU counters matt@ did to include all of the hot counters in UVM, excluding uvmexp.free, which needs special treatment and will be done with a separate commit. Cuts system time for a build by 20-25% on a 48 CPU machine w/DIAGNOSTIC.
- Avoid 64-bit integer divide on every fault (for rnd_add_uint32).
|
1.55 |
| 06-Dec-2019 |
ad | Make it possible to call mi_switch() and immediately switch to another CPU. This seems to take about 3us on my Intel system. Two changes required:
- Have the caller to mi_switch() be responsible for calling spc_lock().
- Avoid using l->l_cpu in mi_switch().
While here:
- Add a couple of calls to membar_enter().
- Have the idle LWP set itself to LSIDL, to match softint_thread().
- Remove unused return value from mi_switch().
|
1.54 |
| 06-Dec-2019 |
ad | softint_trigger (slow case): set RESCHED_IDLE too just to be consistent. No functional change.
|
1.53 |
| 03-Dec-2019 |
riastradh | Rip out pserialize(9) logic now that the RCU patent has expired.
pserialize_perform() is now basically just xc_barrier(XC_HIGHPRI). No more tentacles throughout the scheduler. Simplify the psz read count for diagnostic assertions by putting it unconditionally into cpu_info.
From rmind@, tidied up by me.
|
1.52 |
| 01-Dec-2019 |
ad | Fix false sharing problems with cpu_info. Identified with tprof(8). This was a very nice win in my tests on a 48 CPU box.
- Reorganise cpu_data slightly according to usage.
- Put cpu_onproc into struct cpu_info alongside ci_curlwp (now ci_onproc).
- On x86, put some items in their own cache lines according to usage, like the IPI bitmask and ci_want_resched.
|
1.51 |
| 25-Nov-2019 |
ad | port-sparc/54718 (sparc install hangs since recent scheduler changes)
|
1.50 |
| 23-Nov-2019 |
ad | Minor scheduler cleanup:
- Adapt to cpu_need_resched() changes. Avoid lost & duplicate IPIs and ASTs. sched_resched_cpu() and sched_resched_lwp() contain the logic for this.
- Changes for LSIDL to make the locking scheme match the intended design.
- Reduce lock contention and false sharing further.
- Numerous small bugfixes, including some corrections for SCHED_FIFO/RT.
- Use setrunnable() in more places, and merge cut & pasted code.
|
1.49 |
| 21-Nov-2019 |
ad | calcru: ignore running softints, unless softint_timing is on. Fixes crazy times reported for proc0.
|
1.48 |
| 06-Oct-2019 |
uwe | xc_barrier - convenience function to xc_broadcast() a nop.
Make the intent more clear and also avoid a bunch of (xcfunc_t)nullop casts that gcc 8 -Wcast-function-type is not happy about.
|
1.47 |
| 17-May-2019 |
ozaki-r | branches: 1.47.2; Implement an aggressive psref leak detector
It is yet another psref leak detector, one that can tell where a leak occurs, while the simpler version already committed only reports that a leak occurred.
Investigating psref leaks is hard because, once a leak occurs, the percpu list of psrefs that tracks references can be corrupted. A reference to a tracked object is memorized in the list via an intermediate object (struct psref) that is normally allocated on a thread's stack. Thus, the intermediate object can be overwritten on a leak, resulting in corruption of the list.
The tracker makes a shadow entry for each intermediate object and stores some hints in it (currently the caller address of psref_acquire). We can detect a leak by checking the entries at certain points where all references should have been released, such as the return point of syscalls and the end of each softint handler.
The feature is expensive and enabled only if the kernel is built with PSREF_DEBUG.
Proposed on tech-kern
|
1.46 |
| 19-Apr-2019 |
ozaki-r | Implement a simple psref leak detector
It detects leaks by counting the number of psrefs held by an LWP and checking that the count is zero at the end of syscalls and softint handlers. For the counter, an unused field of struct lwp is reused.
The detector runs only if DIAGNOSTIC is turned on.
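The counting idea above can be sketched in a few lines. This is a hedged illustration, not the NetBSD code: the struct and function names here are invented, with a plain int standing in for the reused struct lwp field.

```c
/* Hypothetical sketch of the per-thread psref counter: each acquire
 * increments, each release decrements, and at a checkpoint (end of a
 * syscall or softint handler) a nonzero count indicates a leak. */
struct lwp_sketch {
	int l_psrefs;	/* count of psrefs held by this thread */
};

static void
psref_acquire_sketch(struct lwp_sketch *l)
{
	l->l_psrefs++;	/* reference taken */
}

static void
psref_release_sketch(struct lwp_sketch *l)
{
	l->l_psrefs--;	/* reference dropped */
}

/* Returns nonzero when a leak is detected at the checkpoint. */
static int
psref_leak_check_sketch(const struct lwp_sketch *l)
{
	return l->l_psrefs != 0;
}
```

The trade-off, as the entry notes, is cost: the counter is cheap, while the "aggressive" detector above it records per-reference hints and is enabled only under PSREF_DEBUG.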
|
1.45 |
| 28-Dec-2017 |
msaitoh | branches: 1.45.4; Prevent panic or hangup in softint_disestablish(), pserialize_perform() or psref_target_destroy() while mp_online == false.
See http://mail-index.netbsd.org/tech-kern/2017/12/25/msg022829.html
|
1.44 |
| 22-Nov-2017 |
msaitoh | Increase the size of softint's data to prevent a panic on big machines. Nowadays, some device drivers and pseudo interfaces allocate a lot of softints. The resource size for softints is static, and the kernel panics when it exceeds the limit. It could be resized dynamically; until dynamic resizing is implemented, increase softint_bytes from 8192 to 32768.
|
1.43 |
| 04-Jul-2016 |
knakahara | branches: 1.43.10; revert kern_softint.c:r1.42 (which was an incorrect fix)
gif(4) violated the softint(9) contract. That was fixed by the previous two commits. see: https://mail-index.netbsd.org/tech-kern/2016/01/12/msg019993.html
|
1.42 |
| 24-Dec-2015 |
knakahara | fix the following softint parallel operation problem.
(0) softint handler "handler A" is established
(1) CPU#X does softint_schedule() for "handler A"
    - the softhand_t has the SOFTINT_PENDING flag set
    - the softhand_t does NOT have the SOFTINT_ACTIVE flag set yet
(2) CPU#X begins other H/W interrupt processing
(3) CPU#Y does softint_disestablish() for "handler A"
    - it waits until the softhand_t's SOFTINT_ACTIVE is clear on all CPUs
    - the softhand_t has SOFTINT_PENDING but not SOFTINT_ACTIVE set, so CPU#Y does not wait
    - the function of "handler A" is unset
(4) CPU#X does softint_execute()
    - the function of "handler A" is already cleared, so panic
|
1.41 |
| 25-May-2014 |
rmind | branches: 1.41.2; 1.41.4; 1.41.6; softint: implement softint_schedule_cpu() to trigger software interrupts on the remote CPUs and add SOFTINT_RCPU flag to indicate whether this is going to be used; implemented using asynchronous IPIs.
|
1.40 |
| 07-Sep-2013 |
matt | branches: 1.40.2; Change two KASSERTs to KASSERTMSG
|
1.39 |
| 07-Jan-2013 |
rmind | branches: 1.39.2; - softint_dispatch: perform pserialize(9) switchpoint when softintr processing finishes (without blocking). Problem reported by hannken@, thanks!
- pserialize_read_enter: use splsoftserial(), not splsoftclock().
- pserialize_perform: add xcall(9) barrier as interrupts may be coalesced.
|
1.38 |
| 27-Sep-2011 |
jym | branches: 1.38.2; 1.38.8; 1.38.12; 1.38.14; Modify *ASSERTMSG() so they are now used as variadic macros. The main goal is to provide routines that do as KASSERT(9) says: append a message to the panic format string when the assertion triggers, with optional arguments.
Fix call sites to reflect the new definition.
Discussed on tech-kern@. See http://mail-index.netbsd.org/tech-kern/2011/09/07/msg011427.html
|
1.37 |
| 31-Jul-2011 |
uebayasi | Revert previous; s/kmem(9)/uvm_km(9)/ and comment why done so. Per request from rmind@.
|
1.36 |
| 30-Jul-2011 |
uebayasi | Use kmem(9) to allocate per-cpu softint context. No functional changes.
|
1.35 |
| 24-Apr-2011 |
rmind | - Replace a few malloc(9) uses with kmem(9).
- Rename buf_malloc() to buf_alloc(), fix comments.
- Remove some unnecessary inclusions.
|
1.34 |
| 11-Apr-2011 |
rmind | softint_execute: add an assert that could catch locking bugs in softint handlers.
|
1.33 |
| 20-Dec-2010 |
matt | branches: 1.33.2; Move counting of faults, traps, intrs, soft[intr]s, syscalls, and nswtch from uvmexp to per-cpu cpu_data and move them to 64bits. Remove unneeded includes of <uvm/uvm_extern.h> and/or <uvm/uvm.h>.
|
1.32 |
| 11-Dec-2010 |
matt | Make sure all for loops use { }
|
1.31 |
| 09-Jan-2010 |
rmind | branches: 1.31.4; softint_overlay: disable kernel preemption before curlwp->l_cpu use.
|
1.30 |
| 08-Jan-2010 |
rmind | softint_execute: release/re-acquire kernel-lock depending on the SOFTINT_MPSAFE flag. Keeping it held for MP-safe cases breaks the lock order assumptions. Per discussion with <martin>.
|
1.29 |
| 19-Jul-2009 |
yamt | set LP_RUNNING when starting lwp0 and idle lwps. add assertions.
|
1.28 |
| 18-May-2009 |
bouyer | Back out rev 1.27 now that MD implementations of spl*() have been fixed to be a memory barrier.
|
1.27 |
| 05-May-2009 |
bouyer | Declare sh_flags volatile. Without it, on ports where splhigh() is inline, the compiler will optimise away the second SOFTINT_PENDING test in softint_schedule(). A disassembly of softint_schedule() with and without the volatile sh_flags confirms this on sparc. Because of this there is a race that could lead to the softhand_t being enqueued twice on si_q, leaving a corrupted queue and some handler marked SOFTINT_PENDING but never called.
Should fix PR kern/38637
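The double-check pattern at issue can be sketched as below. This is a hedged illustration, not the kernel source: only the sh_flags field and SOFTINT_PENDING flag names follow the commit message, and the splhigh() call is reduced to a comment.

```c
#include <assert.h>

#define SOFTINT_PENDING 0x01

/* Without "volatile", a compiler that inlines splhigh() may assume
 * sh_flags is unchanged across it and delete the second test below,
 * allowing the handler to be enqueued twice. */
typedef struct {
	volatile int sh_flags;	/* volatile forces a real re-read */
} softhand_sketch_t;

static int
softint_schedule_sketch(softhand_sketch_t *sh)
{
	if (sh->sh_flags & SOFTINT_PENDING)
		return 0;		/* fast path: already queued */
	/* ...an (inlined) splhigh() would be taken here... */
	if (sh->sh_flags & SOFTINT_PENDING)
		return 0;		/* re-check under splhigh */
	sh->sh_flags |= SOFTINT_PENDING;	/* enqueue exactly once */
	return 1;
}
```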
|
1.26 |
| 06-Apr-2009 |
dyoung | Fix spelling.
|
1.25 |
| 01-Jan-2009 |
ad | branches: 1.25.2; softint_disestablish: don't pass softint_lock to kpause, it's not held.
|
1.24 |
| 13-Dec-2008 |
ad | softint_disestablish: the soft interrupt could still be running on a CPU somewhere in the system. If it is, wait for it to complete before tearing it down. The caller commits to not trigger the interrupt again once disestablish is set in motion.
|
1.23 |
| 14-Oct-2008 |
pooka | branches: 1.23.2; 1.23.4; Give the maximum level of network soft interrupts a symbolic constant (which happened to get bumped from 32 to 33 (AF_MAX) now).
|
1.22 |
| 31-May-2008 |
ad | branches: 1.22.4; PR kern/38812 race between lwp_exit_switchaway and exit1/coredump
Move the LWP RUNNING and TIMEINTR flags into the thread-private flag word.
|
1.21 |
| 27-May-2008 |
ad | Move lwp_exit_switchaway() into kern_synch.c. Instead of always switching to the idle loop, pick a new LWP from the run queue.
|
1.20 |
| 28-Apr-2008 |
ad | branches: 1.20.2; Don't count many items as EVCNT_TYPE_INTR because they clutter up the systat vmstat display.
|
1.19 |
| 28-Apr-2008 |
ad | Make the preemption switch a __HAVE instead of an option.
|
1.18 |
| 28-Apr-2008 |
martin | Remove clause 3 and 4 from TNF licenses
|
1.17 |
| 28-Apr-2008 |
ad | Add MI code to support in-kernel preemption. Preemption is deferred by one of the following:
- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable.
- Holding the interrupt priority level above IPL_NONE.
Statistics on kernel preemption are reported via event counters, and where preemption is deferred for some reason, it's also reported via lockstat. The LWP priority at which preemption is triggered is tuneable via sysctl.
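The three deferral conditions above can be sketched as a single predicate. This is a hedged sketch under invented names (cpu_sketch, kpreempt_ok_sketch, IPL_NONE_SKETCH), not the real MI preemption code:

```c
#include <assert.h>

#define IPL_NONE_SKETCH 0

/* Illustrative per-CPU state: each field models one of the three
 * deferral conditions listed in the commit message. */
struct cpu_sketch {
	int biglock_count;	/* kernel_lock held: code not MT safe */
	int kpreempt_count;	/* kpreempt_disable() nesting depth */
	int ipl;		/* current interrupt priority level */
};

/* Preemption may proceed only when no deferral condition holds. */
static int
kpreempt_ok_sketch(const struct cpu_sketch *ci)
{
	if (ci->biglock_count > 0)
		return 0;	/* deferred: holding kernel_lock */
	if (ci->kpreempt_count > 0)
		return 0;	/* deferred: inside a critical section */
	if (ci->ipl > IPL_NONE_SKETCH)
		return 0;	/* deferred: IPL above IPL_NONE */
	return 1;
}
```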
|
1.16 |
| 24-Apr-2008 |
ad | branches: 1.16.2; Merge the socket locking patch:
- Socket layer becomes MP safe.
- Unix protocols become MP safe.
- Allows protocol processing interrupts to safely block on locks.
- Fixes a number of race conditions.
With much feedback from matt@ and plunky@.
|
1.15 |
| 12-Apr-2008 |
ad | branches: 1.15.2; Fix typo. Spotted by kardel@.
|
1.14 |
| 12-Apr-2008 |
ad | softint_overlay: bind the stolen LWP to the current CPU while processing, to prevent it blocking and migrating to another CPU.
|
1.13 |
| 20-Mar-2008 |
ad | softint_execute: add more assertions.
|
1.12 |
| 10-Mar-2008 |
martin | Use the cpu index instead of the machine-dependent, not very expressive cpuid when naming user-visible kernel entities.
|
1.11 |
| 06-Feb-2008 |
yamt | branches: 1.11.2; 1.11.6; softint_dispatch: fix softint_timing.
|
1.10 |
| 29-Jan-2008 |
ad | Remove reference to lockmgr().
|
1.9 |
| 22-Dec-2007 |
yamt | use binuptime for l_stime/l_rtime.
|
1.8 |
| 11-Dec-2007 |
ad | Change the ncpu test to work when a pool_cache or softint is initialized between mi_cpu_attach() and attachment of the boot CPU. Suggested by mrg@.
|
1.7 |
| 10-Dec-2007 |
ad | softint_establish: hack around CPU_INFO_FOREACH() not working before configure() on some architectures.
|
1.6 |
| 03-Dec-2007 |
ad | branches: 1.6.2; 1.6.4; 1.6.6; For the slow path soft interrupts, arrange to have the priority of a borrowed user LWP raised into the 'kernel RT' range if the LWP sleeps (which is unlikely).
|
1.5 |
| 03-Dec-2007 |
ad | Interrupt handling changes, in discussion since February:
- Reduce available SPL levels for hardware devices to none, vm, sched, high.
- Acquire kernel_lock only for interrupts at IPL_VM.
- Implement threaded soft interrupts.
|
1.4 |
| 06-Nov-2007 |
ad | branches: 1.4.2; Merge scheduler changes from the vmlocking branch. All discussed on tech-kern:
- Invert priority space so that zero is the lowest priority. Rearrange number and type of priority levels into bands. Add new bands like 'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
|
1.3 |
| 08-Oct-2007 |
ad | branches: 1.3.2; 1.3.4; 1.3.6; Merge run time accounting changes from the vmlocking branch. These make the LWP "start time" per-thread instead of per-CPU.
|
1.2 |
| 08-Oct-2007 |
ad | Add stubs that provide new soft interrupt API from the vmlocking branch. For now these just pass through to the current softintr code.
(The naming is different to allow softint/softintr to co-exist for a while. I'm hoping that should make it easier to transition.)
|
1.1 |
| 17-Jun-2007 |
ad | branches: 1.1.2; 1.1.6; 1.1.8; file kern_softint.c was initially added on branch vmlocking.
|
1.1.8.1 |
| 14-Oct-2007 |
yamt | sync with head.
|
1.1.6.3 |
| 09-Dec-2007 |
jmcneill | Sync with HEAD.
|
1.1.6.2 |
| 06-Nov-2007 |
joerg | Sync with HEAD.
|
1.1.6.1 |
| 26-Oct-2007 |
joerg | Sync with HEAD.
Follow the merge of pmap.c on i386 and amd64 and move pmap_init_tmp_pgtbl into arch/x86/x86/pmap.c. Modify the ACPI wakeup code to restore CR4 before jumping back into kernel space as the large page option might cover that.
|
1.1.2.21 |
| 05-Nov-2007 |
ad | Eliminate ref to safepri.
|
1.1.2.20 |
| 01-Nov-2007 |
ad | - Fix interactivity problems under high load. Because soft interrupts are being stacked on top of regular LWPs, more often than not aston() was being called on a soft interrupt thread instead of a user thread, meaning that preemption was not happening on EOI.
- Don't use bool in a couple of data structures. Sub-word writes are not always atomic and may clobber other fields in the containing word.
- For SCHED_4BSD, make p_estcpu per thread (l_estcpu). Rework how the dynamic priority level is calculated - it's much better behaved now.
- Kill the l_usrpri/l_priority split now that priorities are no longer directly assigned by tsleep(). There are three fields describing LWP priority:
l_priority: Dynamic priority calculated by the scheduler. This does not change for kernel/realtime threads, and always stays within the correct band. Eg for timeshared LWPs it never moves out of the user priority range. This is basically what l_usrpri was before.
l_inheritedprio: Lent to the LWP due to priority inheritance (turnstiles).
l_kpriority: A boolean value set true the first time an LWP sleeps within the kernel. This indicates that the LWP should get a priority boost as compensation for blocking. lwp_eprio() now does the equivalent of sched_kpri() if the flag is set. The flag is cleared in userret().
- Keep track of scheduling class (OTHER, FIFO, RR) in struct lwp, and use this to make decisions in a few places where we previously tested for a kernel thread.
- Partially fix itimers and usr/sys/intr time accounting in the presence of software interrupts.
- Use kthread_create() to create idle LWPs. Move priority definitions from the various modules into sys/param.h.
- newlwp -> lwp_create
|
1.1.2.19 |
| 30-Oct-2007 |
ad | cpu_netisrs is no more.
|
1.1.2.18 |
| 18-Oct-2007 |
ad | Soft interrupt changes. This speeds up the slow mechanism quite a bit and removes the requirement that IPL_SCHED == IPL_HIGH. On a DECstation 3100 using the slow mechanism, FTP transfers are about 7% slower than HEAD on the few simple tests that I have done.
- Instead of using the run queues to schedule soft interrupts, poll for soft interrupt activity in mi_switch(). If there is a soft interrupt to run, we pick a new soft interrupt LWP to run in preference to checking the run queues.
- If a soft interrupt occurs while the CPU is running in userspace, hijack the user LWP in userret() to run the soft interrupt, thus avoiding context switches. This is OK to do, since in userret() the LWP can hold no locks. Soft interrupts will very rarely block in this configuration, and any blocking activity will be short term, so the user process will ~never be delayed by much. The downside to this approach is that it prevents the architecture from doing anything useful with real-time LWPs or kernel preemption, but that is not a problem if the fast path is implemented.
- Steal an idea from powerpc, and use DONETISR to set up multiple soft interrupt handlers for the legacy netisrs. Now schednetisr() just does softint_schedule() on the relevant handle.
|
1.1.2.17 |
| 09-Oct-2007 |
ad | Sync with head.
|
1.1.2.16 |
| 09-Oct-2007 |
ad | Sync with head.
|
1.1.2.15 |
| 24-Sep-2007 |
ad | Replace references to top/bottom half to reduce confusion. Suggested by yamt@.
|
1.1.2.14 |
| 01-Sep-2007 |
yamt | make "softint block" evcnt per softint_t. ok'ed by Andrew Doran.
|
1.1.2.13 |
| 28-Aug-2007 |
yamt | softint_dispatch: add a missing pmap_deactivate. ok'ed by Andrew Doran.
|
1.1.2.12 |
| 21-Aug-2007 |
ad | A few minor corrections around calls to cpu_need_resched().
|
1.1.2.11 |
| 19-Aug-2007 |
ad | softint_execute: acquire kernel_lock only once.
|
1.1.2.10 |
| 13-Aug-2007 |
yamt | softint_execute: s/Since since/Since/ in a comment.
|
1.1.2.9 |
| 18-Jul-2007 |
ad | Don't bother yielding if a higher priority LWP is interrupted, just leave a comment about it.
|
1.1.2.8 |
| 18-Jul-2007 |
ad | - No need for a TAILQ, use a SIMPLEQ instead.
- Update blurb.
|
1.1.2.7 |
| 15-Jul-2007 |
ad | Get pmax working.
|
1.1.2.6 |
| 14-Jul-2007 |
ad | Make it possible to track time spent by soft interrupts as is done for normal LWPs, and provide a sysctl to switch it on/off. Not enabled by default because microtime() is not free. XXX Not happy with this, but I want to get it out of my local tree for the time being.
|
1.1.2.5 |
| 07-Jul-2007 |
ad | Fix a comment.
|
1.1.2.4 |
| 07-Jul-2007 |
ad | Minor tweak to previous.
|
1.1.2.3 |
| 07-Jul-2007 |
ad | - Remove the interrupt priority range and use 'kernel RT' instead, since only soft interrupts are threaded.
- Rename l->l_pinned to l->l_switchto. It might be useful for (re-)implementing SA or doors.
- Simplify soft interrupt dispatch so MD code is doing as little as possible that is new.
|
1.1.2.2 |
| 01-Jul-2007 |
ad | - softint_execute: if a thread with higher priority has been interrupted, then yield. Need to find a way to optimise this as it currently means walking the list of pinned LWPs.
- Add blurb (to be edited/completed).
- Add a counter to track how often soft interrupts sleep.
- Minor cosmetic changes.
|
1.1.2.1 |
| 17-Jun-2007 |
ad | - Increase the number of thread priorities from 128 to 256. How the space is set up is to be revisited.
- Implement soft interrupts as kernel threads. A generic implementation is provided, with hooks for fast-path MD code that can run the interrupt threads over the top of other threads executing in the kernel.
- Split vnode::v_flag into three fields, depending on how the flag is locked (by the interlock, by the vnode lock, by the file system).
- Miscellaneous locking fixes and improvements.
|
1.3.6.4 |
| 18-Feb-2008 |
mjf | Sync with HEAD.
|
1.3.6.3 |
| 27-Dec-2007 |
mjf | Sync with HEAD.
|
1.3.6.2 |
| 08-Dec-2007 |
mjf | Sync with HEAD.
|
1.3.6.1 |
| 19-Nov-2007 |
mjf | Sync with HEAD.
|
1.3.4.9 |
| 24-Mar-2008 |
yamt | sync with head.
|
1.3.4.8 |
| 17-Mar-2008 |
yamt | sync with head.
|
1.3.4.7 |
| 11-Feb-2008 |
yamt | sync with head.
|
1.3.4.6 |
| 04-Feb-2008 |
yamt | sync with head.
|
1.3.4.5 |
| 21-Jan-2008 |
yamt | sync with head
|
1.3.4.4 |
| 07-Dec-2007 |
yamt | sync with head
|
1.3.4.3 |
| 15-Nov-2007 |
yamt | sync with head.
|
1.3.4.2 |
| 27-Oct-2007 |
yamt | sync with head.
|
1.3.4.1 |
| 08-Oct-2007 |
yamt | file kern_softint.c was added on branch yamt-lazymbuf on 2007-10-27 11:35:28 +0000
|
1.3.2.1 |
| 13-Nov-2007 |
bouyer | Sync with HEAD
|
1.4.2.4 |
| 23-Mar-2008 |
matt | sync with HEAD
|
1.4.2.3 |
| 09-Jan-2008 |
matt | sync with HEAD
|
1.4.2.2 |
| 06-Nov-2007 |
matt | sync with HEAD
|
1.4.2.1 |
| 06-Nov-2007 |
matt | file kern_softint.c was added on branch matt-armv6 on 2007-11-06 23:31:58 +0000
|
1.6.6.2 |
| 02-Jan-2008 |
bouyer | Sync with HEAD
|
1.6.6.1 |
| 13-Dec-2007 |
bouyer | Sync with HEAD
|
1.6.4.2 |
| 13-Dec-2007 |
yamt | sync with head.
|
1.6.4.1 |
| 11-Dec-2007 |
yamt | sync with head.
|
1.6.2.1 |
| 26-Dec-2007 |
ad | Sync with head.
|
1.11.6.3 |
| 17-Jan-2009 |
mjf | Sync with HEAD.
|
1.11.6.2 |
| 02-Jun-2008 |
mjf | Sync with HEAD.
|
1.11.6.1 |
| 03-Apr-2008 |
mjf | Sync with HEAD.
|
1.11.2.1 |
| 24-Mar-2008 |
keiichi | sync with head.
|
1.15.2.2 |
| 04-Jun-2008 |
yamt | sync with head
|
1.15.2.1 |
| 18-May-2008 |
yamt | sync with head.
|
1.16.2.6 |
| 11-Mar-2010 |
yamt | sync with head
|
1.16.2.5 |
| 19-Aug-2009 |
yamt | sync with head.
|
1.16.2.4 |
| 20-Jun-2009 |
yamt | sync with head
|
1.16.2.3 |
| 16-May-2009 |
yamt | sync with head
|
1.16.2.2 |
| 04-May-2009 |
yamt | sync with head.
|
1.16.2.1 |
| 16-May-2008 |
yamt | sync with head.
|
1.20.2.1 |
| 23-Jun-2008 |
wrstuden | Sync w/ -current. 34 merge conflicts to follow.
|
1.22.4.1 |
| 19-Oct-2008 |
haad | Sync with HEAD.
|
1.23.4.3 |
| 16-Jan-2010 |
bouyer | Pull up following revision(s) (requested by rmind in ticket #1241):
sys/kern/kern_softint.c: revision 1.30
softint_execute: release/re-acquire kernel-lock depending on the SOFTINT_MPSAFE flag. Keeping it held for MP-safe cases breaks the lock order assumptions. Per discussion with <martin>.
|
1.23.4.2 |
| 02-Feb-2009 |
snj | branches: 1.23.4.2.2; 1.23.4.2.4; Pull up following revision(s) (requested by ad in ticket #349):
sys/kern/kern_softint.c: revision 1.25
softint_disestablish: don't pass softint_lock to kpause, it's not held.
|
1.23.4.1 |
| 02-Feb-2009 |
snj | Pull up following revision(s) (requested by ad in ticket #349):
sys/kern/kern_softint.c: revision 1.24
sys/sys/intr.h: revision 1.8
softint_disestablish: the soft interrupt could still be running on a CPU somewhere in the system. If it is, wait for it to complete before tearing it down. The caller commits to not trigger the interrupt again once disestablish is set in motion.
|
1.23.4.2.4.1 |
| 21-Apr-2010 |
matt | sync to netbsd-5
|
1.23.4.2.2.1 |
| 16-Jan-2010 |
bouyer | Pull up following revision(s) (requested by rmind in ticket #1241):
sys/kern/kern_softint.c: revision 1.30
softint_execute: release/re-acquire kernel-lock depending on the SOFTINT_MPSAFE flag. Keeping it held for MP-safe cases breaks the lock order assumptions. Per discussion with <martin>.
|
1.23.2.2 |
| 28-Apr-2009 |
skrll | Sync with HEAD.
|
1.23.2.1 |
| 19-Jan-2009 |
skrll | Sync with HEAD.
|
1.25.2.2 |
| 23-Jul-2009 |
jym | Sync with HEAD.
|
1.25.2.1 |
| 13-May-2009 |
jym | Sync with HEAD.
Commit is split, to avoid a "too many arguments" protocol error.
|
1.31.4.3 |
| 31-May-2011 |
rmind | sync with head
|
1.31.4.2 |
| 21-Apr-2011 |
rmind | sync with head
|
1.31.4.1 |
| 05-Mar-2011 |
rmind | sync with head
|
1.33.2.1 |
| 06-Jun-2011 |
jruoho | Sync with HEAD.
|
1.38.14.2 |
| 14-Jul-2016 |
snj | Pull up following revision(s) (requested by knakahara in ticket #1356):
sys/kern/kern_softint.c: revision 1.42
fix the following softint parallel operation problem.
(0) softint handler "handler A" is established
(1) CPU#X does softint_schedule() for "handler A"
    - the softhand_t has the SOFTINT_PENDING flag set
    - the softhand_t does NOT have the SOFTINT_ACTIVE flag set yet
(2) CPU#X begins other H/W interrupt processing
(3) CPU#Y does softint_disestablish() for "handler A"
    - it waits until the softhand_t's SOFTINT_ACTIVE is clear on all CPUs
    - the softhand_t has SOFTINT_PENDING but not SOFTINT_ACTIVE set, so CPU#Y does not wait
    - the function of "handler A" is unset
(4) CPU#X does softint_execute()
    - the function of "handler A" is already cleared, so panic
|
1.38.14.1 |
| 08-Feb-2013 |
riz | Pull up following revision(s) (requested by rmind in ticket #782):
sys/rump/include/machine/intr.h: revision 1.19
sys/kern/subr_pserialize.c: revision 1.6
sys/kern/kern_softint.c: revision 1.39
- softint_dispatch: perform pserialize(9) switchpoint when softintr processing finishes (without blocking). Problem reported by hannken@, thanks!
- pserialize_read_enter: use splsoftserial(), not splsoftclock().
- pserialize_perform: add xcall(9) barrier as interrupts may be coalesced.
Provide splsoftserial. GRRR RUMP
|
1.38.12.3 |
| 03-Dec-2017 |
jdolecek | update from HEAD
|
1.38.12.2 |
| 20-Aug-2014 |
tls | Rebase to HEAD as of a few days ago.
|
1.38.12.1 |
| 25-Feb-2013 |
tls | resync with head
|
1.38.8.2 |
| 14-Jul-2016 |
snj | Pull up following revision(s) (requested by knakahara in ticket #1356):
sys/kern/kern_softint.c: revision 1.42
fix the following softint parallel operation problem.
(0) softint handler "handler A" is established
(1) CPU#X does softint_schedule() for "handler A"
    - the softhand_t has the SOFTINT_PENDING flag set
    - the softhand_t does NOT have the SOFTINT_ACTIVE flag set yet
(2) CPU#X begins other H/W interrupt processing
(3) CPU#Y does softint_disestablish() for "handler A"
    - it waits until the softhand_t's SOFTINT_ACTIVE is clear on all CPUs
    - the softhand_t has SOFTINT_PENDING but not SOFTINT_ACTIVE set, so CPU#Y does not wait
    - the function of "handler A" is unset
(4) CPU#X does softint_execute()
    - the function of "handler A" is already cleared, so panic
|
1.38.8.1 |
| 08-Feb-2013 |
riz | branches: 1.38.8.1.2; Pull up following revision(s) (requested by rmind in ticket #782):
sys/rump/include/machine/intr.h: revision 1.19
sys/kern/subr_pserialize.c: revision 1.6
sys/kern/kern_softint.c: revision 1.39
- softint_dispatch: perform pserialize(9) switchpoint when softintr processing finishes (without blocking). Problem reported by hannken@, thanks!
- pserialize_read_enter: use splsoftserial(), not splsoftclock().
- pserialize_perform: add xcall(9) barrier as interrupts may be coalesced.
Provide splsoftserial. GRRR RUMP
|
1.38.8.1.2.1 |
| 14-Jul-2016 |
snj | Pull up following revision(s) (requested by knakahara in ticket #1356):
sys/kern/kern_softint.c: revision 1.42
fix the following softint parallel operation problem.
(0) softint handler "handler A" is established
(1) CPU#X does softint_schedule() for "handler A"
    - the softhand_t has the SOFTINT_PENDING flag set
    - the softhand_t does NOT have the SOFTINT_ACTIVE flag set yet
(2) CPU#X begins other H/W interrupt processing
(3) CPU#Y does softint_disestablish() for "handler A"
    - it waits until the softhand_t's SOFTINT_ACTIVE is clear on all CPUs
    - the softhand_t has SOFTINT_PENDING but not SOFTINT_ACTIVE set, so CPU#Y does not wait
    - the function of "handler A" is unset
(4) CPU#X does softint_execute()
    - the function of "handler A" is already cleared, so panic
|
1.38.2.2 |
| 22-May-2014 |
yamt | sync with head.
for a reference, the tree before this commit was tagged as yamt-pagecache-tag8.
this commit was split into small chunks to avoid a limitation of cvs. ("Protocol error: too many arguments")
|
1.38.2.1 |
| 23-Jan-2013 |
yamt | sync with head
|
1.39.2.1 |
| 18-May-2014 |
rmind | sync with head
|
1.40.2.1 |
| 10-Aug-2014 |
tls | Rebase.
|
1.41.6.1 |
| 26-Jan-2016 |
riz | Pull up following revision(s) (requested by knakahara in ticket #1067):
sys/kern/kern_softint.c: revision 1.42
fix the following softint parallel operation problem.
(0) softint handler "handler A" is established
(1) CPU#X does softint_schedule() for "handler A"
    - the softhand_t has the SOFTINT_PENDING flag set
    - the softhand_t does NOT have the SOFTINT_ACTIVE flag set yet
(2) CPU#X begins other H/W interrupt processing
(3) CPU#Y does softint_disestablish() for "handler A"
    - it waits until the softhand_t's SOFTINT_ACTIVE is clear on all CPUs
    - the softhand_t has SOFTINT_PENDING but not SOFTINT_ACTIVE set, so CPU#Y does not wait
    - the function of "handler A" is unset
(4) CPU#X does softint_execute()
    - the function of "handler A" is already cleared, so panic
|
1.41.4.2 |
| 09-Jul-2016 |
skrll | Sync with HEAD
|
1.41.4.1 |
| 27-Dec-2015 |
skrll | Sync with HEAD (as of 26th Dec)
|
1.41.2.1 |
| 26-Jan-2016 |
riz | Pull up following revision(s) (requested by knakahara in ticket #1067):
sys/kern/kern_softint.c: revision 1.42
fix the following softint parallel operation problem.
(0) softint handler "handler A" is established
(1) CPU#X does softint_schedule() for "handler A"
    - the softhand_t has the SOFTINT_PENDING flag set
    - the softhand_t does NOT have the SOFTINT_ACTIVE flag set yet
(2) CPU#X begins other H/W interrupt processing
(3) CPU#Y does softint_disestablish() for "handler A"
    - it waits until the softhand_t's SOFTINT_ACTIVE is clear on all CPUs
    - the softhand_t has SOFTINT_PENDING but not SOFTINT_ACTIVE set, so CPU#Y does not wait
    - the function of "handler A" is unset
(4) CPU#X does softint_execute()
    - the function of "handler A" is already cleared, so panic
|
1.43.10.2 |
| 22-Jan-2018 |
martin | Pull up following revision(s) (requested by jdolecek in ticket #506):
sys/kern/kern_softint.c: revision 1.45
sys/rump/librump/rumpkern/rump.c: revision 1.331
sys/kern/subr_pserialize.c: revision 1.10
sys/kern/subr_psref.c: revision 1.10
Prevent panic or hangup in softint_disestablish(), pserialize_perform() or psref_target_destroy() while mp_online == false.
See http://mail-index.netbsd.org/tech-kern/2017/12/25/msg022829.html
Set mp_online = true. This change might fix PR#52886.
|
1.43.10.1 |
| 23-Nov-2017 |
martin | Pull up following revision(s) (requested by msaitoh in ticket #387):
sys/kern/kern_softint.c: revision 1.44
Increase the size of softint's data to prevent a panic on big machines. Nowadays, some device drivers and pseudo interfaces allocate a lot of softints. The resource size for softints is static, and the kernel panics when it exceeds the limit. It could be resized dynamically; until dynamic resizing is implemented, increase softint_bytes from 8192 to 32768.
|
1.45.4.3 |
| 13-Apr-2020 |
martin | Mostly merge changes from HEAD upto 20200411
|
1.45.4.2 |
| 08-Apr-2020 |
martin | Merge changes from current as of 20200406
|
1.45.4.1 |
| 10-Jun-2019 |
christos | Sync with HEAD
|
1.47.2.1 |
| 12-Dec-2019 |
martin | Pull up following revision(s) (requested by ad in ticket #546):
sys/kern/kern_resource.c: revision 1.183 sys/kern/kern_softint.c: revision 1.49
calcru: ignore running softints, unless softint_timing is on. Fixes crazy times reported for proc0.
|
1.56.2.3 |
| 29-Feb-2020 |
ad | Sync with head.
|
1.56.2.2 |
| 25-Jan-2020 |
ad | Sync with head.
|
1.56.2.1 |
| 17-Jan-2020 |
ad | Sync with head.
|