History log of /src/sys/kern/kern_timeout.c
Revision | Date | Author | Comments
1.79 |
| 08-Oct-2023 |
ad | Ensure that an LWP that has taken a legitimate wakeup never produces an error code from sleepq_block(). Then, it's possible to make cv_signal() work as expected and only ever wake a single LWP.
|
1.78 |
| 04-Oct-2023 |
ad | Eliminate l->l_biglocks. Originally I think it had a use but these days a local variable will do.
|
1.77 |
| 23-Sep-2023 |
ad | - Simplify how priority boost for blocking in kernel is handled. Rather than setting it up at each site where we block, make it a property of syncobj_t. Then, rather than hanging onto the priority boost until userret(), drop it as soon as the LWP is out of the run queue and onto a CPU. Holding onto it longer is of questionable benefit.
- This allows two members of lwp_t to be deleted, and mi_userret() to be simplified a lot (next step: trim it down to a single conditional).
- While here, constify syncobj_t and de-inline a bunch of small functions like lwp_lock() which turn out not to be small after all (I don't know why, but atomic_*_relaxed() seem to provoke a compiler shitfit above and beyond what volatile does).
|
1.76 |
| 27-Jun-2023 |
pho | callout(9): Delete the unused member cc_cancel from struct callout_cpu
I see no reason why it should be there, and believe it's a leftover from some old code.
|
1.75 |
| 27-Jun-2023 |
pho | callout(9): Tidy up the condition for "callout is running on another LWP"
No functional changes.
|
1.74 |
| 27-Jun-2023 |
pho | callout(9): Fix panic() in callout_destroy() (kern/57226)
The culprit was callout_halt(). "(c->c_flags & CALLOUT_FIRED) != 0" wasn't the correct way to check if a callout is running. It failed to wait for a running callout to finish in the following scenario:
1. cpu0 initializes a callout and schedules it.
2. cpu0 invokes callout_softclock() and fires the callout, setting the flag CALLOUT_FIRED.
3. The callout invokes callout_schedule() to re-schedule itself.
4. callout_schedule_locked() clears the flag CALLOUT_FIRED, and releases the lock.
5. Before the lock is re-acquired by callout_softclock(), cpu1 decides to destroy the callout. It first invokes callout_halt() to make sure the callout finishes running.
6. But since CALLOUT_FIRED has been cleared, callout_halt() thinks it's not running and therefore returns without invoking callout_wait().
7. cpu1 proceeds to invoke callout_destroy() while it's still running on cpu0. callout_destroy() detects that and panics.
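A minimal user-space model may make the race easier to follow. This is an illustrative sketch only, not the kernel's actual code: the struct and helper names are invented, and only the two flag interactions described above are modeled.

/* Illustrative model of the race, not the real kern_timeout.c code. */
#include <stdbool.h>

#define CALLOUT_PENDING	0x0002
#define CALLOUT_FIRED	0x0004

struct callout_model {
	int	c_flags;
	bool	c_handler_active;	/* handler running on some CPU */
};

/* Step 4: re-arming from inside the handler clears CALLOUT_FIRED. */
static void
model_schedule_locked(struct callout_model *c)
{
	c->c_flags |= CALLOUT_PENDING;
	c->c_flags &= ~CALLOUT_FIRED;
}

/*
 * Step 6: the old callout_halt() inferred "running" from the FIRED
 * flag.  After step 4 this returns false even while c_handler_active
 * is still true on the other CPU, so the halt returned early and the
 * subsequent callout_destroy() panicked.  The fix is to track the
 * currently running callout directly rather than inferring it from
 * the flags.
 */
static bool
model_halt_must_wait(const struct callout_model *c)
{
	return (c->c_flags & CALLOUT_FIRED) != 0;	/* wrong */
}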
|
1.73 |
| 29-Oct-2022 |
riastradh | branches: 1.73.2; callout(9): Mark the new 'flags' local as unused for non-KDTRACE_HOOKS builds.
(feel free to add a new __dtrace_used annotation to make this more precise)
|
1.72 |
| 28-Oct-2022 |
riastradh | callout(9): Sprinkle dtrace probes.
|
1.71 |
| 28-Oct-2022 |
riastradh | callout(9): Nix trailing whitespace.
No functional change intended.
|
1.70 |
| 29-Jun-2022 |
riastradh | sleepq(9): Pass syncobj through to sleepq_block.
Previously the usage pattern was:
	sleepq_enter(sq, l, lock);		// locks l
	...
	sleepq_enqueue(sq, ..., sobj, ...);	// assumes l locked, sets l_syncobj
	...
(*)	sleepq_block(...);			// unlocks l
As long as l remains locked from sleepq_enter to sleepq_block, l_syncobj is stable, and sleepq_block uses it via ktrcsw to determine whether the sleep is on a mutex in order to avoid creating ktrace context-switch records (which involves allocation which is forbidden in softint context, while taking and even sleeping for a mutex is allowed).
However, in turnstile_block, the logic at (*) also involves turnstile_lendpri, which sometimes unlocks and relocks l. At that point, another thread can swoop in and sleepq_remove l, which sets l_syncobj to sched_syncobj. If that happens, ktrcsw does what is forbidden -- tries to allocate a ktrace record for the context switch.
As an optimization, sleepq_block or turnstile_block could stop early if it detects that l_syncobj doesn't match -- we've already been requested to wake up at this point so there's no need to mi_switch. (And then it would be unnecessary to pass the syncobj through sleepq_block, because l_syncobj would remain stable.) But I'll leave that to another change.
Reported-by: syzbot+8b9d7b066c32dbcdc63b@syzkaller.appspotmail.com
|
1.69 |
| 30-Mar-2022 |
riastradh | kern: Assert softint does not net acquire kernel locks.
This redoes the previous change, where I mistakenly used the CPU's biglock count, which is not necessarily stable -- the softint lwp may sleep on a mutex, and the lwp it interrupted may start up again and release the kernel lock, so by the time the softint lwp wakes up again and the softint function returns, the CPU may not be holding any kernel locks. But the softint lwp should never hold kernel locks except when it's in a (usually, non-MPSAFE) softint function.
Same with callout.
|
1.68 |
| 30-Mar-2022 |
riastradh | Revert "kern: Sprinkle biglock-slippage assertions."
Got the diagnostic information I needed from this, and it's holding up releng tests of everything else, so let's back this out until I need more diagnostics or track down the original source of the problem.
|
1.67 |
| 30-Mar-2022 |
riastradh | kern: Sprinkle biglock-slippage assertions.
We seem to have a poltergeist that occasionally messes with the biglock depth, but it's very hard to reproduce and it only manifests as some other CPU spinning out on the kernel lock, which is no good for diagnostics.
|
1.66 |
| 27-Jun-2020 |
rin | Stop allocating struct cpu_info in BSS; no need to db_read_bytes() against cpu_info, just ci_data.cpu_callout is enough.
Saves 1408 bytes of BSS on, e.g., aarch64.
|
1.65 |
| 02-Jun-2020 |
rin | Appease clang -Wtentative-definition-incomplete-type.
Now, both kernel and crash(8) build with clang for amd64 (and certainly other ports also).
Pointed out by joerg.
|
1.64 |
| 31-May-2020 |
rin | Stop allocating buffers dynamically in a DDB session, so as not to disturb the state of the kernel data structures being debugged.
Since DDB runs on one CPU at a time, static buffers are enough.
The increase in BSS is 52552 bytes for amd64 (LP64) and 9152 bytes for m68k (ILP32).
Requested by thorpej@ and mrg@. Also suggested by ryo@. Thanks!
|
1.63 |
| 31-May-2020 |
rin | Switch to db_alloc() from kmem_intr_alloc(9).
Fix build failure as a part of crash(8). Noticed by tnn@, thanks!
|
1.62 |
| 31-May-2020 |
rin | db_show_callout(): struct callout_cpu and cpu_info are too much for the stack.
XXX DDB can run in interrupt context, e.g., when activated from the console. Therefore, use kmem_intr_alloc(9) instead of kmem_alloc(9).
Frame size, e.g. for m68k, becomes: 9212 (oops!) --> 0
|
1.61 |
| 19-Apr-2020 |
ad | Set LW_SINTR earlier so it doesn't pose a problem for doing interruptible waits with turnstiles (not currently done).
|
1.60 |
| 13-Apr-2020 |
maxv | hardclock_ticks -> getticks()
|
1.59 |
| 21-Mar-2020 |
ad | branches: 1.59.2; callout_destroy(): change output from a couple of assertions so it's clear what they are checking for (callout being destroyed while pending/running).
|
1.58 |
| 23-Jan-2020 |
ad | callout_halt():
- It's a common design pattern for callouts to re-schedule themselves, so check after waiting and put a stop to it again if needed.
- Add comments.
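For context, the pattern in question looks roughly like the sketch below. It uses the documented callout(9) interface, but the mysoftc structure, the handler body, and the detach routine are invented for illustration.

#include <sys/callout.h>
#include <sys/kernel.h>		/* hz */

struct mysoftc {
	callout_t	sc_tick;
};

/* A self-rescheduling handler: every invocation re-arms the callout. */
static void
my_tick(void *arg)
{
	struct mysoftc *sc = arg;

	/* ... periodic work ... */
	callout_schedule(&sc->sc_tick, hz);	/* run again in ~1 second */
}

static void
my_detach(struct mysoftc *sc)
{
	/*
	 * callout_halt() may sleep while my_tick() finishes; after this
	 * change it re-checks on wakeup whether the handler re-armed the
	 * callout in the meantime and stops it again if so.
	 */
	callout_halt(&sc->sc_tick, NULL);
	callout_destroy(&sc->sc_tick);
}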
|
1.57 |
| 21-Nov-2019 |
ad | branches: 1.57.2; Break the slow path for callout_halt() out into its own routine. No functional change.
|
1.56 |
| 10-Mar-2019 |
kre | Undo previous; in the name of "defined" behaviour, it breaks things.
This is all explained in the comment at the head of the file:
 * Some of the "math" in here is a bit tricky. We have to beware of
 * wrapping ints.
 *
 * [...] but c->c_time can
 * be positive or negative so comparing it with anything is dangerous.
In particular, "if (c->c_time > ticks)" is simply wrong.
 * The only way we can use the c->c_time value in any predictable way is
 * when we calculate how far in the future `to' will timeout -
 * "c->c_time - c->c_cpu->cc_ticks". The result will always be positive
 * for future timeouts and 0 or negative for due timeouts.
Go back to the old way, but write the calculation of delta slightly differently, which will hopefully appease KUBsan. Perhaps. In any case, this code works on any system that NetBSD has any hope of ever running on, whatever the C standards say is "defined" behaviour.
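A worked example of the wrap-around hazard, assuming 32-bit int ticks; the constants are invented, and the example computes the wrapped value via unsigned arithmetic so that the demonstration itself is UB-free:

#include <stdbool.h>
#include <stdio.h>

int
main(void)
{
	int cc_ticks = 0x7ffffff0;	/* "now", near INT_MAX */
	/* Due 0x20 ticks in the future; the stored value wraps negative. */
	int c_time = (int)((unsigned)cc_ticks + 0x20);

	/* The naive comparison says the timeout is already due: wrong. */
	bool due_naive = (c_time <= cc_ticks);			/* true */

	/* The delta from the comment above survives the wrap; rev 1.56
	 * computes it via unsigned subtraction to appease KUBsan. */
	int delta = (int)((unsigned)c_time - (unsigned)cc_ticks);
	bool due = (delta <= 0);				/* false */

	printf("naive=%d delta=%d due=%d\n", due_naive, delta, due);
	return 0;
}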
|
1.55 |
| 08-Jul-2018 |
kamil | Try to avoid signed integer overflow in callout_softclock()
The delta operation (c->c_time - ticks) is documented as safe; however, it can still cause overflow in narrow scenarios.
Try to avoid the overflow/underflow, or at least make it less frequent, with a direct comparison of c->c_time and ticks. Perform the subtraction only when c->c_time > ticks.
sys/kern/kern_timeout.c:720:9, signed integer overflow: -2147410738 - 72912 cannot be represented in type 'int'
Detected with Kernel Undefined Behavior Sanitizer.
Patch suggested by <Riastradh>
|
1.54 |
| 16-Jan-2018 |
ozaki-r | branches: 1.54.2; 1.54.4; Sanity-check if interlock is held when it's passed
|
1.53 |
| 09-Jan-2018 |
christos | check the magic first in case we got passed a junk pointer.
|
1.52 |
| 01-Jun-2017 |
chs | branches: 1.52.2; remove checks for failure after memory allocation calls that cannot fail:
kmem_alloc() with KM_SLEEP
kmem_zalloc() with KM_SLEEP
percpu_alloc()
pserialize_create()
psref_class_create()
all of these paths include an assertion that the allocation has not failed, so callers should not assert that again.
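In other words, the idiom after this cleanup is simply the following (sketch; the struct name and contents are invented):

#include <sys/kmem.h>

struct foo {
	int	f_dummy;	/* stand-in contents */
};

struct foo *
foo_create(void)
{
	/*
	 * KM_SLEEP blocks until the allocation succeeds, so the result
	 * is never NULL; a failure check here would be dead code, and
	 * the allocator path already asserts success.
	 */
	return kmem_zalloc(sizeof(struct foo), KM_SLEEP);
}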
|
1.51 |
| 24-Nov-2015 |
christos | fix crash(8) printing of callouts.
|
1.50 |
| 09-Feb-2015 |
christos | don't compare user and kernel addresses
|
1.49 |
| 08-Feb-2015 |
christos | make the ddb code crash(8) friendly.
|
1.48 |
| 10-Dec-2014 |
martin | Change a KASSERT to KASSERTMSG and print enough details about the callout that it can be identified even if ddb is not helpful (or not enabled).
|
1.47 |
| 14-Sep-2013 |
martin | branches: 1.47.6; Move a CTASSERT to global scope
|
1.46 |
| 28-Jun-2013 |
matt | branches: 1.46.2; Convert a KASSERT to a KASSERTMSG
|
1.45 |
| 18-Dec-2010 |
rmind | branches: 1.45.8; 1.45.18;
- Fix a few possible locking issues in execve1() and exit1(). Add a note that scheduler locks are special in this regard - adaptive locks cannot be in the path due to turnstiles. Randomly spotted/reported by uebayasi@.
- Remove unused lwp_relock() and replace lwp_lock_retry() by simplifying lwp_lock() and sleepq_enter() a little.
- Give alllwp its own cache-line and mark lwp_cache pointer as read-mostly.
OK ad@
|
1.44 |
| 21-Mar-2009 |
ad | branches: 1.44.4; Allocate sleep queue locks with mutex_obj_alloc. Reduces memory usage on !MP kernels, and reduces false sharing on MP ones.
|
1.43 |
| 10-Oct-2008 |
ad | branches: 1.43.2; 1.43.8; Update CALLOUT_INVOKING correctly; this seems to have been lost.
|
1.42 |
| 06-Sep-2008 |
rmind | Add a few KASSERTs.
|
1.41 |
| 02-Jul-2008 |
matt | branches: 1.41.2; Switch from KASSERT to CTASSERT for those asserts testing sizes of types.
|
1.40 |
| 26-May-2008 |
ad | branches: 1.40.2; Take the mutex pointer and waiters count out of sleepq_t: the values can be or are maintained elsewhere. Now a sleepq_t is just a TAILQ_HEAD.
|
1.39 |
| 28-Apr-2008 |
martin | branches: 1.39.2; Remove clause 3 and 4 from TNF licenses
|
1.38 |
| 23-Apr-2008 |
ad | branches: 1.38.2; kmutex_t * -> void *, to avoid MD header fallout.
|
1.37 |
| 22-Apr-2008 |
ad | Give callout_halt() an additional 'kmutex_t *interlock' argument. If there is a need to block and wait for the callout to complete, and there is an interlock, it will be dropped while waiting and reacquired before return.
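Typical use of the new argument might look like the following sketch (the mysoftc names are invented). The point is that the caller's lock, which the callout handler also takes, is released for the duration of the wait rather than deadlocking:

#include <sys/types.h>
#include <sys/callout.h>
#include <sys/mutex.h>

struct mysoftc {
	kmutex_t	sc_lock;	/* also taken by the handler */
	callout_t	sc_timer;
	bool		sc_dying;
};

static void
my_detach(struct mysoftc *sc)
{
	mutex_enter(&sc->sc_lock);
	sc->sc_dying = true;
	/*
	 * May drop and re-acquire sc_lock while waiting for the handler
	 * to complete; without the interlock this would deadlock against
	 * a handler blocked on sc_lock.
	 */
	callout_halt(&sc->sc_timer, &sc->sc_lock);
	mutex_exit(&sc->sc_lock);
}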
|
1.36 |
| 22-Apr-2008 |
ad | Implement MP callouts as discussed on tech-kern. The CPU binding code is disabled for the moment until we figure out what we want to do with CPUs being offlined.
|
1.35 |
| 29-Mar-2008 |
ad | branches: 1.35.2; callout_halt: remove unneeded extern decl.
|
1.34 |
| 29-Mar-2008 |
ad | callout_destroy: fix assertion to not fire when a callout is destroying its own handle. PR kern/38324.
|
1.33 |
| 28-Mar-2008 |
ad | Pull in sys/cpu.h for cpu_intr_p().
|
1.32 |
| 28-Mar-2008 |
ad | Enable blocking synchronization for callouts as discussed at length on tech-kern last year. Instead of modifying callout_stop, add a new routine (callout_halt) which will sleep if the callout is already in flight. Note that if a callout can take locks, the caller of callout_halt must not hold any of those locks - otherwise the two could deadlock.
|
1.31 |
| 04-Jan-2008 |
ad | branches: 1.31.6; Start detangling lock.h from intr.h. This is likely to cause short term breakage, but the mess of dependencies has been regularly breaking the build recently anyhow.
|
1.30 |
| 05-Dec-2007 |
ad | branches: 1.30.4; Match the docs: MUTEX_DRIVER/SPIN are now only for porting code written for Solaris.
|
1.29 |
| 23-Nov-2007 |
joerg | branches: 1.29.2; Share code between callout_schedule and callout_reset.
|
1.28 |
| 06-Nov-2007 |
ad | Merge scheduler changes from the vmlocking branch. All discussed on tech-kern:
- Invert priority space so that zero is the lowest priority. Rearrange number and type of priority levels into bands. Add new bands like 'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
|
1.27 |
| 08-Oct-2007 |
ad | branches: 1.27.2; 1.27.4; Use the softint API.
|
1.26 |
| 01-Aug-2007 |
ad | branches: 1.26.2; 1.26.4; 1.26.6; 1.26.8; callout_softclock: add a couple of assertions.
|
1.25 |
| 30-Jul-2007 |
ad | callout_barrier: drop kernel_lock before blocking.
|
1.24 |
| 10-Jul-2007 |
ad | branches: 1.24.2; Define _CALLOUT_PRIVATE.
|
1.23 |
| 10-Jul-2007 |
ad | Make netstat build again. I don't see why it has any business dumping the raw contents of tcpcb but that's another story.
|
1.22 |
| 09-Jul-2007 |
ad | Merge some of the less invasive changes from the vmlocking branch:
- kthread, callout, devsw API changes
- select()/poll() improvements
- miscellaneous MT safety improvements
|
1.21 |
| 22-Feb-2007 |
matt | branches: 1.21.4; 1.21.6; Fix lossage from boolean_t -> bool and updated x86 bus_dma.
|
1.20 |
| 09-Feb-2007 |
ad | branches: 1.20.2; Merge newlock2 to head.
|
1.19 |
| 01-Nov-2006 |
yamt | remove some __unused from function parameters.
|
1.18 |
| 12-Oct-2006 |
christos | - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
|
1.17 |
| 11-Dec-2005 |
christos | branches: 1.17.20; 1.17.22; merge ktrace-lwp.
|
1.16 |
| 01-Jun-2005 |
drochner | branches: 1.16.2; need a "const"
|
1.15 |
| 29-May-2005 |
christos | - add const.
- remove unnecessary casts.
- add __UNCONST casts and mark them with XXXUNCONST as necessary.
|
1.14 |
| 26-Feb-2005 |
perry | nuke trailing whitespace
|
1.13 |
| 30-Oct-2003 |
thorpej | branches: 1.13.8; 1.13.10; Make callout_setfunc() a CPP macro. Suggested by enami.
|
1.12 |
| 27-Oct-2003 |
thorpej | - Change callout_setfunc() to require that the callout handle is already initialized. Update txp(4) to compensate.
- Statically initialize the TCP timer callout handles in the tcpcb template. We still use callout_setfunc(), but that call is now much less expensive. Add a comment that the compiler is likely to unroll the loop (so don't sweat that it's there).
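The resulting idiom is sketched below. The tcpcb subset, field, and function names here are illustrative only (and the flags argument to callout_init() arrived later, in 2007); the point is that the function pointer is set once and re-arming is then cheap:

#include <sys/callout.h>

struct tcpcb {				/* illustrative subset */
	callout_t	t_rexmt_ch;
};

void tcp_timer_rexmt(void *);		/* assumed handler */

static void
my_timer_init(struct tcpcb *tp)
{
	callout_init(&tp->t_rexmt_ch, 0);	/* handle initialized first */
	callout_setfunc(&tp->t_rexmt_ch, tcp_timer_rexmt, tp);
}

static void
my_timer_arm(struct tcpcb *tp, int to_ticks)
{
	/* No function/argument setup on each arm; just schedule. */
	callout_schedule(&tp->t_rexmt_ch, to_ticks);
}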
|
1.11 |
| 25-Sep-2003 |
scw | Fix for PR kern/22933
Avoid gcc3 pointer alias bugs caused by casting between struct callout and struct callout_circq.
|
1.10 |
| 07-Sep-2003 |
scw | Cast from pointer type to db_addr_t via intptr_t.
|
1.9 |
| 03-Aug-2003 |
he | On second thought, callout_stop() should not clear the INVOKING flag.
|
1.8 |
| 20-Jul-2003 |
he | Temporarily introduce CALLOUT_INVOKING, callout_invoking() and callout_ack() so that users of the callout facility can cooperate to work around the race caused by the callout code lowering the interrupt priority level when invoking callout handlers, which allows other code to run before the callout handler gets to its spl*() call.
This is to enable the workaround for the TCP code found in PR#20390 to be applied.
This should be backed out once a more comprehensive fix can be put in place.
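The intended cooperation looks roughly like this sketch; the mysoftc names and the splsoftnet() level are assumptions for illustration, not the exact TCP code from PR#20390:

#include <sys/types.h>
#include <sys/callout.h>
#include <sys/intr.h>

struct mysoftc {
	callout_t	sc_ch;
	bool		sc_stopping;
};

/* Stopping side: callout_stop() can race with a handler that was
 * already dispatched but has not yet raised spl; detect the in-flight
 * invocation and leave the handler a note. */
static void
my_stop(struct mysoftc *sc)
{
	callout_stop(&sc->sc_ch);
	if (callout_invoking(&sc->sc_ch))
		sc->sc_stopping = true;
}

/* Handler side: once spl is raised the window has closed;
 * acknowledge the invocation, then honour the note. */
static void
my_handler(void *arg)
{
	struct mysoftc *sc = arg;
	int s = splsoftnet();

	callout_ack(&sc->sc_ch);
	if (sc->sc_stopping) {
		sc->sc_stopping = false;
		splx(s);
		return;
	}
	/* ... normal timer processing ... */
	splx(s);
}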
|
1.7 |
| 14-Jul-2003 |
lukem | add missing __KERNEL_RCSID()
|
1.6 |
| 17-May-2003 |
mjl | branches: 1.6.2; Typos in comments.
|
1.5 |
| 26-Feb-2003 |
thorpej | Change a printf to an event counter. Callout event counters are conditional on CALLOUT_EVENT_COUNTERS.
|
1.4 |
| 11-Feb-2003 |
yamt | - don't compare c_time directly.
- in callout_hardclock, test whether timeout_todo is empty before releasing the lock.
|
1.3 |
| 10-Feb-2003 |
drochner | replace &(a?b:c) by (a?&b:&c), so that it looks more like an lvalue (to lint, at least). Approved by thorpej.
|
1.2 |
| 04-Feb-2003 |
martin | Format fix for archs where ptrdiff_t != int.
|
1.1 |
| 04-Feb-2003 |
thorpej | New callout implementation. This is based on the callwheel implementation done by Artur Grabowski and Thomas Nordin for OpenBSD, which is more efficient in several ways than the callwheel implementation it replaces. It has been adapted to our pre-existing callout API, and also provides the slightly more efficient (and much more intuitive) API (adapted to the callout_*() naming scheme) that the OpenBSD version provides.
Among other things, this shaves a bunch of cycles off rescheduling-in-the-future a callout which is already scheduled, which is the common case for TCP timers (notably REXMT and KEEP).
The API has been simplified a bit, as well. The (very confusing to a good many people) "ACTIVE" state for callouts has gone away. There is now only "PENDING" (scheduled to fire in the future) and "EXPIRED" (has fired, and the function called).
Kernel version bump not done; we'll ride the 1.6N bump that happened with the malloc(9) change.
|
1.6.2.5 |
| 10-Nov-2005 |
skrll | Sync with HEAD. Here we go again...
|
1.6.2.4 |
| 04-Mar-2005 |
skrll | Sync with HEAD.
Hi Perry!
|
1.6.2.3 |
| 21-Sep-2004 |
skrll | Fix the sync with head I botched.
|
1.6.2.2 |
| 18-Sep-2004 |
skrll | Sync with HEAD.
|
1.6.2.1 |
| 03-Aug-2004 |
skrll | Sync with HEAD
|
1.13.10.1 |
| 19-Mar-2005 |
yamt | sync with head. xen and whitespace. xen part is not finished.
|
1.13.8.1 |
| 29-Apr-2005 |
kent | sync with -current
|
1.16.2.6 |
| 21-Jan-2008 |
yamt | sync with head
|
1.16.2.5 |
| 07-Dec-2007 |
yamt | sync with head
|
1.16.2.4 |
| 15-Nov-2007 |
yamt | sync with head.
|
1.16.2.3 |
| 27-Oct-2007 |
yamt | sync with head.
|
1.16.2.2 |
| 03-Sep-2007 |
yamt | sync with head.
|
1.16.2.1 |
| 26-Feb-2007 |
yamt | sync with head.
|
1.17.22.2 |
| 10-Dec-2006 |
yamt | sync with head.
|
1.17.22.1 |
| 22-Oct-2006 |
yamt | sync with head
|
1.17.20.7 |
| 09-Feb-2007 |
ad | Work around a gcc code generation bug on alpha. From mhitch.
|
1.17.20.6 |
| 06-Feb-2007 |
ad | Fix compile on m68k.
|
1.17.20.5 |
| 27-Jan-2007 |
ad | Rename some functions to better describe what they do.
|
1.17.20.4 |
| 19-Jan-2007 |
yamt | fix a modify-after-free problem in softclock(): a callout handler can free the callout.
|
1.17.20.3 |
| 29-Dec-2006 |
ad | Checkpoint work in progress.
|
1.17.20.2 |
| 20-Oct-2006 |
ad | Put 'volatile' in the right place.
|
1.17.20.1 |
| 19-Sep-2006 |
ad | - If the callout is running on another CPU, spin before stopping or rescheduling it.
- Use mutexes.
|
1.20.2.1 |
| 27-Feb-2007 |
yamt | - sync with head.
- move sched_changepri back to kern_synch.c as it doesn't know PPQ anymore.
|
1.21.6.1 |
| 11-Jul-2007 |
mjf | Sync with head.
|
1.21.4.6 |
| 01-Nov-2007 |
ad | - Fix interactivity problems under high load. Because soft interrupts are being stacked on top of regular LWPs, more often than not aston() was being called on a soft interrupt thread instead of a user thread, meaning that preemption was not happening on EOI.
- Don't use bool in a couple of data structures. Sub-word writes are not always atomic and may clobber other fields in the containing word.
- For SCHED_4BSD, make p_estcpu per thread (l_estcpu). Rework how the dynamic priority level is calculated - it's much better behaved now.
- Kill the l_usrpri/l_priority split now that priorities are no longer directly assigned by tsleep(). There are three fields describing LWP priority:
l_priority: Dynamic priority calculated by the scheduler. This does not change for kernel/realtime threads, and always stays within the correct band. Eg for timeshared LWPs it never moves out of the user priority range. This is basically what l_usrpri was before.
l_inheritedprio: Lent to the LWP due to priority inheritance (turnstiles).
l_kpriority: A boolean value set true the first time an LWP sleeps within the kernel. This indicates that the LWP should get a priority boost as compensation for blocking. lwp_eprio() now does the equivalent of sched_kpri() if the flag is set. The flag is cleared in userret().
- Keep track of scheduling class (OTHER, FIFO, RR) in struct lwp, and use this to make decisions in a few places where we previously tested for a kernel thread.
- Partially fix itimers and usr/sys/intr time accounting in the presence of software interrupts.
- Use kthread_create() to create idle LWPs. Move priority definitions from the various modules into sys/param.h.
- newlwp -> lwp_create
|
1.21.4.5 |
| 20-Aug-2007 |
ad | Sync with HEAD.
|
1.21.4.4 |
| 15-Jul-2007 |
ad | Sync with head.
|
1.21.4.3 |
| 01-Jul-2007 |
ad | - Alter the callout ABI to be stable in the event of changes to the internal data structures.
- Add a flags argument to callout_init(). If CALLOUT_MPSAFE is specified, the kernel lock is not taken when the callout is executed.
- Only synchronize with running callouts from callout_stop(). If the callout is MP safe, then sleep until it has completed. If it is not MP safe (and thus should not block) use the kernel lock to provide synchronization. Both need to be verified as deadlock free, but at this time I think they are OK.
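Under this scheme, initialization chooses the synchronization model (sketch; the mysoftc names are invented):

#include <sys/callout.h>

struct mysoftc {
	callout_t	sc_mpsafe_ch;
	callout_t	sc_legacy_ch;
};

static void
my_attach(struct mysoftc *sc)
{
	/* MP-safe: the handler runs without the kernel lock and does
	 * its own locking; stopping may sleep until it completes. */
	callout_init(&sc->sc_mpsafe_ch, CALLOUT_MPSAFE);

	/* Legacy: the kernel lock is held around the handler and
	 * provides the synchronization instead. */
	callout_init(&sc->sc_legacy_ch, 0);
}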
|
1.21.4.2 |
| 16-Jun-2007 |
ad | - Make some of the callout macros into functions proper.
- Acquire kernel_lock in softclock().
|
1.21.4.1 |
| 21-Mar-2007 |
ad | GC the simplelock/spinlock debugging stuff.
|
1.24.2.1 |
| 15-Aug-2007 |
skrll | Sync with HEAD.
|
1.26.8.2 |
| 01-Aug-2007 |
ad | callout_softclock: add a couple of assertions.
|
1.26.8.1 |
| 01-Aug-2007 |
ad | file kern_timeout.c was added on branch matt-mips64 on 2007-08-01 23:23:42 +0000
|
1.26.6.1 |
| 14-Oct-2007 |
yamt | sync with head.
|
1.26.4.2 |
| 09-Jan-2008 |
matt | sync with HEAD
|
1.26.4.1 |
| 06-Nov-2007 |
matt | sync with HEAD
|
1.26.2.4 |
| 09-Dec-2007 |
jmcneill | Sync with HEAD.
|
1.26.2.3 |
| 27-Nov-2007 |
joerg | Sync with HEAD. amd64 Xen support needs testing.
|
1.26.2.2 |
| 06-Nov-2007 |
joerg | Sync with HEAD.
|
1.26.2.1 |
| 26-Oct-2007 |
joerg | Sync with HEAD.
Follow the merge of pmap.c on i386 and amd64 and move pmap_init_tmp_pgtbl into arch/x86/x86/pmap.c. Modify the ACPI wakeup code to restore CR4 before jumping back into kernel space as the large page option might cover that.
|
1.27.4.3 |
| 18-Feb-2008 |
mjf | Sync with HEAD.
|
1.27.4.2 |
| 08-Dec-2007 |
mjf | Sync with HEAD.
|
1.27.4.1 |
| 19-Nov-2007 |
mjf | Sync with HEAD.
|
1.27.2.1 |
| 13-Nov-2007 |
bouyer | Sync with HEAD
|
1.29.2.1 |
| 08-Dec-2007 |
ad | Sync with head.
|
1.30.4.1 |
| 08-Jan-2008 |
bouyer | Sync with HEAD
|
1.31.6.5 |
| 17-Jan-2009 |
mjf | Sync with HEAD.
|
1.31.6.4 |
| 28-Sep-2008 |
mjf | Sync with HEAD.
|
1.31.6.3 |
| 02-Jul-2008 |
mjf | Sync with HEAD.
|
1.31.6.2 |
| 02-Jun-2008 |
mjf | Sync with HEAD.
|
1.31.6.1 |
| 03-Apr-2008 |
mjf | Sync with HEAD.
|
1.35.2.2 |
| 04-Jun-2008 |
yamt | sync with head
|
1.35.2.1 |
| 18-May-2008 |
yamt | sync with head.
|
1.38.2.2 |
| 04-May-2009 |
yamt | sync with head.
|
1.38.2.1 |
| 16-May-2008 |
yamt | sync with head.
|
1.39.2.3 |
| 10-Oct-2008 |
skrll | Sync with HEAD.
|
1.39.2.2 |
| 18-Sep-2008 |
wrstuden | Sync with wrstuden-revivesa-base-2.
|
1.39.2.1 |
| 23-Jun-2008 |
wrstuden | Sync w/ -current. 34 merge conflicts to follow.
|
1.40.2.1 |
| 03-Jul-2008 |
simonb | Sync with head.
|
1.41.2.1 |
| 19-Oct-2008 |
haad | Sync with HEAD.
|
1.43.8.1 |
| 13-May-2009 |
jym | Sync with HEAD.
Commit is split, to avoid a "too many arguments" protocol error.
|
1.43.2.1 |
| 28-Apr-2009 |
skrll | Sync with HEAD.
|
1.44.4.1 |
| 05-Mar-2011 |
rmind | sync with head
|
1.45.18.2 |
| 03-Dec-2017 |
jdolecek | update from HEAD
|
1.45.18.1 |
| 20-Aug-2014 |
tls | Rebase to HEAD as of a few days ago.
|
1.45.8.1 |
| 22-May-2014 |
yamt | sync with head.
for a reference, the tree before this commit was tagged as yamt-pagecache-tag8.
this commit was split into small chunks to avoid a limitation of cvs. ("Protocol error: too many arguments")
|
1.46.2.1 |
| 18-May-2014 |
rmind | sync with head
|
1.47.6.3 |
| 28-Aug-2017 |
skrll | Sync with HEAD
|
1.47.6.2 |
| 27-Dec-2015 |
skrll | Sync with HEAD (as of 26th Dec)
|
1.47.6.1 |
| 06-Apr-2015 |
skrll | Sync with HEAD
|
1.52.2.1 |
| 26-Jan-2018 |
martin | Pull up following revision(s) (requested by ozaki-r in ticket #511):
sys/kern/kern_timeout.c: revision 1.54
sys/netinet6/nd6_nbr.c: revision 1.141
sys/netinet6/nd6_nbr.c: revision 1.144
sys/netinet/if_arp.c: revision 1.256
Fix a deadlock on callout_halt of nd6_dad_timer: we must not call callout_halt of nd6_dad_timer while holding nd6_dad_lock, because the lock is taken in nd6_dad_timer. Once softnet_lock goes away, we can pass the lock to callout_halt, but for now we cannot.
Make DAD destructions (MP-)safe with callout_stop: arp_dad_stoptimer and nd6_dad_stoptimer can be called with or without softnet_lock held, and unfortunately we have no easy way to statically know which, so it is hard to use callout_halt there. To address the situation, we use callout_stop to make the code safe. The new approach copes with the issue by delegating the destruction of a callout to the callout itself, which allows us not to wait for the callout to finish. This can be done thanks to the fact that DAD objects are separated from other data such as ifa. The approach is suggested by riastradh@. Proposed on tech-kern@ and tech-net@.
Sanity-check if interlock is held when it's passed.
|
1.54.4.3 |
| 21-Apr-2020 |
martin | Sync with HEAD
|
1.54.4.2 |
| 08-Apr-2020 |
martin | Merge changes from current as of 20200406
|
1.54.4.1 |
| 10-Jun-2019 |
christos | Sync with HEAD
|
1.54.2.1 |
| 28-Jul-2018 |
pgoyette | Sync with HEAD
|
1.57.2.1 |
| 25-Jan-2020 |
ad | Sync with head.
|
1.59.2.1 |
| 20-Apr-2020 |
bouyer | Sync with HEAD
|
1.73.2.1 |
| 27-Jun-2023 |
martin | Pull up following revision(s) (requested by pho in ticket #219):
sys/kern/kern_timeout.c: revision 1.74
sys/kern/kern_timeout.c: revision 1.75
sys/kern/kern_timeout.c: revision 1.76
callout(9): Fix panic() in callout_destroy() (kern/57226)
The culprit was callout_halt(). "(c->c_flags & CALLOUT_FIRED) != 0" wasn't the correct way to check if a callout is running. It failed to wait for a running callout to finish in the following scenario:
1. cpu0 initializes a callout and schedules it.
2. cpu0 invokes callout_softclock() and fires the callout, setting the flag CALLOUT_FIRED.
3. The callout invokes callout_schedule() to re-schedule itself.
4. callout_schedule_locked() clears the flag CALLOUT_FIRED, and releases the lock.
5. Before the lock is re-acquired by callout_softclock(), cpu1 decides to destroy the callout. It first invokes callout_halt() to make sure the callout finishes running.
6. But since CALLOUT_FIRED has been cleared, callout_halt() thinks it's not running and therefore returns without invoking callout_wait().
7. cpu1 proceeds to invoke callout_destroy() while it's still running on cpu0. callout_destroy() detects that and panics.
callout(9): Tidy up the condition for "callout is running on another LWP". No functional changes.
callout(9): Delete the unused member cc_cancel from struct callout_cpu. I see no reason why it should be there, and believe it's a leftover from some old code.
|