History log of /src/sys/kern/kern_condvar.c |
Revision | | Date | Author | Comments |
1.63 |
| 02-Nov-2023 |
martin | Back out the following revisions on behalf of core:
sys/sys/lwp.h: revision 1.228
sys/sys/pipe.h: revision 1.40
sys/kern/uipc_socket.c: revision 1.306
sys/kern/kern_sleepq.c: revision 1.84
sys/rump/librump/rumpkern/locks_up.c: revision 1.13
sys/kern/sys_pipe.c: revision 1.165
usr.bin/fstat/fstat.c: revision 1.119
sys/rump/librump/rumpkern/locks.c: revision 1.87
sys/ddb/db_xxx.c: revision 1.78
sys/ddb/db_command.c: revision 1.187
sys/sys/condvar.h: revision 1.18
sys/ddb/db_interface.h: revision 1.42
sys/sys/socketvar.h: revision 1.166
sys/kern/uipc_syscalls.c: revision 1.209
sys/kern/kern_condvar.c: revision 1.60
Add cv_fdrestart() [...] Use cv_fdrestart() to implement fo_restart. Simplify/streamline pipes a little bit [...]
These changes have caused regressions and need to be debugged. The cv_fdrestart() addition needs more discussion.
|
1.62 |
| 15-Oct-2023 |
riastradh | kern_condvar.c: Sort includes. No functional change intended.
|
1.61 |
| 15-Oct-2023 |
riastradh | sys/lwp.h: Nix sys/syncobj.h dependency.
Remove it in ddb/db_syncobj.h too.
New sys/wchan.h defines wchan_t so that users need not pull in sys/syncobj.h to get it.
Sprinkle #include <sys/syncobj.h> in .c files where it is now needed.
|
1.60 |
| 13-Oct-2023 |
ad | Add cv_fdrestart() (better name suggestions welcome):
Like cv_broadcast(), but make any LWPs that share the same file descriptor table as the caller return ERESTART when resuming. Used to dislodge LWPs waiting for I/O that prevent a file descriptor from being closed, without upsetting access to the file (not descriptor) made from another direction.
|
1.59 |
| 12-Oct-2023 |
ad | Comments.
|
1.58 |
| 08-Oct-2023 |
ad | Ensure that an LWP that has taken a legitimate wakeup never produces an error code from sleepq_block(). Then, it's possible to make cv_signal() work as expected and only ever wake a singular LWP.
|
1.57 |
| 04-Oct-2023 |
ad | Eliminate l->l_biglocks. Originally I think it had a use but these days a local variable will do.
|
1.56 |
| 23-Sep-2023 |
ad | - Simplify how priority boost for blocking in kernel is handled. Rather than setting it up at each site where we block, make it a property of syncobj_t. Then, do not hang onto the priority boost until userret(), drop it as soon as the LWP is out of the run queue and onto a CPU. Holding onto it longer is of questionable benefit.
- This allows two members of lwp_t to be deleted, and mi_userret() to be simplified a lot (next step: trim it down to a single conditional).
- While here, constify syncobj_t and de-inline a bunch of small functions like lwp_lock() which turn out not to be small after all (I don't know why, but atomic_*_relaxed() seem to provoke a compiler shitfit above and beyond what volatile does).
|
1.55 |
| 17-Jul-2023 |
riastradh | kern: New struct syncobj::sobj_name member for diagnostics.
XXX potential kernel ABI change -- not sure any modules actually use struct syncobj but it's hard to rule that out because sys/syncobj.h leaks into sys/lwp.h
|
1.54 |
| 29-Jun-2022 |
riastradh | sleepq(9): Pass syncobj through to sleepq_block.
Previously the usage pattern was:
	sleepq_enter(sq, l, lock);		/* locks l */
	...
	sleepq_enqueue(sq, ..., sobj, ...);	/* assumes l locked, sets l_syncobj */
	...
(*)	sleepq_block(...);			/* unlocks l */
As long as l remains locked from sleepq_enter to sleepq_block, l_syncobj is stable, and sleepq_block uses it via ktrcsw to determine whether the sleep is on a mutex in order to avoid creating ktrace context-switch records (which involves allocation which is forbidden in softint context, while taking and even sleeping for a mutex is allowed).
However, in turnstile_block, the logic at (*) also involves turnstile_lendpri, which sometimes unlocks and relocks l. At that point, another thread can swoop in and sleepq_remove l, which sets l_syncobj to sched_syncobj. If that happens, ktrcsw does what is forbidden -- tries to allocate a ktrace record for the context switch.
As an optimization, sleepq_block or turnstile_block could stop early if it detects that l_syncobj doesn't match -- we've already been requested to wake up at this point so there's no need to mi_switch. (And then it would be unnecessary to pass the syncobj through sleepq_block, because l_syncobj would remain stable.) But I'll leave that to another change.
Reported-by: syzbot+8b9d7b066c32dbcdc63b@syzkaller.appspotmail.com
|
1.53 |
| 01-Nov-2020 |
christos | PR/55664: Ruslan Nikolaev: Split out sleepq guts and turnstiles not used in rump into a separate header file. Add a sleepq_destroy() empty hook.
|
1.52 |
| 11-May-2020 |
riastradh | branches: 1.52.2; Remove timedwaitclock.
This did not fix the bug I hoped it would fix in futex, and needs more design thought. Might redo it somewhat differently later.
|
1.51 |
| 04-May-2020 |
riastradh | New timedwaitclock_setup.
C99 initializers would have been nice, but part of the struct is explicit parameters and part of the struct is implicit state, and -Wmissing-field-initializers can't discriminate between them (although for some reason it doesn't always fire!).
Instead, just do:
struct timedwaitclock T;
	timedwaitclock_setup(&T, timeout, clockid, flags, epsilon);
	while (...) {
		error = timedwaitclock_begin(&T, &timo);
		if (error)
			...
		error = waitwhatever(timo);
		timedwaitclock_end(&T);
		...
	}
|
1.50 |
| 03-May-2020 |
thorpej | Move timedwaitclock_begin() and timedwaitclock_end() to subr_time.c so they can be used by other things.
|
1.49 |
| 03-May-2020 |
riastradh | New cv_timedwaitclock, cv_timedwaitclock_sig.
Usage: given a struct timespec timeout copied from userland, along with a clockid and TIMER_* flags,
	error = cv_timedwaitclock(cv, lock, timeout, clockid, flags,
	    DEFAULT_TIMEOUT_EPSILON);
	if (error)
		/* fail */
If flags is relative (i.e., (flags & TIMER_ABSTIME) == 0), then this deducts the time spent waiting from timeout, so you can run it in a loop:
struct timespec timeout;
	error = copyin(SCARG(uap, timeout), &timeout, sizeof timeout);
	if (error)
		return error;

	mutex_enter(lock);
	while (!ready()) {
		error = cv_timedwaitclock_sig(cv, lock, &timeout,
		    SCARG(uap, clockid), SCARG(uap, flags),
		    DEFAULT_TIMEOUT_EPSILON);
		if (error)
			break;
	}
	mutex_exit(lock);
CAVEAT: If the system call is interrupted by a signal with SA_RESTART so cv_timedwaitclock_sig fails with ERESTART, then the system call will be restarted with the _original_ relative timeout, not counting the time that was already spent waiting. This is a problem but it's not a problem I want to deal with at the moment.
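The time-deduction step described above can be sketched in userland C. This is a hypothetical helper for illustration, not the kernel implementation: after each wait, the elapsed time is subtracted from the caller's relative timeout, clamping at zero so the remaining budget never goes negative.

```c
#include <time.h>

/*
 * Sketch of the relative-timeout bookkeeping: deduct the time already
 * spent waiting ('elapsed') from the caller's remaining timeout, so a
 * wait loop can be re-entered with the reduced budget.  Clamps at zero
 * rather than ever leaving a negative timeout behind.
 */
void
timeout_deduct(struct timespec *timeout, const struct timespec *elapsed)
{
	timeout->tv_sec -= elapsed->tv_sec;
	timeout->tv_nsec -= elapsed->tv_nsec;
	if (timeout->tv_nsec < 0) {
		/* borrow one second into the nanosecond field */
		timeout->tv_sec -= 1;
		timeout->tv_nsec += 1000000000L;
	}
	if (timeout->tv_sec < 0) {
		/* budget exhausted: clamp instead of going negative */
		timeout->tv_sec = 0;
		timeout->tv_nsec = 0;
	}
}
```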
|
1.48 |
| 03-May-2020 |
riastradh | Fix edge cases in cv_timedwaitbt, cv_timedwaitbt_sig.
- If the timeout is exactly zero, fail immediately with EWOULDBLOCK.
- If the timeout is just so small it would be rounded to zero ticks, make sure to wait at least one tick.
- Make sure we never return with a negative timeout left.
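The three rounding rules above amount to the following userland sketch (an assumed HZ and a hypothetical helper name, not the kernel code): an exactly-exhausted budget fails immediately with EWOULDBLOCK, and a nonzero budget too small to represent is rounded up to one tick.

```c
#include <errno.h>

#define HZ 100	/* assumed tick rate, for illustration only */

/*
 * Convert a remaining timeout in nanoseconds to clock ticks, per the
 * edge-case rules described above:
 *   - zero (or already-negative) budget -> EWOULDBLOCK immediately
 *   - nonzero but rounds to 0 ticks     -> wait at least one tick
 * Returns 0 and stores the tick count via ticksp, or an errno value.
 */
int
timeout_to_ticks(long long ns, int *ticksp)
{
	long long ticks;

	if (ns <= 0)
		return EWOULDBLOCK;
	ticks = ns * HZ / 1000000000LL;
	if (ticks == 0)
		ticks = 1;	/* too small to represent: round up */
	*ticksp = (int)ticks;
	return 0;
}
```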
|
1.47 |
| 19-Apr-2020 |
ad | Set LW_SINTR earlier so it doesn't pose a problem for doing interruptible waits with turnstiles (not currently done).
|
1.46 |
| 13-Apr-2020 |
maxv | hardclock_ticks -> getticks()
|
1.45 |
| 10-Apr-2020 |
ad | - Make this needed sequence always work for condvars, by not touching the CV again after wakeup. Previously it could panic because cv_signal() could be called by cv_wait_sig() + others:
	cv_broadcast(cv);
	cv_destroy(cv);
- In support of the above, if an LWP doing a timed wait is awoken by cv_broadcast() or cv_signal(), don't return an error if the timer fires after the fact, i.e. either succeed or fail, not both.
- Remove LOCKDEBUG code for CVs which never worked properly and is of questionable use.
|
1.44 |
| 26-Mar-2020 |
ad | branches: 1.44.2; Change sleepq_t from a TAILQ to a LIST and remove SOBJ_SLEEPQ_FIFO. Only select/poll used the FIFO method and that was for collisions which rarely occur. Shrinks sleep_t and condvar_t.
|
1.43 |
| 15-Feb-2020 |
ad | - List all of the syncobjs in syncobj.h. - Update a comment.
|
1.42 |
| 20-Nov-2019 |
ad | branches: 1.42.2; - Put back a microoptimisation that was accidentally removed. - Comments.
|
1.41 |
| 30-Jan-2018 |
ozaki-r | branches: 1.41.4; Apply C99-style struct initialization to syncobj_t
|
1.40 |
| 25-Dec-2017 |
ozaki-r | Apply C99-style struct initialization to lockops_t
|
1.39 |
| 12-Nov-2017 |
riastradh | Apply same treatment to cv_timedwaitbt.
|
1.38 |
| 12-Nov-2017 |
riastradh | Clarify interpretation of timeout/epsilon in cv_timedwaitbt.
|
1.37 |
| 03-Jul-2017 |
riastradh | Add cv_timedwaitbt, cv_timedwaitbt_sig.
Takes struct bintime maximum delay, and decrements it in place so that you can use it in a loop in case of spurious wakeup.
Discussed on tech-kern a couple years ago:
https://mail-index.netbsd.org/tech-kern/2015/03/23/msg018557.html
Added a parameter for expressing desired precision -- not currently interpreted, but intended for a future tickless kernel with a choice of high-resolution timers.
|
1.36 |
| 08-Jun-2017 |
chs | Allow cv_signal() immediately followed by cv_destroy(). This sequence is used by ZFS in a couple of places, and by supporting it natively we can undo our local ZFS changes that avoided it. Note that this is only legal when all of the waiters use cv_wait() and not any of the other variations, and lockdebug will catch any violations of this rule.
|
1.35 |
| 07-Aug-2015 |
uebayasi | branches: 1.35.10; o Don't include sys/sched.h. Scheduler-related operation is done by sleepq(9) via SOBJ_SLEEPQ_SORTED.
o Include sys/lwp.h instead of sys/proc.h.
|
1.34 |
| 25-Oct-2013 |
martin | branches: 1.34.6; Mark a diagnostic-only variable
|
1.33 |
| 14-Sep-2013 |
joerg | nodebug is only used with LOCKDEBUG
|
1.32 |
| 08-Mar-2013 |
apb | branches: 1.32.6; also comment on the meaning of timo=0 for cv_timedwait_sig.
|
1.31 |
| 08-Mar-2013 |
apb | Add comments saying that cv_timedwait and sleepq_block interpret timo = 0 as an infinite timeout. This is already documented in the cv_timedwait(9) man page, and there is no sleepq_block(9) man page.
|
1.30 |
| 27-Jul-2011 |
uebayasi | branches: 1.30.2; 1.30.12; These don't need uvm/uvm_extern.h.
|
1.29 |
| 14-Apr-2011 |
jym | Typo fix.
|
1.28 |
| 05-Dec-2009 |
pooka | branches: 1.28.4; 1.28.6; tsleep() on lbolt is now illegal. Convert cv_wakeup(&lbolt) to cv_broadcast(&lbolt) and get rid of the former.
|
1.27 |
| 21-Oct-2009 |
rmind | Remove uarea swap-out functionality:
- Addresses the issue described in PR/38828.
- Some simplification in threading and sleepq subsystems.
- Eliminates pmap_collect() and, as a side note, allows pmap optimisations.
- Eliminates XS_CTL_DATA_ONSTACK in scsipi code.
- Avoids a few scans on the LWP list and thus potentially long holds of proc_lock.
- Cuts ~1.5k lines of code. Reduces amd64 kernel size by ~4k.
- Removes __SWAP_BROKEN cases.
Tested on x86, mips, acorn32 (thanks <mpumford>) and partly tested on acorn26 (thanks to <bjh21>).
Discussed on <tech-kern>, reviewed by <ad>.
|
1.26 |
| 19-Dec-2008 |
thorpej | Make condvars really opaque -- hide the wait message member from consumers of the API.
|
1.25 |
| 16-Jun-2008 |
ad | branches: 1.25.4; PR kern/38761: new (?) race in buffer cache code
Back out the workaround from cv_has_waiters(), which is no longer needed. Removal was missed earlier.
|
1.24 |
| 16-Jun-2008 |
ad | PR kern/38761: new (?) race in buffer cache code
- Back out the previous workaround now that the sleep queue code has been changed to never let the queue become empty if there are valid waiters.
- Use sleepq_hashlock() to improve clarity.
- Sprinkle some assertions.
|
1.23 |
| 15-Jun-2008 |
chris | Fix for biowait hangs, and possibly other condvar hangs. Also should fix PR kern/38761.
The condvar code must access the sleepq with the sleepq lock held; reading it without that lock is what was causing inconsistent sleepq state to be observed.
This is because some accesses to the sleepq don't come via the cv code, but call directly into sleepq_changepri and sleepq_lendpri, which take the sleepq lock and remove then re-insert LWPs into the sleepq.
Running a build.sh with -j8 now completes on my quad-core; also tested by Simon@ on an 8-core server and matt@ on a quad-core.
I believe there is room to be more efficient with this, as we now take the sleepq lock for all cv_broadcast and cv_signal calls. I'll look into this and post a diff to tech-kern.
|
1.22 |
| 04-Jun-2008 |
ad | branches: 1.22.2; Disable the wakeup assertion for the time being because the tty code triggers it.
|
1.21 |
| 31-May-2008 |
ad | Fix wmesg for !LOCKDEBUG.
|
1.20 |
| 31-May-2008 |
ad | - Give each condition variable its own sleep queue head. Helps the system to scale more gracefully when there are thousands of active threads. Proposed on tech-kern@.
- Use LOCKDEBUG to catch some errors in the use of condition variables:
- freeing an active CV
- re-initializing an active CV
- using multiple distinct mutexes during concurrent waits
- not holding the interlocking mutex when calling cv_broadcast/cv_signal
- waking waiters and destroying the CV before they run and exit it
|
1.19 |
| 26-May-2008 |
ad | Broken assertion.
|
1.18 |
| 26-May-2008 |
ad | Take the mutex pointer and waiters count out of sleepq_t: the values can be or are maintained elsewhere. Now a sleepq_t is just a TAILQ_HEAD.
|
1.17 |
| 28-Apr-2008 |
martin | branches: 1.17.2; Remove clause 3 and 4 from TNF licenses
|
1.16 |
| 17-Mar-2008 |
ad | branches: 1.16.2; 1.16.4; Add a boolean parameter to syncobj_t::sobj_unsleep. If true we want the existing behaviour: the unsleep method unlocks and wakes the swapper if needs be. If false, the caller is doing a batch operation and will take care of that later. This is kind of ugly, but it's difficult for the caller to know which lock to release in some situations.
|
1.15 |
| 05-Mar-2008 |
ad | - Add cv_is_valid(), for use in assertions. Performs basic sanity checks. - Add more assertions.
|
1.14 |
| 06-Nov-2007 |
ad | branches: 1.14.10; 1.14.14; Merge scheduler changes from the vmlocking branch. All discussed on tech-kern:
- Invert priority space so that zero is the lowest priority. Rearrange number and type of priority levels into bands. Add new bands like 'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
|
1.13 |
| 08-Oct-2007 |
ad | branches: 1.13.2; 1.13.4; Merge from vmlocking: relax an assertion if panicstr != NULL.
|
1.12 |
| 02-Aug-2007 |
ad | branches: 1.12.2; 1.12.4; 1.12.6; 1.12.8; cv_wakeup: the entire queue has to be searched, as we can't know how many waiters there are.
|
1.11 |
| 01-Aug-2007 |
ad | Resurrect cv_wakeup() and use it on lbolt. Should fix PR kern/36714 (background/foreground signal lossage in -current with various programs).
|
1.10 |
| 01-Aug-2007 |
ad | Improve assertions slightly. When awakening assert that the CV has not been destroyed.
|
1.9 |
| 09-Jul-2007 |
ad | branches: 1.9.2; Merge some of the less invasive changes from the vmlocking branch:
- kthread, callout, devsw API changes
- select()/poll() improvements
- miscellaneous MT safety improvements
|
1.8 |
| 17-May-2007 |
yamt | merge yamt-idlelwp branch. asked by core@. some ports still need work.
from doc/BRANCHES:
idle lwp, and some changes depending on it.
1. separate context switching and thread scheduling. (cf. gmcgarry_ctxsw)
2. implement idle lwp.
3. clean up related MD/MI interfaces.
4. make scheduler(s) modular.
|
1.7 |
| 29-Mar-2007 |
ad | Make cv_has_waiters() return type bool.
|
1.6 |
| 29-Mar-2007 |
ad | - cv_wakeup: remove this. There are ~zero situations where it's useful.
- cv_wait and friends: after resuming execution, check to see if we have been restarted as a result of cv_signal. If we have, but cannot take the wakeup (because of e.g. a pending Unix signal or timeout), then try to ensure that another LWP sees it. This is necessary because there may be multiple waiters, and at least one should take the wakeup if possible. Prompted by a discussion with pooka@.
- typedef struct lwp lwp_t;
- int -> bool, struct lwp -> lwp_t in a few places.
|
1.5 |
| 27-Feb-2007 |
yamt | branches: 1.5.2; 1.5.4; 1.5.6; typedef pri_t and use it instead of int and u_char.
|
1.4 |
| 26-Feb-2007 |
yamt | implement priority inheritance.
|
1.3 |
| 11-Feb-2007 |
yamt | branches: 1.3.2; 1.3.4; unwrap short lines.
|
1.2 |
| 09-Feb-2007 |
ad | Merge newlock2 to head.
|
1.1 |
| 20-Oct-2006 |
ad | branches: 1.1.2; file kern_condvar.c was initially added on branch newlock2.
|
1.1.2.7 |
| 09-Feb-2007 |
ad | - Change syncobj_t::sobj_changepri() to alter both the user priority and the effective priority of LWPs. How the effective priority is adjusted depends on the type of object. - Add a couple of comments to sched_kpri() and remrunqueue().
|
1.1.2.6 |
| 05-Feb-2007 |
ad | Redo previous to be less ugly.
|
1.1.2.5 |
| 03-Feb-2007 |
ad | - Require that cv_signal/cv_broadcast be called with the interlock held.
- Provide 'async' versions that don't need the interlock.
|
1.1.2.4 |
| 29-Dec-2006 |
ad | Checkpoint work in progress.
|
1.1.2.3 |
| 17-Nov-2006 |
ad | Fix an obvious sleep/wakeup bug introduced in previous.
|
1.1.2.2 |
| 17-Nov-2006 |
ad | Checkpoint work in progress.
|
1.1.2.1 |
| 20-Oct-2006 |
ad | Add a condition variable implementation (untested).
|
1.3.4.7 |
| 24-Mar-2008 |
yamt | sync with head.
|
1.3.4.6 |
| 17-Mar-2008 |
yamt | sync with head.
|
1.3.4.5 |
| 15-Nov-2007 |
yamt | sync with head.
|
1.3.4.4 |
| 27-Oct-2007 |
yamt | sync with head.
|
1.3.4.3 |
| 03-Sep-2007 |
yamt | sync with head.
|
1.3.4.2 |
| 26-Feb-2007 |
yamt | sync with head.
|
1.3.4.1 |
| 11-Feb-2007 |
yamt | file kern_condvar.c was added on branch yamt-lazymbuf on 2007-02-26 09:11:04 +0000
|
1.3.2.3 |
| 19-Apr-2007 |
ad | Pull up a change from the vmlocking branch:
- Ensure that LWPs going to sleep are on the sleep queue before releasing any interlocks. This is so that calls to turnstile_wakeup will have the correct locks held when adjusting priority. Avoids another deadlock.
- Assume that LWPs blocked on a turnstile will never be swapped out.
- LWPs blocking on a turnstile must have kernel priority, as they are consuming kernel resources.
|
1.3.2.2 |
| 15-Apr-2007 |
yamt | sync with head.
|
1.3.2.1 |
| 27-Feb-2007 |
yamt | - sync with head. - move sched_changepri back to kern_synch.c as it doesn't know PPQ anymore.
|
1.5.6.1 |
| 29-Mar-2007 |
reinoud | Pullup to -current
|
1.5.4.1 |
| 11-Jul-2007 |
mjf | Sync with head.
|
1.5.2.9 |
| 01-Nov-2007 |
ad | - Fix interactivity problems under high load. Because soft interrupts are being stacked on top of regular LWPs, more often than not aston() was being called on a soft interrupt thread instead of a user thread, meaning that preemption was not happening on EOI.
- Don't use bool in a couple of data structures. Sub-word writes are not always atomic and may clobber other fields in the containing word.
- For SCHED_4BSD, make p_estcpu per thread (l_estcpu). Rework how the dynamic priority level is calculated - it's much better behaved now.
- Kill the l_usrpri/l_priority split now that priorities are no longer directly assigned by tsleep(). There are three fields describing LWP priority:
l_priority: Dynamic priority calculated by the scheduler. This does not change for kernel/realtime threads, and always stays within the correct band. Eg for timeshared LWPs it never moves out of the user priority range. This is basically what l_usrpri was before.
l_inheritedprio: Lent to the LWP due to priority inheritance (turnstiles).
l_kpriority: A boolean value set true the first time an LWP sleeps within the kernel. This indicates that the LWP should get a priority boost as compensation for blocking. lwp_eprio() now does the equivalent of sched_kpri() if the flag is set. The flag is cleared in userret().
- Keep track of scheduling class (OTHER, FIFO, RR) in struct lwp, and use this to make decisions in a few places where we previously tested for a kernel thread.
- Partially fix itimers and usr/sys/intr time accounting in the presence of software interrupts.
- Use kthread_create() to create idle LWPs. Move priority definitions from the various modules into sys/param.h.
- newlwp -> lwp_create
|
1.5.2.8 |
| 18-Oct-2007 |
ad | Update for soft interrupt changes. See kern_softint.c 1.1.2.17 for details.
|
1.5.2.7 |
| 20-Aug-2007 |
ad | Sync with HEAD.
|
1.5.2.6 |
| 15-Jul-2007 |
ad | Sync with head.
|
1.5.2.5 |
| 09-Jul-2007 |
ad | KASSERT((l->l_flag & LW_INTR) == 0) -> KASSERT((l->l_flag & LW_INTR) == 0 || panicstr != NULL)
|
1.5.2.4 |
| 17-Jun-2007 |
ad | - Increase the number of thread priorities from 128 to 256. How the space is set up is to be revisited.
- Implement soft interrupts as kernel threads. A generic implementation is provided, with hooks for fast-path MD code that can run the interrupt threads over the top of other threads executing in the kernel.
- Split vnode::v_flag into three fields, depending on how the flag is locked (by the interlock, by the vnode lock, by the file system).
- Miscellaneous locking fixes and improvements.
|
1.5.2.3 |
| 10-Apr-2007 |
ad | - Ensure that LWPs going to sleep are on the sleep queue and so have their syncobj pointer updated, so that calls to turnstile_wakeup will have the correct locks held when adjusting the current LWP's priority. Avoids another deadlock.
- Assume that LWPs blocked on a turnstile will never be swapped out.
- LWPs blocking on a turnstile must have kernel priority, as they are consuming kernel resources.
|
1.5.2.2 |
| 10-Apr-2007 |
ad | Sync with head.
|
1.5.2.1 |
| 21-Mar-2007 |
ad | GC the simplelock/spinlock debugging stuff.
|
1.9.2.1 |
| 15-Aug-2007 |
skrll | Sync with HEAD.
|
1.12.8.2 |
| 02-Aug-2007 |
ad | cv_wakeup: the entire queue has to be searched, as we can't know how many waiters there are.
|
1.12.8.1 |
| 02-Aug-2007 |
ad | file kern_condvar.c was added on branch matt-mips64 on 2007-08-02 22:01:41 +0000
|
1.12.6.1 |
| 14-Oct-2007 |
yamt | sync with head.
|
1.12.4.2 |
| 23-Mar-2008 |
matt | sync with HEAD
|
1.12.4.1 |
| 06-Nov-2007 |
matt | sync with HEAD
|
1.12.2.2 |
| 06-Nov-2007 |
joerg | Sync with HEAD.
|
1.12.2.1 |
| 26-Oct-2007 |
joerg | Sync with HEAD.
Follow the merge of pmap.c on i386 and amd64 and move pmap_init_tmp_pgtbl into arch/x86/x86/pmap.c. Modify the ACPI wakeup code to restore CR4 before jumping back into kernel space as the large page option might cover that.
|
1.13.4.1 |
| 19-Nov-2007 |
mjf | Sync with HEAD.
|
1.13.2.1 |
| 13-Nov-2007 |
bouyer | Sync with HEAD
|
1.14.14.5 |
| 17-Jan-2009 |
mjf | Sync with HEAD.
|
1.14.14.4 |
| 29-Jun-2008 |
mjf | Sync with HEAD.
|
1.14.14.3 |
| 05-Jun-2008 |
mjf | Sync with HEAD.
Also fix build.
|
1.14.14.2 |
| 02-Jun-2008 |
mjf | Sync with HEAD.
|
1.14.14.1 |
| 03-Apr-2008 |
mjf | Sync with HEAD.
|
1.14.10.1 |
| 24-Mar-2008 |
keiichi | sync with head.
|
1.16.4.3 |
| 11-Mar-2010 |
yamt | sync with head
|
1.16.4.2 |
| 04-May-2009 |
yamt | sync with head.
|
1.16.4.1 |
| 16-May-2008 |
yamt | sync with head.
|
1.16.2.3 |
| 17-Jun-2008 |
yamt | sync with head.
|
1.16.2.2 |
| 04-Jun-2008 |
yamt | sync with head
|
1.16.2.1 |
| 18-May-2008 |
yamt | sync with head.
|
1.17.2.1 |
| 23-Jun-2008 |
wrstuden | Sync w/ -current. 34 merge conflicts to follow.
|
1.22.2.1 |
| 18-Jun-2008 |
simonb | Sync with head.
|
1.25.4.1 |
| 19-Jan-2009 |
skrll | Sync with HEAD.
|
1.28.6.1 |
| 06-Jun-2011 |
jruoho | Sync with HEAD.
|
1.28.4.1 |
| 21-Apr-2011 |
rmind | sync with head
|
1.30.12.3 |
| 03-Dec-2017 |
jdolecek | update from HEAD
|
1.30.12.2 |
| 20-Aug-2014 |
tls | Rebase to HEAD as of a few days ago.
|
1.30.12.1 |
| 23-Jun-2013 |
tls | resync from head
|
1.30.2.1 |
| 22-May-2014 |
yamt | sync with head.
for a reference, the tree before this commit was tagged as yamt-pagecache-tag8.
this commit was split into small chunks to avoid a limitation of cvs ("Protocol error: too many arguments").
|
1.32.6.1 |
| 18-May-2014 |
rmind | sync with head
|
1.34.6.2 |
| 28-Aug-2017 |
skrll | Sync with HEAD
|
1.34.6.1 |
| 22-Sep-2015 |
skrll | Sync with HEAD
|
1.35.10.1 |
| 13-Jan-2018 |
snj | Pull up following revision(s) (requested by ozaki-r in ticket #495):
lib/librumpuser/rumpfiber.c: revision 1.13
lib/librumpuser/rumpuser_pth.c: revision 1.46
lib/librumpuser/rumpuser_pth_dummy.c: revision 1.18
sys/kern/kern_condvar.c: revision 1.40
sys/kern/kern_lock.c: revision 1.161
sys/kern/kern_mutex.c: revision 1.68
sys/kern/kern_rwlock.c: revision 1.48
sys/rump/include/rump/rumpuser.h: revision 1.115
sys/rump/librump/rumpkern/locks.c: revision 1.76-1.79
Apply C99-style struct initialization to lockops_t
--
Tweak LOCKDEBUG macros (NFC)
--
Distinguish spin mutexes and adaptive mutexes on rump kernels for LOCKDEBUG. Formerly rump kernels treated the two types of mutexes as both adaptive for LOCKDEBUG, for some reason. Now we can detect violations of mutex restrictions on rump kernels, such as taking an adaptive mutex while holding a spin mutex, just as on normal kernels.
--
rump: check that the mutex is surely owned by the caller in mutex_exit. Unlocking a not-owned mutex wasn't detected well (it could detect that the mutex is not held by anyone, but that's not enough). Let's check it (the check is the same as a normal kernel's mutex). If LOCKDEBUG is enabled, hand the check over to LOCKDEBUG because it can provide better debugging information.
|
1.41.4.3 |
| 21-Apr-2020 |
martin | Sync with HEAD
|
1.41.4.2 |
| 13-Apr-2020 |
martin | Mostly merge changes from HEAD upto 20200411
|
1.41.4.1 |
| 08-Apr-2020 |
martin | Merge changes from current as of 20200406
|
1.42.2.1 |
| 29-Feb-2020 |
ad | Sync with head.
|
1.44.2.1 |
| 20-Apr-2020 |
bouyer | Sync with HEAD
|
1.52.2.1 |
| 14-Dec-2020 |
thorpej | Sync w/ HEAD.
|