History log of /src/sys/kern/kern_lock.c
Revision | Date | Author | Comments
1.188 |
| 14-Jan-2024 |
andvar | Surround db_stacktrace() with "#ifdef DDB" check.
Fixes LOCKDEBUG enabled build without DDB option.
|
1.187 |
| 04-Oct-2023 |
ad | Eliminate l->l_ncsw and l->l_nivcsw. From memory, I think they were added before we had per-LWP struct rusage; the same is now tracked there.
|
1.186 |
| 07-Jul-2023 |
riastradh | Revert unintentional changes to kern_lock.c in previous commit.
|
1.185 |
| 07-Jul-2023 |
riastradh | heartbeat(9): Test whether curcpu is stable, not kpreempt_disabled.
kpreempt_disabled worked for my testing because I tested on aarch64, which doesn't have kpreemption.
XXX Should move curcpu_stable() to somewhere that other things can use it.
|
1.184 |
| 09-Apr-2023 |
riastradh | ASSERT_SLEEPABLE(9): Micro-optimize this a little bit.
This convinces gcc to do less -- make a smaller stack frame, compute fewer conditional moves in favour of predicted-not-taken branches -- in the fast path where we are sleepable as the caller expects.
Wasn't able to convince it to do the ncsw loop with a predicted-not-taken branch, but let's leave the __predict_false in there anyway because it's still a good prediction.
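A hedged illustration of the branch-hint pattern this entry describes (assumed shape only, not the actual kern_lock.c code; the `reason' variable follows the assert_sleepable() excerpt quoted under rev. 1.173 below):

    /* Keep the sleepable fast path cheap: the failure branch is
     * annotated as predicted-not-taken. */
    if (__predict_false(reason != NULL))
        panic("assert_sleepable: %s", reason);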
|
1.183 |
| 23-Feb-2023 |
riastradh | KERNEL_LOCK(9): Minor tweaks to ci->ci_biglock_wanted access.
1. Use atomic_load_relaxed to read ci->ci_biglock_wanted from another CPU, for clarity and to avoid the appearance of data races in thread sanitizers. (Reading ci->ci_biglock_wanted on the local CPU need not be atomic because no other CPU can be writing to it.)
2. Use atomic_store_relaxed to update ci->ci_biglock_wanted when we start to spin, to avoid the appearance of data races.
3. Add comments to explain what's going on and cross-reference the specific matching membars in mutex_vector_enter.
related to PR kern/57240
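A minimal sketch of points 1 and 2 above, assuming the usual sys/atomic.h API (illustrative only; the real code and its cross-referencing comments live in kern_lock.c):

    /* 1. Read another CPU's ci_biglock_wanted: atomic so sanitizers
     *    see no data race; relaxed because no ordering is needed. */
    owant = atomic_load_relaxed(&ci->ci_biglock_wanted);

    /* 2. On the local CPU, announce that we have started to spin. */
    atomic_store_relaxed(&curcpu()->ci_biglock_wanted, curlwp);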
|
1.182 |
| 27-Jan-2023 |
ozaki-r | Sprinkle __predict_{true,false} for panicstr checks
|
1.181 |
| 26-Oct-2022 |
riastradh | branches: 1.181.2; kern/kern_lock.c: We get start_init_exec from sys/kernel.h now.
|
1.180 |
| 13-Sep-2022 |
riastradh | KERNEL_LOCK(9): Avoid spinning out until 10sec have passed.
This means we'll never spin out if the hardclock timer is stuck. But the hardclock timer never runs with the kernel lock held itself, so it's not immediately clear that's important.
|
1.179 |
| 13-Sep-2022 |
riastradh | KERNEL_LOCK(9): Restore backoff while spinning in !LOCKDEBUG case.
When the spinout logic was put under LOCKDEBUG among a series of other changes that got reverted, the backoff was inadvertently made LOCKDEBUG-only too.
|
1.178 |
| 20-Aug-2022 |
riastradh | KERNEL_LOCK(9): Limit ipi trace diagnostic to after init has started.
|
1.177 |
| 16-Aug-2022 |
riastradh | KERNEL_LOCK(9): Fix previous for non-LOCKDEBUG builds.
|
1.176 |
| 16-Aug-2022 |
riastradh | KERNEL_LOCK(9): Record kernel lock holder in fast path too.
|
1.175 |
| 16-Aug-2022 |
riastradh | KERNEL_LOCK(9): Need kpreempt_disable to ipi_send, oops.
|
1.174 |
| 16-Aug-2022 |
riastradh | KERNEL_LOCK(9): Send an IPI to print holder's stack trace on spinout.
|
1.173 |
| 31-Oct-2021 |
skrll | Revert the 2015 change I made that allowed sleeping in the idle lwp if it wasn't running yet, e.g. in cpu_hatch
---
 sys/kern/kern_lock.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/sys/kern/kern_lock.c b/sys/kern/kern_lock.c
index 40557427de86..c0c9d8adaf9e 100644
--- a/sys/kern/kern_lock.c
+++ b/sys/kern/kern_lock.c
@@ -89,8 +89,7 @@ assert_sleepable(void)
 	} while (pctr != lwp_pctr());
 
 	reason = NULL;
-	if (idle && !cold &&
-	    kcpuset_isset(kcpuset_running, cpu_index(curcpu()))) {
+	if (idle && !cold) {
 		reason = "idle";
 	}
 	if (cpu_intr_p()) {
-- 
2.25.1
|
1.172 |
| 22-Dec-2020 |
ad | Comments.
|
1.171 |
| 02-May-2020 |
martin | branches: 1.171.2; Fix inverted condition in r1.136 - we do want LOCKDEBUG spinouts of the kernel lock to assert as soon as we have userland running - not in the early boot phase (where firmware loading and device init could take a long time).
|
1.170 |
| 08-Mar-2020 |
ad | Kill off kernel_lock_plug_leak(), and go back to dropping kernel_lock in exit1(), since there seems little hope of finding the leaking code any time soon. Can still be caught with LOCKDEBUG.
|
1.169 |
| 10-Feb-2020 |
christos | Put back the delay hooks which were deleted before. Without them VirtualBox spins out.
|
1.168 |
| 27-Jan-2020 |
ad | Add a kernel_lock_plug_leak() that drops any holds and tries to identify the baddy.
|
1.167 |
| 24-Jan-2020 |
ad | Carefully put kernel_lock back the way it was, and add a comment hinting that changing it is not a good idea, and hopefully nobody will ever try to change it ever again.
|
1.166 |
| 22-Jan-2020 |
ad | - DIAGNOSTIC: check for leaked kernel_lock in mi_switch().
- Now that ci_biglock_wanted is set later, explicitly disable preemption while acquiring kernel_lock. It was blocked in a roundabout way previously.
Reported-by: syzbot+43111d810160fb4b978b@syzkaller.appspotmail.com
Reported-by: syzbot+f5b871bd00089bf97286@syzkaller.appspotmail.com
Reported-by: syzbot+cd1f15eee5b1b6d20078@syzkaller.appspotmail.com
Reported-by: syzbot+fb945a331dabd0b6ba9e@syzkaller.appspotmail.com
Reported-by: syzbot+53a0c2342b361db25240@syzkaller.appspotmail.com
Reported-by: syzbot+552222a952814dede7d1@syzkaller.appspotmail.com
Reported-by: syzbot+c7104a72172b0f9093a4@syzkaller.appspotmail.com
Reported-by: syzbot+efbd30c6ca0f7d8440e8@syzkaller.appspotmail.com
Reported-by: syzbot+330a421bd46794d8b750@syzkaller.appspotmail.com
|
1.165 |
| 17-Jan-2020 |
ad | kernel_lock:
- Defer setting ci_biglock_wanted for a bit, because if curlwp holds a mutex or rwlock, and otherlwp is spinning waiting for the mutex/rwlock, setting ci_biglock_wanted causes otherlwp to block to avoid deadlock. If the spin on kernel_lock is short there's no point causing trouble.
- Do exponential backoff.
- Put the spinout check under LOCKDEBUG to match the others.
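A minimal sketch of the exponential backoff described above, assuming the SPINLOCK_BACKOFF macros from sys/lock.h (illustrative only; the real loop also handles the LOCKDEBUG spinout check):

    u_int count = SPINLOCK_BACKOFF_MIN;

    while (!__cpu_simple_lock_try(kernel_lock)) {
        /* Spin `count' iterations, then double it, capped at
         * SPINLOCK_BACKOFF_MAX, so waiters progressively back off. */
        SPINLOCK_BACKOFF(count);
    }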
|
1.164 |
| 03-Dec-2019 |
riastradh | branches: 1.164.2; Use __insn_barrier to enforce ordering in l_ncsw loops.
(Only need ordering observable by interruption, not by other CPUs.)
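The loop shape in question, sketched from the assert_sleepable() excerpt quoted under rev. 1.173 below (a hedged sketch; the real code differs in detail). A compiler barrier is enough because the ordering only has to be visible to an interruption on this CPU, not to other CPUs:

    do {
        pctr = lwp_pctr();	/* preemption/ctxsw counter */
        __insn_barrier();
        /* ... checks that must notice an intervening switch ... */
        __insn_barrier();
    } while (pctr != lwp_pctr());	/* raced with a switch: retry */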
|
1.163 |
| 09-May-2019 |
ozaki-r | branches: 1.163.2; Avoid prepending a timestamp to lock debug outputs on ddb
Lock printer functions (lockops_t#lo_dump) use printf_nolog to print, but printf_nolog now prepends a timestamp which is unnecessary for ddb:
db{0}> show all locks/t
[Locks tracked through LWPs]
Locks held by an LWP (iperf):
Lock 0 (initialized at soinit)
lock address : 0xffffedeb84b06080 type : sleep/adaptive
initialized : 0xffffffff806d8c3f
shared holds : 0 exclusive: 1
shares wanted: 0 exclusive: 11
current cpu : 0 last held: 1
current lwp : 0xffffedeb849ff040 last held: 0xffffedeb7dfdb240
last locked* : 0xffffffff806d8335 unlocked : 0xffffffff806d8385
[ 79103.0868574] owner field : 0xffffedeb7dfdb240 wait/spin: 1/0
Fix it by passing a printer function to lo_dump functions, i.e., make the functions use db_printf on ddb.
|
1.162 |
| 09-May-2019 |
ozaki-r | Make _kernel_lock_dump static
|
1.161 |
| 25-Dec-2017 |
ozaki-r | branches: 1.161.4; Apply C99-style struct initialization to lockops_t
|
1.160 |
| 21-Nov-2017 |
ozaki-r | Implement debugging feature for pserialize(9)
The debugging feature detects violations of pserialize constraints. It causes a panic:
- if a context switch happens in a read section, or
- if a sleepable function is called in a read section.
The feature is enabled only if LOCKDEBUG is on.
Discussed on tech-kern@
|
1.159 |
| 16-Sep-2017 |
christos | more const
|
1.158 |
| 26-Jan-2017 |
christos | branches: 1.158.6; For LOCKDEBUG: Always provide the location of the caller of the lock as __func__, __LINE__.
|
1.157 |
| 11-Apr-2015 |
skrll | branches: 1.157.2; 1.157.4; Trailing whitespace
|
1.156 |
| 11-Apr-2015 |
skrll | Allow sleeping in the idle lwp if the cpu isn't running yet.
OK'ed by rmind a while ago.
|
1.155 |
| 14-Sep-2013 |
martin | branches: 1.155.4; 1.155.6; Move a CTASSERT to global scope (easiest way to avoid gcc 4.8.1 local unused typedef warnings)
|
1.154 |
| 27-Apr-2013 |
mlelstv | branches: 1.154.4; Revert change that allowed rw_tryenter(&lock, RW_READER) to recurse for vfs_busy(). This is no longer necessary.
|
1.153 |
| 30-Aug-2012 |
matt | branches: 1.153.2; Use __cacheline_aligned
|
1.152 |
| 27-Nov-2011 |
jmcneill | add KERNEL_LOCKED_P() macro
|
1.151 |
| 17-Jul-2011 |
joerg | branches: 1.151.2; Retire varargs.h support. Move machine/stdarg.h logic into MI sys/stdarg.h and expect compiler to provide proper builtins, defaulting to the GCC interface. lint still has a special fallback. Reduce abuse of _BSD_VA_LIST_ by defining __va_list by default and derive va_list as required by standards.
|
1.150 |
| 20-Dec-2009 |
mrg | remove dated and wrong comments about curlwp being NULL. _kernel_{,un}lock() always assume it is valid now.
|
1.149 |
| 17-Jul-2009 |
dyoung | Fix spelling: situatations -> situations.
|
1.148 |
| 23-May-2009 |
ad | - Add lwp_pctr(), get an LWP's preemption/ctxsw counter.
- Fix a preemption bug in CURCPU_IDLE_P() that can lead to a bogus assertion failure on DEBUG kernels.
- Fix MP/preemption races with timecounter detachment.
|
1.147 |
| 12-Nov-2008 |
ad | branches: 1.147.4; Remove LKMs and switch to the module framework, pass 1.
Proposed on tech-kern@.
|
1.146 |
| 02-Jul-2008 |
matt | branches: 1.146.2; 1.146.4; Switch from KASSERT to CTASSERT for those asserts testing sizes of types.
|
1.145 |
| 25-Jun-2008 |
pooka | Don't compile kern_lock for rump any more, it's no longer required. Allows us to get rid of the incorrect _RUMPKERNEL ifdefs outside sys/rump.
|
1.144 |
| 31-May-2008 |
ad | branches: 1.144.2; LOCKDEBUG:
- Tweak it so it can also catch common errors with condition variables. The change to kern_condvar.c is not included in this commit and will come later.
- Don't call kmem_alloc() if operating in interrupt context, just fail the allocation and disable debugging for the object. Makes it safe to do mutex_init/rw_init/cv_init in interrupt context, when running a LOCKDEBUG kernel.
|
1.143 |
| 27-May-2008 |
ad | Replace a couple of tsleep calls with cv_wait.
|
1.142 |
| 19-May-2008 |
ad | Reduce ifdefs due to MULTIPROCESSOR slightly.
|
1.141 |
| 06-May-2008 |
ad | branches: 1.141.2; Allow rw_tryenter(&lock, RW_READER) to recurse, for vfs_busy().
|
1.140 |
| 28-Apr-2008 |
martin | Remove clause 3 and 4 from TNF licenses
|
1.139 |
| 28-Apr-2008 |
ad | Add MI code to support in-kernel preemption. Preemption is deferred by one of the following:
- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable (see the sketch below).
- Holding the interrupt priority level above IPL_NONE.
Statistics on kernel preemption are reported via event counters, and where preemption is deferred for some reason, it's also reported via lockstat. The LWP priority at which preemption is triggered is tuneable via sysctl.
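For example, the second deferral method brackets a critical section like this (a generic sketch of the kpreempt_disable(9) API, not code from this commit):

    kpreempt_disable();
    /* ... touch per-CPU state; an involuntary preemption here could
     * migrate the LWP to another CPU mid-update ... */
    kpreempt_enable();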
|
1.138 |
| 27-Apr-2008 |
ad | Extend spl protection to keep all kernel_lock state in sync. There could have been problems before. This might help with the assertion failures seen on sparc64.
|
1.137 |
| 01-Apr-2008 |
drochner | branches: 1.137.2; 1.137.4; remove useless passing of the lwp from the KERNEL_LOCK() ABI (not the API; this would be easy as well). Agreed (a while ago) by ad.
|
1.136 |
| 30-Mar-2008 |
ad | Don't report kernel lock spinouts if init has not yet started. XXX This should be backed out when we are sure that the drivers are good citizens and configure nicely with interrupts enabled / the system running.
|
1.135 |
| 17-Mar-2008 |
yamt | - simplify ASSERT_SLEEPABLE.
- move it from proc.h to systm.h.
- add some more checks.
- make it a little more lkm friendly.
|
1.134 |
| 30-Jan-2008 |
ad | branches: 1.134.2; 1.134.6; Goodbye lockmgr().
|
1.133 |
| 26-Jan-2008 |
ad | lockstat: no longer track lockmgr() events.
|
1.132 |
| 10-Jan-2008 |
ad | - Fix a memory order problem with non-interlocked mutex release.
- Give kernel_lock its own cache line.
|
1.131 |
| 04-Jan-2008 |
ad | Start detangling lock.h from intr.h. This is likely to cause short term breakage, but the mess of dependencies has been regularly breaking the build recently anyhow.
|
1.130 |
| 02-Jan-2008 |
ad | Merge vmlocking2 to head.
|
1.129 |
| 06-Dec-2007 |
ad | branches: 1.129.4; Nothing uses shared -> exclusive upgrades any more, so remove the code. This is good since they are effectively the same as ...
lockmgr(&lock, LK_RELEASE);
lockmgr(&lock, LK_EXCLUSIVE);
.. and therefore don't behave as expected.
|
1.128 |
| 30-Nov-2007 |
ad | branches: 1.128.2; Use membar_*().
|
1.127 |
| 21-Nov-2007 |
yamt | make kmutex_t and krwlock_t smaller by killing lock id. ok'ed by Andrew Doran.
|
1.126 |
| 13-Nov-2007 |
ad | Remove KERNEL_LOCK_ASSERT_LOCKED, KERNEL_LOCK_ASSERT_UNLOCKED since the kernel_lock functions can be patched out at runtime now. Assertions are provided by the existing functions and by LOCKDEBUG_BARRIER.
|
1.125 |
| 06-Nov-2007 |
ad | Merge scheduler changes from the vmlocking branch. All discussed on tech-kern:
- Invert priority space so that zero is the lowest priority. Rearrange number and type of priority levels into bands. Add new bands like 'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
|
1.124 |
| 31-Oct-2007 |
pooka | branches: 1.124.2; Wrap parts dealing with kernel_lock behind #ifndef _RUMPKERNEL. I don't like doing this, but there's too much pain to get this file to compile clean due to how SPINLOCK_{BACKOFF,SPIN}_HOOK and mb_write() are spread out in weird weird places throughout MD code.
|
1.123 |
| 17-Oct-2007 |
ad | branches: 1.123.2; Use __SIMPLELOCK_LOCKED_P().
|
1.122 |
| 11-Oct-2007 |
ad | Merge from vmlocking:
- G/C spinlockmgr() and simple_lock debugging.
- Always include the kernel_lock functions, for LKMs.
- Slightly improved subr_lockdebug code.
- Keep sizeof(struct lock) the same if LOCKDEBUG.
|
1.121 |
| 10-Oct-2007 |
ad | Kill transferlockers() now that it's unused.
|
1.120 |
| 17-Sep-2007 |
ad | branches: 1.120.2; __FUNCTION__ -> __func__
|
1.119 |
| 10-Sep-2007 |
skrll | Merge nick-csl-alignment.
|
1.118 |
| 29-Jul-2007 |
pooka | branches: 1.118.4; 1.118.6; 1.118.8; Define a new lockmgr flag LK_RESURRECT which can be used in conjunction with LK_DRAIN. This has the same effect as LK_DRAIN except it atomically does NOT mark the lock as drained. This guarantees that when we got the lock, we were the last one currently waiting for the lock.
Use LK_DRAIN|LK_RESURRECT in vclean() to make sure there are no waiters for the lock. This should fix behaviour theorized to be caused by vfs_subr.c 1.289 which caused vclean() to run into completion and free the vnode before all lock-waiters had been processed. Should therefore fix the "simple_lock: unitialized lock" problems seen recently.
thanks to Juergen Hannken-Illjes for some analysis of the problem and Erik Bertelsen for testing
|
1.117 |
| 29-Jul-2007 |
ad | Be more forgiving if panicstr != NULL.
|
1.116 |
| 18-Jun-2007 |
ad | branches: 1.116.2; Re-apply rev 1.111: Always include kernel_lock so that LOCKDEBUG checks can find the symbol.
|
1.115 |
| 15-Jun-2007 |
ad | Nuke __HAVE_SPLBIGLOCK.
|
1.114 |
| 15-Jun-2007 |
ad | splstatclock, spllock -> splhigh
|
1.113 |
| 17-May-2007 |
yamt | merge yamt-idlelwp branch. asked by core@. some ports still need work.
from doc/BRANCHES:
idle lwp, and some changes depending on it.
1. separate context switching and thread scheduling. (cf. gmcgarry_ctxsw)
2. implement idle lwp.
3. clean up related MD/MI interfaces.
4. make scheduler(s) modular.
|
1.112 |
| 14-Apr-2007 |
perseant | Include the lwpid in the lock panic message, so we don't see silly messages like "lockmgr: pid 17, not exclusive lock holder 17 unlocking".
|
1.111 |
| 30-Mar-2007 |
ad | Always include kernel_lock so that LOCKDEBUG checks can find the symbol.
|
1.110 |
| 04-Mar-2007 |
christos | branches: 1.110.2; 1.110.4; 1.110.6; add a lockpanic function that prints more detailed error messages.
|
1.109 |
| 27-Feb-2007 |
yamt | typedef pri_t and use it instead of int and u_char.
|
1.108 |
| 22-Feb-2007 |
thorpej | TRUE -> true, FALSE -> false
|
1.107 |
| 20-Feb-2007 |
ad | kernel_lock():
- Fix error in previous.
- Call LOCKDEBUG_WANTLOCK() so the "exclusive wanted" count isn't off.
|
1.106 |
| 20-Feb-2007 |
ad | _kernel_lock(): we can recurse here if we take an interrupt while spinning. Don't double book the time spent with lockstat.
|
1.105 |
| 09-Feb-2007 |
ad | branches: 1.105.2; Merge newlock2 to head.
|
1.104 |
| 25-Dec-2006 |
ad | lockstat: improve reporting slightly, and fix a bug where the command could spin while resorting lists.
|
1.103 |
| 09-Dec-2006 |
chs | in lockstatus(), report LK_EXCLOTHER if LK_WANT_EXCL or LK_WANT_UPGRADE is set, since the thread that set either of those flags will be the next one to get the lock. fixes PR 35143.
|
1.102 |
| 01-Nov-2006 |
yamt | branches: 1.102.2; 1.102.4; remove some __unused from function parameters.
|
1.101 |
| 12-Oct-2006 |
christos | - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
|
1.100 |
| 30-Sep-2006 |
yamt | - KERNEL_LOCK_ASSERT_LOCKED: check cpu_biglock_count as well.
- implement KERNEL_LOCK_ASSERT_UNLOCKED.
|
1.99 |
| 07-Sep-2006 |
ad | branches: 1.99.2; 1.99.4; Add lock_owner_onproc().
|
1.98 |
| 07-Sep-2006 |
ad | Track lockmgr() sleep events for lockstat.
|
1.97 |
| 21-Jul-2006 |
yamt | assert_sleepable: panic if curlwp == NULL.
|
1.96 |
| 21-Jul-2006 |
yamt | add ASSERT_SLEEPABLE() macro to assert we can sleep.
|
1.95 |
| 31-Mar-2006 |
erh | Fix call to simple_lock_assert_held() so builds with -DDEBUG work.
|
1.94 |
| 26-Mar-2006 |
erh | Add simple_lock_assert_locked/simple_lock_assert_unlocked to provide additional useful information when panic'ing because the assertion fails. Use these to define the SCHED_ASSERT_LOCKED/SCHED_ASSERT_UNLOCKED macros.
|
1.93 |
| 16-Mar-2006 |
erh | branches: 1.93.2; Check db_onpanic before dropping into the debugger on lock errors.
|
1.92 |
| 27-Dec-2005 |
chs | branches: 1.92.4; 1.92.6; 1.92.8; 1.92.10; changes for making DIAGNOSTIC not change the kernel ABI:
- for structure fields that are conditionally present, make those fields always present.
- for functions which are conditionally inline, make them never inline.
- remove some other functions which are conditionally defined but don't actually do anything anymore.
- make a lock-debugging function conditional on only LOCKDEBUG.
as discussed on tech-kern some time back.
|
1.91 |
| 24-Dec-2005 |
perry | Remove leading __ from __(const|inline|signed|volatile) -- it is obsolete.
|
1.90 |
| 11-Dec-2005 |
christos | merge ktrace-lwp.
|
1.89 |
| 08-Oct-2005 |
chs | default to simple_lock_debugger=1 with LOCKDEBUG.
|
1.88 |
| 01-Jun-2005 |
blymn | branches: 1.88.2; Fix function variable names shadowing global declarations.
|
1.87 |
| 29-May-2005 |
christos | Now we can fix the volatile cast-aways. Rename some shadowed variables while here.
|
1.86 |
| 26-Feb-2005 |
perry | branches: 1.86.2; nuke trailing whitespace
|
1.85 |
| 26-Oct-2004 |
yamt | branches: 1.85.4; 1.85.6; a relatively lightweight implementation of kernel_lock.
|
1.84 |
| 23-Oct-2004 |
yamt | don't reference kernel_lock directly.
|
1.83 |
| 04-Aug-2004 |
yamt | add missing wakeups in the cases of lock failure. from Stephan Uphoff, FreeBSD PR/69964.
(http://www.freebsd.org/cgi/query-pr.cgi?pr=69964)
> The LK_WANT_EXCL and LK_WANT_UPGRADE bits act as mini-locks and can block
> other threads.
> Normally this is not a problem since the mini locks are upgraded to full locks
> and the release of the locks will unblock the other threads.
> However if a thread resets the bits without obtaining a full lock,
> other threads are not awoken.
> This can happen if obtaining the full lock fails because of a LK_SLEEPFAIL,
> or a signal (if lock priority includes PCATCH .. don't think this is used).
|
1.82 |
| 04-Aug-2004 |
yamt | - revert a part of the previous which breaks LK_SPIN locks. (reported by Nicolas Joly on current-users@)
- propagate the previous to spinlock_acquire_count.
|
1.81 |
| 03-Aug-2004 |
yamt | when acquiring an exclusive lock, ensure that no one else has the same lock. a patch from Stephan Uphoff, FreeBSD PR/69934.
(http://www.freebsd.org/cgi/query-pr.cgi?pr=69934)
> Upgrading a lock does not play well together with acquiring
> an exclusive lock and can lead to two threads being
> granted exclusive access.
>
> Problematic sequence:
> Thread A acquires a previously unlocked lock in shared mode.
> Thread B tries to acquire the same lock in exclusive mode
> and blocks.
> Thread A upgrades its lock - waking up thread B.
> Thread B wakes up and also acquires the same lock as it only checks
> if the lock is not shared or if someone wants to upgrade the lock
> and not if someone already upgraded the lock to an exclusive lock.
|
1.80 |
| 31-May-2004 |
yamt | lockmgr: add a comment about LK_RETRY.
|
1.79 |
| 30-May-2004 |
yamt | lockmgr: assert that LK_RETRY is not specified.
|
1.78 |
| 25-May-2004 |
hannken | Add ffs internal snapshots. Written by Marshall Kirk McKusick for FreeBSD.
- Not enabled by default. Needs kernel option FFS_SNAPSHOT. - Change parameters of ffs_blkfree. - Let the copy-on-write functions return an error so spec_strategy may fail if the copy-on-write fails. - Change genfs_*lock*() to use vp->v_vnlock instead of &vp->v_lock. - Add flag B_METAONLY to VOP_BALLOC to return indirect block buffer. - Add a function ffs_checkfreefile needed for snapshot creation. - Add special handling of snapshot files: Snapshots may not be opened for writing and the attributes are read-only. Use the mtime as the time this snapshot was taken. Deny mtime updates for snapshot files. - Add function transferlockers to transfer any waiting processes from one lock to another. - Add vfsop VFS_SNAPSHOT to take a snapshot and make it accessible through a vnode. - Add snapshot support to ls, fsck_ffs and dump.
Welcome to 2.0F.
Approved by: Jason R. Thorpe <thorpej@netbsd.org>
|
1.77 |
| 18-May-2004 |
yamt | use lockstatus() instead of L_BIGLOCK to check if we're holding a biglock. fix PR/25595.
|
1.76 |
| 18-May-2004 |
yamt | introduce LK_EXCLOTHER for lockstatus(). from FreeBSD, but a little differently. instead of letting lockstatus() take an additional thread argument, always use curlwp/curcpu.
|
1.75 |
| 13-Feb-2004 |
wiz | branches: 1.75.2; Uppercase CPU, plural is CPUs.
|
1.74 |
| 08-Dec-2003 |
hannken | Fix last commit. The current spl was an implicit argument to the ACQUIRE macro. With help and approval from YAMAMOTO Takashi <yamt@netbsd.org>
|
1.73 |
| 23-Nov-2003 |
yamt | turn ACQUIRE macro into a function by introducing new internal flags, LK_SHARE_NONZERO and LK_WAIT_NONZERO. from FreeBSD.
|
1.72 |
| 07-Aug-2003 |
agc | Move UCB-licensed code from 4-clause to 3-clause licence.
Patches provided by Joel Baker in PR 22364, verified by myself.
|
1.71 |
| 19-Feb-2003 |
pk | branches: 1.71.2; Use lock_printf() in SPINLOCK_SPINCHECK() and SLOCK_TRACE().
|
1.70 |
| 19-Jan-2003 |
pk | _simple_lock(): revert to IPL at entry while spinning on the lock; raise to spllock() again after we get it.
|
1.69 |
| 18-Jan-2003 |
thorpej | Merge the nathanw_sa branch.
|
1.68 |
| 15-Jan-2003 |
pk | lock_printf(): use vsnprintf/printf_nolog to avoid covertly using the system log and thereby invoking scheduler code.
|
1.67 |
| 24-Nov-2002 |
scw | Quell uninitialised variable warnings.
|
1.66 |
| 02-Nov-2002 |
perry | /*CONTCOND*/ while (0)'ed macros
|
1.65 |
| 01-Nov-2002 |
fvdl | For INTERLOCK_ACQUIRE, s/splsched/spllock/.
|
1.64 |
| 27-Sep-2002 |
provos | remove trailing \n in panic(). approved perry.
|
1.63 |
| 14-Sep-2002 |
chs | print a stack trace in the "spinout" case too.
|
1.62 |
| 21-May-2002 |
thorpej | Move kernel_lock manipulation into functions so that they will show up in a profile.
|
1.61 |
| 11-May-2002 |
enami | branches: 1.61.2; Remove #ifdef DIAGNOSTIC around panic(). It is better than a NULL pointer dereference.
|
1.60 |
| 12-Nov-2001 |
lukem | add RCSIDs
|
1.59 |
| 29-Sep-2001 |
chs | branches: 1.59.2; replace wakeup_one() with wakeup(). wakeup_one() can only be used if the woken-up thread is guaranteed to pass the buck to the next guy before going back to sleep, and the rest of the lockmgr() code doesn't do that. from Bill Sommerfeld. fixes PR 14097.
|
1.58 |
| 25-Sep-2001 |
chs | print a stack trace in more LOCKDEBUG cases. add a blank line between complaints. use TAILQ_FOREACH where appropriate.
|
1.57 |
| 22-Sep-2001 |
sommerfeld | Correct comment to match code
|
1.56 |
| 08-Jul-2001 |
wiz | branches: 1.56.2; 1.56.4; synchron*, not sychron*
|
1.55 |
| 05-Jun-2001 |
thorpej | Add a simple_lock_only_held() LOCKDEBUG routine, which allows code to assert that exactly zero or one (and a specific one) locks are held.
From Bill Sommerfeld.
|
1.54 |
| 01-May-2001 |
enami | Define local variable cpu_id only when either MULTIPROCESSOR or DIAGNOSTIC is defined since it isn't used otherwise.
|
1.53 |
| 27-Apr-2001 |
marcus | STDC cleanup: volatile needs to be cast away for lk_flags as well.
|
1.52 |
| 20-Apr-2001 |
thorpej | SPINLOCK_INTERLOCK_RELEASE_HOOK should actually be SPINLOCK_SPIN_HOOK, so that we actually check for pending IPIs on the Alpha more than once. Also, when we call alpha_ipi_process(), make sure to go to splipi().
|
1.51 |
| 24-Dec-2000 |
jmc | branches: 1.51.2; Default lock_printf to syslog rather than printf. Some of the lock debug checks are done inside of wakeup which is holding the sched lock. Printf can cause wakeup to get called again (pty redirection of console message) which will panic with sched lock already held.
This isn't a long term fix as not being able to printf vs. sched lock should be cleaned up better but this avoids continual panics with lockdebug running and an xterm -C.
|
1.50 |
| 22-Nov-2000 |
thorpej | Add a LOCKDEBUG check for a r/w spinlock spinning out of control. Partially from Bill Sommerfeld.
|
1.49 |
| 20-Nov-2000 |
thorpej | Allow machine dependent code to specify a hook to be run when a spinlock's interlock is released.
Idea from Bill Sommerfeld.
|
1.48 |
| 28-Aug-2000 |
sommerfeld | Fix !LOCKDEBUG && !DIAGNOSTIC case
|
1.47 |
| 26-Aug-2000 |
sommerfeld | Since the spinlock count is per-cpu, we don't need atomic operations to update it, so don't bother with <machine/atomic.h>
Flush kernel_lock_release_all() and kernel_lock_acquire_count() (which didn't do spinlock accounting correctly), and replace them with spinlock_release_all() and spinlock_acquire_count().
|
1.46 |
| 26-Aug-2000 |
thorpej | Fix a printf format (for Alpha).
|
1.45 |
| 23-Aug-2000 |
sommerfeld | Default simple_lock_debugger to "on" on MULTIPROCESSOR. Change uninitialized simple_lock check from KASSERT to use SLOCK_WHERE (to show the "real" source line where the error was detected).
|
1.44 |
| 22-Aug-2000 |
thorpej | Use spllock() rather than splhigh().
|
1.43 |
| 22-Aug-2000 |
thorpej | Slight adjustment to INTERLOCK_*() macros to make it easier for the compiler to optimize.
|
1.42 |
| 21-Aug-2000 |
thorpej | - Clean up _simple_lock_held()
- In simple_lock_switchcheck(), allow/enforce exactly one lock to be held: sched_lock.
- Per e-mail to tech-smp from Bill Sommerfeld, r/w spin locks have an interlock at splsched(), rather than splhigh().
|
1.41 |
| 19-Aug-2000 |
thorpej | Lock debugging fix: Make sure a simplelock's lock_holder gets initialized properly, and consistently tracks the owning CPU's cpuid. Add some diagnostic assertions to enforce this.
|
1.40 |
| 17-Aug-2000 |
thorpej | For spinlocks, block interrupts while holding the interlock. Partially from Bill Sommerfeld.
|
1.39 |
| 17-Aug-2000 |
thorpej | Add a DIAGNOSTIC check for release of an unlocked lock.
From Bill Sommerfeld.
|
1.38 |
| 17-Aug-2000 |
thorpej | Some more lock debugging support:
- LOCK_ASSERT(), which expands to KASSERT() if LOCKDEBUG.
- new simple_lock_held(), which tests if the calling CPU holds the specified simple lock.
From Bill Sommerfeld, modified slightly by me.
|
1.37 |
| 10-Aug-2000 |
eeh | Another __kprintf_attribute__ to be removed.
|
1.36 |
| 08-Aug-2000 |
thorpej | Fix printf format error pointed out by Steve Woodford.
|
1.35 |
| 07-Aug-2000 |
thorpej | Add a DIAGNOSTIC or LOCKDEBUG check for held spin locks.
|
1.34 |
| 07-Aug-2000 |
thorpej | It doesn't make sense to charge simple locks to proc's, because simple locks are held by CPUs. Remove p_simple_locks (which was unused anyway, really), and add a LOCKDEBUG check for held simple locks in mi_switch(). Grow p_locks to an int to take up the space previously used by p_simple_locks so that the proc structure doesn't change size.
|
1.33 |
| 14-Jul-2000 |
thorpej | ANSI'ify.
|
1.32 |
| 10-Jun-2000 |
sommerfeld | branches: 1.32.2; Fix assorted bugs around shutdown/reboot/panic time.
- add a new global variable, doing_shutdown, which is nonzero if vfs_shutdown() or panic() have been called.
- in panic, set RB_NOSYNC if doing_shutdown is already set on entry so we don't reenter vfs_shutdown if we panic'ed there.
- in vfs_shutdown, don't use proc0's process for sys_sync unless curproc is NULL.
- in lockmgr, attribute successful locks to proc0 if doing_shutdown && curproc==NULL, and panic if we can't get the lock right away; avoids the spurious lockmgr DIAGNOSTIC panic from the ddb reboot command.
- in subr_pool, deal with curproc==NULL in the doing_shutdown case.
- in mfs_strategy, bitbucket writes if doing_shutdown, so we don't wedge waiting for the mfs process.
- in ltsleep, treat ((curproc == NULL) && doing_shutdown) like the panicstr case.
Appears to fix: kern/9239, kern/10187, kern/9367. May also fix kern/10122.
|
1.31 |
| 08-Jun-2000 |
thorpej | Use ltsleep().
|
1.30 |
| 23-May-2000 |
thorpej | branches: 1.30.2; Fix a typo, and add some lint comments.
|
1.29 |
| 03-May-2000 |
sommerfeld | Let MULTIPROCESSOR && LOCKDEBUG case compile again
|
1.28 |
| 02-May-2000 |
thorpej | - If a platform defines __HAVE_ATOMIC_OPERATIONS, use them for counting in the MULTIPROCESSOR case.
- Move a misplaced #ifdef so that LK_REENABLE actually works.
|
1.27 |
| 29-Apr-2000 |
thorpej | Require that each MACHINE/MACHINE_ARCH supply a lock.h. This file contains the values __SIMPLELOCK_LOCKED and __SIMPLELOCK_UNLOCKED, which replace the old SIMPLELOCK_LOCKED and SIMPLELOCK_UNLOCKED. These files are also required to supply inline functions __cpu_simple_lock(), __cpu_simple_lock_try(), and __cpu_simple_unlock() if locking is to be supported on that platform (i.e. if MULTIPROCESSOR is defined in the _KERNEL case). Change these functions to take an int * (&alp->lock_data) rather than the struct simplelock * itself.
These changes make it possible for userland to use the locking primitives by including <machine/lock.h>.
|
1.26 |
| 09-Feb-2000 |
sommerfeld | Three MULTIPROCESSOR + LOCKDEBUG fixes:
1) fix typo preventing compilation (missing comma).
2) in SLOCK_WHERE, display cpu number in the MP case.
3) the following race condition was observed in _simple_lock:
   cpu 1 releases lock, cpu 0 grabs lock
   cpu 1 sees it's already locked.
   cpu 1 sees that lock_holder == "cpu 1"
   cpu 1 assumes that it already holds it and barfs.
   cpu 0 sets lock_holder == "cpu 0"
Fix: set lock_holder to LK_NOCPU in _simple_unlock().
|
1.25 |
| 27-Aug-1999 |
thorpej | branches: 1.25.2; Make it possible to direct LOCKDEBUG messages to syslog only.
|
1.24 |
| 10-Aug-1999 |
thorpej | Use cpuid_t and cpu_number().
|
1.23 |
| 28-Jul-1999 |
thorpej | Fix a thinko in draining of spin locks: bump waitcount in the spin case, too. Remove some needless code duplication by adding a "drain" argument to the ACQUIRE() macro (compiler can [and does] optimize the constant conditional).
|
1.22 |
| 28-Jul-1999 |
mellon | - Correct the definition of the COUNT macro so that it takes the same number of arguments when compiled without DIAGNOSTIC as with.
|
1.21 |
| 27-Jul-1999 |
thorpej | Improve the LOCKDEBUG code:
- Now compatible with MULTIPROCESSOR (requires other changes not yet committed, but which will be later today).
- In addition to tracking simple locks, track exclusive spin locks.
- Count spin locks like we do sleep locks (in the cpu_info for this CPU).
- Lock debug lists are now TAILQs, so as to make the locking order more obvious when dumping the list.
Also, some suggestions from Bill Sommerfeld:
- SIMPLELOCK_LOCKED and SIMPLELOCK_UNLOCKED constants, which may be defined in <machine/lock.h> (default to 1 and 0, respectively). This makes it easier to support architectures which use test-and-clear rather than test-and-set.
- Add __attribute__((__aligned__)) to the `lock_data' member of the simplelock structure. This makes it easier to support architectures which can only perform atomic operations on very-well-aligned memory locations.
NOTE: This changes the size of struct simplelock, and will cause a version bump.
|
1.20 |
| 26-Jul-1999 |
thorpej | Use wakeup_one() for waking up sleep lock sleepers.
|
1.19 |
| 25-Jul-1999 |
thorpej | Add a spin lock mode to the lock manager. Provides a read/write spin lock facility. Some code and ideas from Ross Harvey.
|
1.18 |
| 19-Jul-1999 |
chs | more cleanup: remove simplelockrecurse, lockpausetime and PAUSE(): none of these serve any purpose anymore. in the LOCKDEBUG functions, expand the splhigh() region to cover the entire function. without this there can still be races.
|
1.17 |
| 04-May-1999 |
sommerfe | Count lockmgr locks held by process if LOCKDEBUG || DIAGNOSTIC. (previously, it was just under LOCKDEBUG).
|
1.16 |
| 25-Mar-1999 |
sommerfe | branches: 1.16.2; Prevent deadlock cited in PR4629 from crashing the system. (copyout and system call now just return EFAULT). A complete fix will presumably have to wait for UBC and/or for vnode locking protocols to be revamped to allow use of shared locks.
|
1.15 |
| 28-Feb-1999 |
fvdl | Recursive locks were previously only available with LK_CANRECURSE. This could be done in one of 2 ways:
* call lk_init with LK_CANRECURSE, resulting in a lock that always can be used recursively.
* call lockmgr with LK_CANRECURSE, meaning that it's ok if this lock is already held by us.
Sometimes we need a locking type that says: take this lock now, exclusively, but while I am holding it, I may go through a code path which could attempt to get the lock again, and which is unaware that the lock might already be taken. Implement LK_SETRECURSE for this purpose. Assume that locks and unlocks come in matching pairs (they should), and check for this 'level' using SETRECURSE locks.
|
1.14 |
| 22-Jan-1999 |
chs | print a little more info in simple_lock_freecheck().
|
1.13 |
| 02-Dec-1998 |
bouyer | Cosmetic change in a panic(), so that the panic string printed by savecore has more meaning.
|
1.12 |
| 04-Nov-1998 |
chs | branches: 1.12.2; LOCKDEBUG enhancements for non-MP: keep a list of locked locks. use this to print where the lock was locked when we either go to sleep with a lock held or try to free a locked lock.
|
1.11 |
| 14-Oct-1998 |
pk | Disable the daft PAUSE() macro, which manages to skip all the relevant code in lockmgr() most of the time. This is no doubt a case of Bad Coding Style.
|
1.10 |
| 29-Sep-1998 |
thorpej | Initialize the CPU ID in the simplelock.
|
1.9 |
| 24-Sep-1998 |
thorpej | Key off MULTIPROCESSOR, not NCPUS > 1. Pull in <machine/lock.h> if MULTIPROCESSOR is defined, and rely on it to define the simple lock operations.
|
1.8 |
| 04-Aug-1998 |
perry | Abolition of bcopy, ovbcopy, bcmp, and bzero, phase one.
bcopy(x, y, z) -> memcpy(y, x, z)
ovbcopy(x, y, z) -> memmove(y, x, z)
bcmp(x, y, z) -> memcmp(x, y, z)
bzero(x, y) -> memset(x, 0, y)
|
1.7 |
| 20-May-1998 |
thorpej | defopt LOCKDEBUG
|
1.6 |
| 01-Mar-1998 |
fvdl | Merge with Lite2 + local changes
|
1.5 |
| 07-Feb-1998 |
chs | snazzier LOCKDEBUG code.
|
1.4 |
| 09-Oct-1997 |
mycroft | Make wmesg arguments to various functions const.
|
1.3 |
| 06-Jul-1997 |
fvdl | branches: 1.3.2; There appear to be spinlock bugs in the VM code. They are not a problem now, as we're always on one CPU (they will be later, though). With DEBUG, they cause a lot of output, so DEBUG -> LOCKDEBUG for now.
|
1.2 |
| 06-Jul-1997 |
fvdl | Add NetBSD RCS Id, and a few minor changes to make it compile.
|
1.1 |
| 06-Jul-1997 |
fvdl | branches: 1.1.1; Initial revision
|
1.1.1.1 |
| 06-Jul-1997 |
fvdl | Import Lite2 locking code
|
1.3.2.1 |
| 14-Oct-1997 |
thorpej | Update marc-pcmcia branch from trunk.
|
1.12.2.3 |
| 30-May-1999 |
chs | add a flag "simple_lock_debugger". if this is set and we detect a locking error, call Debugger().
|
1.12.2.2 |
| 25-Feb-1999 |
chs | use SLOCK_{,UN}LOCKED for LOCKDEBUG code. don't bother sleeping when the lock is in the wrong state, it'll just panic anyway. change _simple_lock_try() to return 0 if the lock is already held... this should never happen on a uniprocessor so it's better to know right away.
|
1.12.2.1 |
| 09-Nov-1998 |
chs | initial snapshot. lots left to do.
|
1.16.2.1 |
| 04-May-1999 |
perry | branches: 1.16.2.1.2; pullup 1.16->1.17 (sommerfeld)
|
1.16.2.1.2.3 |
| 02-Aug-1999 |
thorpej | Update from trunk.
|
1.16.2.1.2.2 |
| 04-Jul-1999 |
chs | expand splhigh() to be around the entire body of the LOCKDEBUG functions. things could still get out of sync and cause panics as they were.
|
1.16.2.1.2.1 |
| 07-Jun-1999 |
chs | merge everything from chs-ubc branch.
|
1.25.2.5 |
| 23-Apr-2001 |
bouyer | Sync with HEAD.
|
1.25.2.4 |
| 05-Jan-2001 |
bouyer | Sync with HEAD
|
1.25.2.3 |
| 08-Dec-2000 |
bouyer | Sync with HEAD.
|
1.25.2.2 |
| 22-Nov-2000 |
bouyer | Sync with HEAD.
|
1.25.2.1 |
| 20-Nov-2000 |
bouyer | Update thorpej_scsipi to -current as of a month ago
|
1.30.2.1 |
| 22-Jun-2000 |
minoura | Sync w/ netbsd-1-5-base.
|
1.32.2.3 |
| 05-Sep-2000 |
gmcgarry | Pull up revision 1.36 (approved by jhawk)
>revision 1.36
>date: 2000/08/08 19:55:26; author: thorpej; state: Exp; lines: +4 -3
>Fix printf format error pointed out by Steve Woodford.
|
1.32.2.2 |
| 11-Aug-2000 |
thorpej | Pullup from trunk: Add a DIAGNOSTIC or LOCKDEBUG check for held spin locks.
|
1.32.2.1 |
| 11-Aug-2000 |
thorpej | Pullup from trunk: It doesn't make sense to charge simple locks to proc's, because simple locks are held by CPUs. Remove p_simple_locks (which was unused anyway, really), and add a LOCKDEBUG check for held simple locks in mi_switch(). Grow p_locks to an int to take up the space previously used by p_simple_locks so that the proc structure doesn't change size.
|
1.51.2.16 |
| 17-Jan-2003 |
thorpej | Sync with HEAD.
|
1.51.2.15 |
| 11-Dec-2002 |
thorpej | Sync with HEAD.
|
1.51.2.14 |
| 11-Nov-2002 |
nathanw | Catch up to -current
|
1.51.2.13 |
| 18-Oct-2002 |
nathanw | P_BIGLOCK -> L_BIGLOCK
|
1.51.2.12 |
| 18-Oct-2002 |
nathanw | Catch up to -current.
|
1.51.2.11 |
| 17-Sep-2002 |
nathanw | Catch up to -current.
|
1.51.2.10 |
| 12-Jul-2002 |
nathanw | No longer need to pull in lwp.h; proc.h pulls it in for us.
|
1.51.2.9 |
| 24-Jun-2002 |
nathanw | Curproc->curlwp renaming.
Change uses of "curproc->l_proc" back to "curproc", which is more like the original use. Bare uses of "curproc" are now "curlwp".
"curproc" is now #defined in proc.h as ((curlwp) ? (curlwp)->l_proc) : NULL) so that it is always safe to reference curproc (*de*referencing curproc is another story, but that's always been true).
|
1.51.2.8 |
| 20-Jun-2002 |
nathanw | Catch up to -current.
|
1.51.2.7 |
| 27-Nov-2001 |
thorpej | Make lockmgr() lwp-aware:
- Locks are counted against LWPs, not procs.
- When we record the lockholder in the lock structure, we need to also record the lwpid.
- When we are checking who holds the lock, also consider lwpid.
Fixes a "locking against myself" panic reported by Allen Briggs that could be easily triggered by redirecting the output of an LWP-using program to a file.
|
1.51.2.6 |
| 14-Nov-2001 |
nathanw | Catch up to -current.
|
1.51.2.5 |
| 08-Oct-2001 |
nathanw | Catch up to -current.
|
1.51.2.4 |
| 26-Sep-2001 |
nathanw | Catch up to -current. Again.
|
1.51.2.3 |
| 24-Aug-2001 |
nathanw | Catch up with -current.
|
1.51.2.2 |
| 21-Jun-2001 |
nathanw | Catch up to -current.
|
1.51.2.1 |
| 05-Mar-2001 |
nathanw | Initial commit of scheduler activations and lightweight process support.
|
1.56.4.1 |
| 01-Oct-2001 |
fvdl | Catch up with -current.
|
1.56.2.3 |
| 10-Oct-2002 |
jdolecek | sync kqueue with -current; this includes merge of gehenna-devsw branch, merge of i386 MP branch, and part of autoconf rototil work
|
1.56.2.2 |
| 23-Jun-2002 |
jdolecek | catch up with -current on kqueue branch
|
1.56.2.1 |
| 10-Jan-2002 |
thorpej | Sync kqueue branch with -current.
|
1.59.2.1 |
| 12-Nov-2001 |
thorpej | Sync the thorpej-mips-cache branch with -current.
|
1.61.2.1 |
| 30-May-2002 |
gehenna | Catch up with -current.
|
1.71.2.7 |
| 10-Nov-2005 |
skrll | Sync with HEAD. Here we go again...
|
1.71.2.6 |
| 04-Mar-2005 |
skrll | Sync with HEAD.
Hi Perry!
|
1.71.2.5 |
| 02-Nov-2004 |
skrll | Sync with HEAD.
|
1.71.2.4 |
| 21-Sep-2004 |
skrll | Fix the sync with head I botched.
|
1.71.2.3 |
| 18-Sep-2004 |
skrll | Sync with HEAD.
|
1.71.2.2 |
| 12-Aug-2004 |
skrll | Sync with HEAD.
|
1.71.2.1 |
| 03-Aug-2004 |
skrll | Sync with HEAD
|
1.75.2.1 |
| 23-Aug-2004 |
tron | branches: 1.75.2.1.2; Pull up revisions 1.81-1.83 via patch (requested by yamt in ticket #752):
when acquiring an exclusive lock, ensure that no one else has the same lock. a patch from Stephan Uphoff, FreeBSD PR/69934. (http://www.freebsd.org/cgi/query-pr.cgi?pr=69934)
Upgrading a lock does not play well together with acquiring an exclusive lock and can lead to two threads being granted exclusive access.
Problematic sequence:
Thread A acquires a previously unlocked lock in shared mode.
Thread B tries to acquire the same lock in exclusive mode and blocks.
Thread A upgrades its lock - waking up thread B.
Thread B wakes up and also acquires the same lock as it only checks if the lock is not shared or if someone wants to upgrade the lock and not if someone already upgraded the lock to an exclusive lock.
- revert a part of the previous which breaks LK_SPIN locks. (reported by Nicolas Joly on current-users@)
- propagate the previous to spinlock_acquire_count.
add missing wakeups in the cases of lock failure. from Stephan Uphoff, FreeBSD PR/69964.
|
1.75.2.1.2.1 |
| 11-Aug-2007 |
bouyer | Pull up following revision(s) (requested by pooka in ticket #11349):
sys/sys/lock.h: revision 1.72
sys/kern/kern_lock.c: revision 1.118 via patch
sys/kern/vfs_subr.c: revision 1.295
Define a new lockmgr flag LK_RESURRECT which can be used in conjunction with LK_DRAIN. This has the same effect as LK_DRAIN except it atomically does NOT mark the lock as drained. This guarantees that when we got the lock, we were the last one currently waiting for the lock.
Use LK_DRAIN|LK_RESURRECT in vclean() to make sure there are no waiters for the lock. This should fix behaviour theorized to be caused by vfs_subr.c 1.289 which caused vclean() to run into completion and free the vnode before all lock-waiters had been processed. Should therefore fix the "simple_lock: unitialized lock" problems seen recently.
thanks to Juergen Hannken-Illjes for some analysis of the problem and Erik Bertelsen for testing
|
1.85.6.1 |
| 19-Mar-2005 |
yamt | sync with head. xen and whitespace. xen part is not finished.
|
1.85.4.1 |
| 29-Apr-2005 |
kent | sync with -current
|
1.86.2.1 |
| 26-Aug-2007 |
bouyer | Pull up following revision(s) (requested by pooka in ticket #1816):
sys/sys/lock.h: revision 1.72
sys/kern/kern_lock.c: revision 1.118 via patch
sys/kern/vfs_subr.c: revision 1.295
Define a new lockmgr flag LK_RESURRECT which can be used in conjunction with LK_DRAIN. This has the same effect as LK_DRAIN except it atomically does NOT mark the lock as drained. This guarantees that when we got the lock, we were the last one currently waiting for the lock.
Use LK_DRAIN|LK_RESURRECT in vclean() to make sure there are no waiters for the lock. This should fix behaviour theorized to be caused by vfs_subr.c 1.289 which caused vclean() to run into completion and free the vnode before all lock-waiters had been processed. Should therefore fix the "simple_lock: unitialized lock" problems seen recently.
thanks to Juergen Hannken-Illjes for some analysis of the problem and Erik Bertelsen for testing
|
1.88.2.10 |
| 17-Mar-2008 |
yamt | sync with head.
|
1.88.2.9 |
| 04-Feb-2008 |
yamt | sync with head.
|
1.88.2.8 |
| 21-Jan-2008 |
yamt | sync with head
|
1.88.2.7 |
| 07-Dec-2007 |
yamt | sync with head
|
1.88.2.6 |
| 15-Nov-2007 |
yamt | sync with head.
|
1.88.2.5 |
| 27-Oct-2007 |
yamt | sync with head.
|
1.88.2.4 |
| 03-Sep-2007 |
yamt | sync with head.
|
1.88.2.3 |
| 26-Feb-2007 |
yamt | sync with head.
|
1.88.2.2 |
| 30-Dec-2006 |
yamt | sync with head.
|
1.88.2.1 |
| 21-Jun-2006 |
yamt | sync with head.
|
1.92.10.1 |
| 19-Apr-2006 |
elad | sync with head.
|
1.92.8.3 |
| 14-Sep-2006 |
yamt | sync with head.
|
1.92.8.2 |
| 11-Aug-2006 |
yamt | sync with head
|
1.92.8.1 |
| 01-Apr-2006 |
yamt | sync with head.
|
1.92.6.1 |
| 22-Apr-2006 |
simonb | Sync with head.
|
1.92.4.1 |
| 09-Sep-2006 |
rpaulo | sync with head
|
1.93.2.2 |
| 31-Mar-2006 |
tron | Merge 2006-03-31 NetBSD-current into the "peter-altq" branch.
|
1.93.2.1 |
| 28-Mar-2006 |
tron | Merge 2006-03-28 NetBSD-current into the "peter-altq" branch.
|
1.99.4.2 |
| 10-Dec-2006 |
yamt | sync with head.
|
1.99.4.1 |
| 22-Oct-2006 |
yamt | sync with head
|
1.99.2.13 |
| 06-Feb-2007 |
ad | lockstat:
- Cache enabled/disabled status on entry.
- Don't read the cycle counter unless enabled.
|
1.99.2.12 |
| 26-Jan-2007 |
ad | - Increase spinout timeout. - Spin testing kernel_lock to reduce bus traffic.
|
1.99.2.11 |
| 25-Jan-2007 |
yamt | _kernel_lock_assert_unlocked: don't panic when other cpu holds the lock.
|
1.99.2.10 |
| 17-Jan-2007 |
ad | Fix detection of deadlock against the big lock.
|
1.99.2.9 |
| 12-Jan-2007 |
ad | Make DEBUG kernels build again.
|
1.99.2.8 |
| 12-Jan-2007 |
ad | Sync with head.
|
1.99.2.7 |
| 11-Jan-2007 |
ad | Checkpoint work in progress.
|
1.99.2.6 |
| 29-Dec-2006 |
ad | Checkpoint work in progress.
|
1.99.2.5 |
| 18-Nov-2006 |
ad | Sync with head.
|
1.99.2.4 |
| 17-Nov-2006 |
ad | Checkpoint work in progress.
|
1.99.2.3 |
| 24-Oct-2006 |
ad | _kernel_proc_lock: add a LOCKDEBUG_BARRIER() here.
|
1.99.2.2 |
| 20-Oct-2006 |
ad | - sched_lock is no more - Use mutex_setspl() for kernel_mutex
|
1.99.2.1 |
| 11-Sep-2006 |
ad | Make the kernel_lock a mutex.
|
1.102.4.2 |
| 03-Sep-2007 |
wrstuden | Sync w/ NetBSD-4-RC_1
|
1.102.4.1 |
| 04-Jun-2007 |
wrstuden | Update to today's netbsd-4.
|
1.102.2.2 |
| 01-Aug-2007 |
liamjfoy | Pull up following revision(s) (requested by pooka in ticket #808):
sys/sys/lock.h: revision 1.72
sys/kern/kern_lock.c: revision 1.118
sys/kern/vfs_subr.c: revision 1.295
Define a new lockmgr flag LK_RESURRECT which can be used in conjunction with LK_DRAIN. This has the same effect as LK_DRAIN except it atomically does NOT mark the lock as drained. This guarantees that when we got the lock, we were the last one currently waiting for the lock.
Use LK_DRAIN|LK_RESURRECT in vclean() to make sure there are no waiters for the lock. This should fix behaviour theorized to be caused by vfs_subr.c 1.289 which caused vclean() to run into completion and free the vnode before all lock-waiters had been processed. Should therefore fix the "simple_lock: unitialized lock" problems seen recently.
thanks to Juergen Hannken-Illjes for some analysis of the problem and Erik Bertelsen for testing
|
1.102.2.1 |
| 23-May-2007 |
riz | Pull up following revision(s) (requested by tls in ticket #652):
sys/kern/kern_lock.c: revision 1.103
in lockstatus(), report LK_EXCLOTHER if LK_WANT_EXCL or LK_WANT_UPGRADE is set, since the thread that set either of those flags will be the next one to get the lock. fixes PR 35143.
|
1.105.2.5 |
| 15-Apr-2007 |
yamt | sync with head.
|
1.105.2.4 |
| 24-Mar-2007 |
rmind | Checkpoint:
- Abstract for per-CPU locking of runqueues. As a workaround for SCHED_4BSD global runqueue, covered by sched_mutex, spc_mutex is a pointer for now. After making SCHED_4BSD runqueues per-CPU, it will become a storage mutex.
- suspendsched: Locking is not necessary for cpu_need_resched().
- Remove mutex_spin_exit() prototype in patch.c and LOCK_ASSERT() check in runqueue_nextlwp() in sched_4bsd.c to make them compile again.
|
1.105.2.3 |
| 12-Mar-2007 |
rmind | Sync with HEAD.
|
1.105.2.2 |
| 27-Feb-2007 |
yamt | - sync with head.
- move sched_changepri back to kern_synch.c as it doesn't know PPQ anymore.
|
1.105.2.1 |
| 17-Feb-2007 |
yamt | - separate context switching and thread scheduling.
- introduce idle lwp.
- change some related MD/MI interfaces and implement i386 version.
|
1.110.6.1 |
| 09-Dec-2007 |
reinoud | Pullup to HEAD
|
1.110.4.1 |
| 11-Jul-2007 |
mjf | Sync with head.
|
1.110.2.20 |
| 05-Nov-2007 |
ad | Cosmetic change for clarity.
|
1.110.2.19 |
| 01-Nov-2007 |
ad | - Fix interactivity problems under high load. Because soft interrupts are being stacked on top of regular LWPs, more often than not aston() was being called on a soft interrupt thread instead of a user thread, meaning that preemption was not happening on EOI.
- Don't use bool in a couple of data structures. Sub-word writes are not always atomic and may clobber other fields in the containing word.
- For SCHED_4BSD, make p_estcpu per thread (l_estcpu). Rework how the dynamic priority level is calculated - it's much better behaved now.
- Kill the l_usrpri/l_priority split now that priorities are no longer directly assigned by tsleep(). There are three fields describing LWP priority:
l_priority: Dynamic priority calculated by the scheduler. This does not change for kernel/realtime threads, and always stays within the correct band. Eg for timeshared LWPs it never moves out of the user priority range. This is basically what l_usrpri was before.
l_inheritedprio: Lent to the LWP due to priority inheritance (turnstiles).
l_kpriority: A boolean value set true the first time an LWP sleeps within the kernel. This indicates that the LWP should get a priority boost as compensation for blocking. lwp_eprio() now does the equivalent of sched_kpri() if the flag is set. The flag is cleared in userret().
- Keep track of scheduling class (OTHER, FIFO, RR) in struct lwp, and use this to make decisions in a few places where we previously tested for a kernel thread.
- Partially fix itimers and usr/sys/intr time accounting in the presence of software interrupts.
- Use kthread_create() to create idle LWPs. Move priority definitions from the various modules into sys/param.h.
- newlwp -> lwp_create
|
1.110.2.18 |
| 23-Oct-2007 |
ad | Sync with head.
|
1.110.2.17 |
| 18-Oct-2007 |
ad | Update for soft interrupt changes. See kern_softint.c 1.1.2.17 for details.
|
1.110.2.16 |
| 11-Oct-2007 |
ad | 'volatile' isn't needed here.
|
1.110.2.15 |
| 11-Oct-2007 |
ad | - Always include the kernel_lock functions, for LKMs.
- Fix uniprocessor builds.
- Tidy up a bit.
|
1.110.2.14 |
| 10-Oct-2007 |
ad | unbork
|
1.110.2.13 |
| 10-Oct-2007 |
ad | crackmgr(): don't keep track of line numbers/file names for LOCKDEBUG. Instead, just stash a couple of text addresses into struct lock. Keep these in struct lock even if compiled without LOCKDEBUG, so that the size of struct lock is not changed by it.
|
1.110.2.12 |
| 09-Oct-2007 |
ad | Sync with head.
|
1.110.2.11 |
| 08-Oct-2007 |
ad | _kernel_lock: cut back on spl calls.
|
1.110.2.10 |
| 20-Aug-2007 |
ad | - Track where locks were initialized.
- Sync with HEAD.
|
1.110.2.9 |
| 29-Jul-2007 |
ad | Add lockdestroy() which tears down lk_interlock.
|
1.110.2.8 |
| 09-Jul-2007 |
ad | KASSERT((l->l_flag & LW_INTR) == 0) -> KASSERT((l->l_flag & LW_INTR) == 0 || panicstr != NULL)
|
1.110.2.7 |
| 17-Jun-2007 |
ad | - Increase the number of thread priorities from 128 to 256. How the space is set up is to be revisited.
- Implement soft interrupts as kernel threads. A generic implementation is provided, with hooks for fast-path MD code that can run the interrupt threads over the top of other threads executing in the kernel.
- Split vnode::v_flag into three fields, depending on how the flag is locked (by the interlock, by the vnode lock, by the file system).
- Miscellaneous locking fixes and improvements.
|
1.110.2.6 |
| 08-Jun-2007 |
ad | Sync with head.
|
1.110.2.5 |
| 10-Apr-2007 |
ad | Sync with head.
|
1.110.2.4 |
| 09-Apr-2007 |
ad | Fix an assertion.
|
1.110.2.3 |
| 05-Apr-2007 |
ad | Make it compile.
|
1.110.2.2 |
| 21-Mar-2007 |
ad | GC the simplelock/spinlock debugging stuff.
|
1.110.2.1 |
| 13-Mar-2007 |
ad | Pull in the initial set of changes for the vmlocking branch.
|
1.116.2.3 |
| 10-Sep-2007 |
skrll | Adapt some more code to the branch.
|
1.116.2.2 |
| 15-Aug-2007 |
skrll | Sync with HEAD.
|
1.116.2.1 |
| 18-Jul-2007 |
skrll | Initial work on providing correctly aligned __cpu_simple_lock_t for hppa and first attempt at adapting i386 to the changes.
More to come.
|
1.118.8.2 |
| 29-Jul-2007 |
pooka | Define a new lockmgr flag LK_RESURRECT which can be used in conjunction with LK_DRAIN. This has the same effect as LK_DRAIN except it atomically does NOT mark the lock as drained. This guarantees that when we got the lock, we were the last one currently waiting for the lock.
Use LK_DRAIN|LK_RESURRECT in vclean() to make sure there are no waiters for the lock. This should fix behaviour theorized to be caused by vfs_subr.c 1.289 which caused vclean() to run into completion and free the vnode before all lock-waiters had been processed. Should therefore fix the "simple_lock: unitialized lock" problems seen recently.
thanks to Juergen Hannken-Illjes for some analysis of the problem and Erik Bertelsen for testing
|
1.118.8.1 |
| 29-Jul-2007 |
pooka | file kern_lock.c was added on branch matt-mips64 on 2007-07-29 12:40:38 +0000
|
1.118.6.3 |
| 23-Mar-2008 |
matt | sync with HEAD
|
1.118.6.2 |
| 09-Jan-2008 |
matt | sync with HEAD
|
1.118.6.1 |
| 06-Nov-2007 |
matt | sync with HEAD
|
1.118.4.8 |
| 09-Dec-2007 |
jmcneill | Sync with HEAD.
|
1.118.4.7 |
| 03-Dec-2007 |
joerg | Sync with HEAD.
|
1.118.4.6 |
| 21-Nov-2007 |
joerg | Sync with HEAD.
|
1.118.4.5 |
| 14-Nov-2007 |
joerg | Sync with HEAD.
|
1.118.4.4 |
| 06-Nov-2007 |
joerg | Sync with HEAD.
|
1.118.4.3 |
| 31-Oct-2007 |
joerg | Sync with HEAD.
|
1.118.4.2 |
| 26-Oct-2007 |
joerg | Sync with HEAD.
Follow the merge of pmap.c on i386 and amd64 and move pmap_init_tmp_pgtbl into arch/x86/x86/pmap.c. Modify the ACPI wakeup code to restore CR4 before jumping back into kernel space as the large page option might cover that.
|
1.118.4.1 |
| 02-Oct-2007 |
joerg | Sync with HEAD.
|
1.120.2.2 |
| 18-Oct-2007 |
yamt | sync with head.
|
1.120.2.1 |
| 14-Oct-2007 |
yamt | sync with head.
|
1.123.2.3 |
| 21-Nov-2007 |
bouyer | Sync with HEAD
|
1.123.2.2 |
| 18-Nov-2007 |
bouyer | Sync with HEAD
|
1.123.2.1 |
| 13-Nov-2007 |
bouyer | Sync with HEAD
|
1.124.2.3 |
| 18-Feb-2008 |
mjf | Sync with HEAD.
|
1.124.2.2 |
| 08-Dec-2007 |
mjf | Sync with HEAD.
|
1.124.2.1 |
| 19-Nov-2007 |
mjf | Sync with HEAD.
|
1.128.2.5 |
| 27-Dec-2007 |
ad | Fix !lockdebug.
|
1.128.2.4 |
| 27-Dec-2007 |
ad | Allocate but do not use a lockdebug record for 'struct lock' so that it's easier to find leaks.
|
1.128.2.3 |
| 10-Dec-2007 |
ad | - Don't drain the vnode lock in vclean(); reference counting and XLOCK should be enough.
- LK_SETRECURSE is gone.
|
1.128.2.2 |
| 08-Dec-2007 |
ad | Sync with head.
|
1.128.2.1 |
| 04-Dec-2007 |
ad | Pull the vmlocking changes into a new branch.
|
1.129.4.3 |
| 10-Jan-2008 |
bouyer | Sync with HEAD
|
1.129.4.2 |
| 08-Jan-2008 |
bouyer | Sync with HEAD
|
1.129.4.1 |
| 02-Jan-2008 |
bouyer | Sync with HEAD
|
1.134.6.5 |
| 17-Jan-2009 |
mjf | Sync with HEAD.
|
1.134.6.4 |
| 02-Jul-2008 |
mjf | Sync with HEAD.
|
1.134.6.3 |
| 29-Jun-2008 |
mjf | Sync with HEAD.
|
1.134.6.2 |
| 02-Jun-2008 |
mjf | Sync with HEAD.
|
1.134.6.1 |
| 03-Apr-2008 |
mjf | Sync with HEAD.
|
1.134.2.1 |
| 24-Mar-2008 |
keiichi | sync with head.
|
1.137.4.5 |
| 11-Mar-2010 |
yamt | sync with head
|
1.137.4.4 |
| 19-Aug-2009 |
yamt | sync with head.
|
1.137.4.3 |
| 20-Jun-2009 |
yamt | sync with head
|
1.137.4.2 |
| 04-May-2009 |
yamt | sync with head.
|
1.137.4.1 |
| 16-May-2008 |
yamt | sync with head.
|
1.137.2.2 |
| 04-Jun-2008 |
yamt | sync with head
|
1.137.2.1 |
| 18-May-2008 |
yamt | sync with head.
|
1.141.2.2 |
| 18-Sep-2008 |
wrstuden | Sync with wrstuden-revivesa-base-2.
|
1.141.2.1 |
| 23-Jun-2008 |
wrstuden | Sync w/ -current. 34 merge conflicts to follow.
|
1.144.2.2 |
| 03-Jul-2008 |
simonb | Sync with head.
|
1.144.2.1 |
| 27-Jun-2008 |
simonb | Sync with head.
|
1.146.4.1 |
| 19-Jan-2009 |
skrll | Sync with HEAD.
|
1.146.2.1 |
| 13-Dec-2008 |
haad | Update haad-dm branch to haad-dm-base2.
|
1.147.4.1 |
| 23-Jul-2009 |
jym | Sync with HEAD.
|
1.151.2.3 |
| 22-May-2014 |
yamt | sync with head.
for a reference, the tree before this commit was tagged as yamt-pagecache-tag8.
this commit was split into small chunks to avoid a limitation of cvs. ("Protocol error: too many arguments")
|
1.151.2.2 |
| 30-Oct-2012 |
yamt | sync with head
|
1.151.2.1 |
| 17-Apr-2012 |
yamt | sync with head
|
1.153.2.3 |
| 03-Dec-2017 |
jdolecek | update from HEAD
|
1.153.2.2 |
| 20-Aug-2014 |
tls | Rebase to HEAD as of a few days ago.
|
1.153.2.1 |
| 23-Jun-2013 |
tls | resync from head
|
1.154.4.1 |
| 18-May-2014 |
rmind | sync with head
|
1.155.6.2 |
| 05-Feb-2017 |
skrll | Sync with HEAD
|
1.155.6.1 |
| 06-Jun-2015 |
skrll | Sync with HEAD
|
1.155.4.1 |
| 05-Jan-2016 |
snj | Pull up following revision(s) (requested by skrll in ticket #1056):
sys/kern/kern_lock.c: revision 1.156
Allow sleeping in the idle lwp if the cpu isn't running yet.
OK'ed by rmind a while ago.
|
1.157.4.1 |
| 21-Apr-2017 |
bouyer | Sync with HEAD
|
1.157.2.1 |
| 20-Mar-2017 |
pgoyette | Sync with HEAD
|
1.158.6.3 |
| 31-Jul-2023 |
martin | Pull up following revision(s) (requested by riastradh in ticket #1860):
sys/kern/kern_rwlock.c: revision 1.67
sys/kern/kern_lock.c: revision 1.182
sys/kern/kern_mutex.c: revision 1.102
(all via patch)
Sprinkle __predict_{true,false} for panicstr checks
|
1.158.6.2 |
| 13-Jan-2018 |
snj | Pull up following revision(s) (requested by ozaki-r in ticket #495):
lib/librumpuser/rumpfiber.c: revision 1.13
lib/librumpuser/rumpuser_pth.c: revision 1.46
lib/librumpuser/rumpuser_pth_dummy.c: revision 1.18
sys/kern/kern_condvar.c: revision 1.40
sys/kern/kern_lock.c: revision 1.161
sys/kern/kern_mutex.c: revision 1.68
sys/kern/kern_rwlock.c: revision 1.48
sys/rump/include/rump/rumpuser.h: revision 1.115
sys/rump/librump/rumpkern/locks.c: revision 1.76-1.79
Apply C99-style struct initialization to lockops_t
--
Tweak LOCKDEBUG macros (NFC)
--
Distinguish spin mutex and adaptive mutex on rump kernels for LOCKDEBUG
Formerly rump kernels treated the two types of mutexes as both adaptive for LOCKDEBUG for some reason. Now we can detect violations of mutex restrictions on rump kernels, such as taking an adaptive mutex while holding a spin mutex, as well as on normal kernels.
--
rump: check if the mutex is surely owned by the caller in mutex_exit
Unlocking a not-owned mutex wasn't detected well (it could detect if the mutex is not held by anyone, but that's not enough). Let's check it (the check is the same as the normal kernel's mutex). If LOCKDEBUG is enabled, give the check over to LOCKDEBUG because it can provide better debugging information.
|
1.158.6.1 |
| 30-Nov-2017 |
martin | Pull up following revision(s) (requested by ozaki-r in ticket #405):
sys/sys/pserialize.h: revision 1.2
sys/kern/kern_lock.c: revision 1.160
sys/kern/subr_pserialize.c: revision 1.9
sys/rump/librump/rumpkern/emul.c: revision 1.184
sys/rump/librump/rumpkern/emul.c: revision 1.185
sys/rump/librump/rumpkern/rump.c: revision 1.330
Implement debugging feature for pserialize(9)
The debugging feature detects violations of pserialize constraints. It causes a panic:
- if a context switch happens in a read section, or
- if a sleepable function is called in a read section.
The feature is enabled only if LOCKDEBUG is on.
Discussed on tech-kern@
Add missing inclusion of pserialize.h (fix build)
|
1.161.4.2 |
| 08-Apr-2020 |
martin | Merge changes from current as of 20200406
|
1.161.4.1 |
| 10-Jun-2019 |
christos | Sync with HEAD
|
1.163.2.1 |
| 31-Jul-2023 |
martin | Pull up following revision(s) (requested by riastradh in ticket #1677):
sys/kern/kern_rwlock.c: revision 1.67
sys/kern/kern_lock.c: revision 1.182
sys/kern/kern_mutex.c: revision 1.102
Sprinkle __predict_{true,false} for panicstr checks
|
1.164.2.3 |
| 29-Feb-2020 |
ad | Sync with head.
|
1.164.2.2 |
| 25-Jan-2020 |
ad | Sync with head.
|
1.164.2.1 |
| 17-Jan-2020 |
ad | Sync with head.
|
1.171.2.1 |
| 03-Jan-2021 |
thorpej | Sync w/ HEAD.
|
1.181.2.1 |
| 31-Jul-2023 |
martin | Pull up following revision(s) (requested by riastradh in ticket #265):
sys/kern/kern_rwlock.c: revision 1.67
sys/kern/kern_lock.c: revision 1.182
sys/kern/kern_mutex.c: revision 1.102
Sprinkle __predict_{true,false} for panicstr checks
|