.Pp
.Cd "options DIAGNOSTIC"
.Cd "options LOCKDEBUG"
.Sh DESCRIPTION
Mutexes are used in the kernel to implement mutual exclusion among LWPs
(lightweight processes) and interrupt handlers.
.Pp
The
.Vt kmutex_t
type provides storage for the mutex object.
This should be treated as an opaque object and not examined directly by
consumers.
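.Pp
For example, a driver might embed the storage for a mutex in its private
data structure (the structure and member names below are only illustrative):
.Bd -literal
#include <sys/mutex.h>

struct example_softc {
	kmutex_t	sc_lock;	/* protects sc_count */
	int		sc_count;
};
.Ed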
.Pp
Mutexes replace the
.Xr spl 9
system traditionally used to provide synchronization between interrupt
handlers and LWPs, and in combination with reader / writer locks replace the
.Xr lockmgr 9
facility.
.Sh OPTIONS
.Bl -tag -width abcd
.It Cd "options DIAGNOSTIC"
.Pp
Kernels compiled with the
.Dv DIAGNOSTIC
option perform basic sanity checks on mutex operations.
.It Cd "options LOCKDEBUG"
.Pp
Kernels compiled with the
.Dv LOCKDEBUG
option perform potentially CPU intensive sanity checks on mutex operations.
.El
.Sh FUNCTIONS
.Bl -tag -width abcd
.It Fn mutex_init "mtx" "type" "ipl"
.Pp
Dynamically initialize a mutex for use.
.Pp
No other operations can be performed on a mutex until it has been initialized.
Once initialized, all types of mutex are manipulated using the same interface.
Note that
.Fn mutex_init
may block in order to allocate memory.
.Pp
The
.Fa type
argument must be given as
.Dv MUTEX_DEFAULT .
Other constants are defined but are for low-level system use and are not an
endorsed, stable part of the interface.
.Pp
The type of mutex returned depends on the
.Fa ipl
argument:
.Bl -tag -width abcd
.It IPL_NONE, or one of the IPL_SOFT* constants
.Pp
An adaptive mutex will be returned.
Adaptive mutexes provide mutual exclusion between LWPs, and between LWPs and
soft interrupt handlers.
.Pp
Adaptive mutexes cannot be acquired from a hardware interrupt handler.
An LWP may either sleep or busy-wait when attempting to acquire an adaptive
mutex that is already held.
.It IPL_VM, IPL_SCHED, IPL_HIGH
.Pp
A spin mutex will be returned.
Spin mutexes provide mutual exclusion between LWPs, and between LWPs and
interrupt handlers.
.Pp
The
.Fa ipl
argument is used to pass a system interrupt priority level (IPL) that will
block all interrupt handlers that may try to acquire the mutex.
.Pp
LWPs that own spin mutexes may not sleep, and therefore must not try to
acquire adaptive mutexes or other sleep locks.
.Pp
A processor will always busy-wait when attempting to acquire a spin mutex
that is already held.
.El
.Pp
See
.Xr spl 9
for further information on interrupt priority levels (IPLs).
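.Pp
As an illustration, a driver might initialize one adaptive mutex for code
paths that run only in LWP or soft interrupt context, and one spin mutex
shared with a hardware interrupt handler established at IPL_VM (the lock
names below are only illustrative):
.Bd -literal
/* Adaptive mutex: taken only by LWPs and soft interrupt handlers. */
mutex_init(\*[Am]driver_lock, MUTEX_DEFAULT, IPL_NONE);

/* Spin mutex: also taken from a hardware interrupt handler at IPL_VM. */
mutex_init(\*[Am]driver_intr_lock, MUTEX_DEFAULT, IPL_VM);
.Ed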
.It Fn mutex_destroy "mtx"
.Pp
Release resources used by a mutex.
The mutex may not be used after it has been destroyed.
.Fn mutex_destroy
may block in order to free memory.
.It Fn mutex_enter "mtx"
.Pp
Acquire a mutex.
If the mutex is already held, the caller will block and not return until the
mutex is acquired.
.Pp
Mutexes and other types of locks must always be acquired in a consistent order
with respect to each other.
Otherwise, the potential for system deadlock exists.
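.Pp
For example, if the convention is that
.Dv first_lock
is always acquired before
.Dv second_lock ,
then every code path that needs both locks should take them in that order
(the lock names are only illustrative):
.Bd -literal
/* Take both locks in the agreed order. */
mutex_enter(\*[Am]first_lock);
mutex_enter(\*[Am]second_lock);

/* ... access data protected by both locks ... */

mutex_exit(\*[Am]second_lock);
mutex_exit(\*[Am]first_lock);
.Ed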
.Pp
Adaptive mutexes and other types of lock that can sleep may not be acquired
once a spin mutex is held by the caller.
.It Fn mutex_exit "mtx"
.Pp
Release a mutex.
The mutex must have been previously acquired by the caller.
Mutexes may be released out of order as needed.
.It Fn mutex_owned "mtx"
.Pp
For adaptive mutexes, return non-zero if the current LWP holds the mutex.
For spin mutexes, return non-zero if the mutex is held, potentially by the
current processor.
Otherwise, return zero.
.Pp
.Fn mutex_owned
is provided for making diagnostic checks to verify that a lock is held.
For example:
.Bd -literal
KASSERT(mutex_owned(\*[Am]driver_lock));
.Ed
.Pp
It should not be used to make locking decisions at run time, or to verify that
a lock is unheld.
.It Fn mutex_spin_enter "mtx"
.Pp
Equivalent to
.Fn mutex_enter ,
but may only be used when it is known that
.Ar mtx
is a spin mutex.
On some architectures, this can substantially reduce the cost of acquiring a
spin mutex.
.It Fn mutex_spin_exit "mtx"
.Pp
Equivalent to
.Fn mutex_exit ,
but may only be used when it is known that
.Ar mtx
is a spin mutex.
On some architectures, this can substantially reduce the cost of releasing a
spin mutex.
.It Fn mutex_tryenter "mtx"
.Pp
Try to acquire a mutex, but do not block if the mutex is already held.
Returns non-zero if the mutex was acquired, or zero if the mutex was already
held.
.Pp
.Fn mutex_tryenter
can be used as an optimization when acquiring locks in the wrong order.
For example, in a setting where the convention is that
.Dv first_lock
must be acquired before
.Dv second_lock ,
the following can be used to optimistically lock in reverse order:
.Bd -literal
/* We hold second_lock, but not first_lock. */
KASSERT(mutex_owned(\*[Am]second_lock));

if (!mutex_tryenter(\*[Am]first_lock)) {
	/* Failed to get it - lock in the correct order. */
	mutex_exit(\*[Am]second_lock);
	mutex_enter(\*[Am]first_lock);
	mutex_enter(\*[Am]second_lock);

	/*
	 * We may need to recheck any conditions the code
	 * path depends on, as we released second_lock
	 * briefly.
	 */
}
.Ed
.El
.Sh CODE REFERENCES
This section describes places within the
.Nx
source tree where code implementing mutexes can be found.
All pathnames are relative to
.Pa /usr/src .
.Pp
The core of the mutex implementation is in
.Pa sys/kern/kern_mutex.c .
.Pp
The header file
.Pa sys/sys/mutex.h
describes the public interface, and interfaces that machine-dependent code must
provide to support mutexes.
.Sh SEE ALSO
.Xr condvar 9 ,
.Xr mb 9 ,
.Xr rwlock 9 ,
.Xr spl 9
.Pp
.Rs
.%A Jim Mauro
.%A Richard McDougall
.%T Solaris Internals: Core Kernel Architecture
.%I Prentice Hall
.%D 2001
.%O ISBN 0-13-022496-0
.Re
.Sh HISTORY
The mutex primitives first appeared in
.Nx 5.0 .