
Lines Matching refs:so_lock

130  * o socket::so_lock can change on the fly.  The low level routines used
131  *   to lock sockets are aware of this.  When so_lock is acquired, the
132  *   routine locking must check to see if so_lock still points to the
133  *   lock that was acquired.  If so_lock has changed in the meantime, the
139  * o In order to mutate so_lock, the lock pointed to by the current value
140  *   of so_lock must be held: i.e., the socket must be held locked by the
142  *   memory accesses being reordered, and can set so_lock to the desired
143  *   value.  If the lock pointed to by the new value of so_lock is not
147  * o If so_lock is mutated, and the previous lock referred to by so_lock
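Read together, the comment fragments above (source lines 130-147) describe the protocol: so_lock may be swapped while the socket lives, the locking routine must re-check the pointer after acquiring, and a mutation requires holding the current lock and issuing a release barrier before publishing the new pointer. A minimal sketch of the mutation side, assuming NetBSD's kmutex, membar and mutex_obj primitives; the function name is hypothetical and the handling of the old lock's reference is elided:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mutex.h>
#include <sys/atomic.h>
#include <sys/socketvar.h>

/* Hypothetical helper; the later sketches reuse these headers. */
static void
so_change_lock(struct socket *so, kmutex_t *newlock)
{

	/* Rule one: the lock currently named by so_lock must be held. */
	KASSERT(mutex_owned(so->so_lock));

	mutex_obj_hold(newlock);	/* reference for this socket */
	membar_release();		/* order prior accesses before the publish */
	so->so_lock = newlock;

	/*
	 * If newlock is not held by this thread, the socket must now be
	 * treated as unlocked.  Per the last rule above, the old lock
	 * must stay valid while any thread may still hold the stale
	 * pointer.
	 */
}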
332 * so_lock is stable while we hold the socket locked, so no
335 mutex_obj_hold(head->so_lock);
336 so->so_lock = head->so_lock;
416 mutex_obj_free(so->so_lock);
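Source lines 335-336 and 416 pair up: a socket that adopts another socket's lock takes a reference on it with mutex_obj_hold(), and that reference is dropped with mutex_obj_free() when the socket goes away, so the kmutex object outlives every socket pointing at it. A sketch of that lifecycle, with hypothetical helper names:

static void
so_share_lock(struct socket *so, struct socket *head)
{

	/* head is held locked, so head->so_lock is stable (line 332). */
	mutex_obj_hold(head->so_lock);	/* take a reference */
	so->so_lock = head->so_lock;	/* both sockets share one lock */
}

static void
so_drop_lock(struct socket *so)
{

	mutex_obj_free(so->so_lock);	/* drop reference; freed at zero */
}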
542 lock = so->so_lock;
547 if (__predict_false(lock != atomic_load_relaxed(&so->so_lock)))
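Lines 542/547 (repeated at 1547/1553 and 1583/1588 below) are the fast path of the acquire-and-re-check pattern: take the lock you observed, then verify that so_lock still points at it. A sketch with hypothetical names; the slow path it falls back to is the retry loop shown at lines 1476-1478:

static void solock_retry_sketch(struct socket *, kmutex_t *);

static void
solock_sketch(struct socket *so)
{
	kmutex_t *lock;

	lock = so->so_lock;
	mutex_enter(lock);

	/* Did so_lock change while we were blocked on the old lock? */
	if (__predict_false(lock != atomic_load_relaxed(&so->so_lock)))
		solock_retry_sketch(so, lock);
}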
608 so->so_lock = lock;
1476 while (lock != atomic_load_relaxed(&so->so_lock)) {
1478 lock = atomic_load_consume(&so->so_lock);
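Lines 1476-1478 show the matching slow path: keep dropping the now-irrelevant lock and chasing the current value of so_lock until the pointer and the held lock agree. Continuing the hypothetical names from the sketch above:

static void
solock_retry_sketch(struct socket *so, kmutex_t *lock)
{

	while (lock != atomic_load_relaxed(&so->so_lock)) {
		mutex_exit(lock);	/* drop the stale lock */
		lock = atomic_load_consume(&so->so_lock);
		mutex_enter(lock);	/* take the current one and re-check */
	}
}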
1488 * Used only for diagnostic assertions, so so_lock should be
1491 return mutex_owned(so->so_lock);
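Line 1491 completes the single-socket predicate: owning the mutex that so_lock names stands in for "the socket is locked". A sketch of how such a diagnostic predicate is typically consumed (KASSERT is the NetBSD kernel assertion macro; the consuming function is hypothetical):

static void
so_touch_state(struct socket *so)
{

	KASSERT(solocked(so));	/* caller must hold so_lock */
	/* ... access fields protected by so_lock ... */
}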
1500  * Used only for diagnostic assertions, so so_lock should be
1503 lock = so1->so_lock;
1504 if (lock != so2->so_lock)
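Lines 1503-1504 show the two-socket variant: asserting ownership over a pair is only meaningful if both sockets currently share one lock. A sketch of the whole predicate, assuming it returns bool like the mutex_owned() result at line 1491:

static bool
solocked2_sketch(struct socket *so1, struct socket *so2)
{
	kmutex_t *lock;

	/* Diagnostic-only, so so_lock is taken to be stable here. */
	lock = so1->so_lock;
	if (lock != so2->so_lock)
		return false;
	return mutex_owned(lock);
}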
1515 if (so->so_lock == NULL) {
1518 so->so_lock = lock;
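Lines 1515 and 1518 suggest a setup path that gives a not-yet-locked socket its first lock. A sketch; the matched lines do not show which lock serves as the default, so softnet_lock here is an assumption:

static void
sosetlock_sketch(struct socket *so)
{

	if (so->so_lock == NULL) {
		kmutex_t *lock = softnet_lock;	/* assumed default */

		mutex_obj_hold(lock);		/* reference for this socket */
		so->so_lock = lock;
	}
}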
1547 lock = so->so_lock;
1553 if (__predict_false(lock != atomic_load_relaxed(&so->so_lock)))
1583 lock = so->so_lock;
1588 if (__predict_false(lock != atomic_load_relaxed(&so->so_lock)))