History log of /src/sys/kern/sched_4bsd.c
Revision  Date  Author  Comments
 1.47  17-Jan-2025  mrg partly prepare for more than 2-level CPU speed scheduler support

put the code that looks at SPCF_IDLE and SPCF_1STCLASS mostly behind
functions that can grow support for more than 2 CPU classes.
there are 4 new functions, 2 of them simple aliases for the 1st:

bool cpu_is_type(struct cpu_info *ci, int wanted);
bool cpu_is_idle_1stclass(struct cpu_info *ci);
bool cpu_is_1stclass(struct cpu_info *ci);
bool cpu_is_better(struct cpu_info *ci1, struct cpu_info *ci2);

with this in place, we retain the preference to run on 1st-class CPUs,
while also allowing cpu_is_better() to grow support for multiple
non-1st-class CPUs. ultimately, I envision a priority number with which
we can mark the fastest turbo-speed cores ahead of others, for the case
where we can detect this.

XXX: use struct schedstate_percpu instead of cpu_info?

NFCI.
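
A minimal sketch of what these helpers could look like, assuming the class
and idle state live in schedstate_percpu::spc_flags as SPCF_1STCLASS and
SPCF_IDLE (illustrative only, not the committed code):

static inline bool
cpu_is_type(struct cpu_info *ci, int wanted)
{
	/* True when all of the wanted SPCF_* bits are set for this CPU. */
	return (ci->ci_schedstate.spc_flags & wanted) == wanted;
}

static inline bool
cpu_is_idle_1stclass(struct cpu_info *ci)
{
	/* Alias: idle and 1st-class. */
	return cpu_is_type(ci, SPCF_IDLE | SPCF_1STCLASS);
}

static inline bool
cpu_is_1stclass(struct cpu_info *ci)
{
	/* Alias: 1st-class, idle or not. */
	return cpu_is_type(ci, SPCF_1STCLASS);
}

static inline bool
cpu_is_better(struct cpu_info *ci1, struct cpu_info *ci2)
{
	/* For now, prefer a 1st-class CPU over one that is not; with more
	   than 2 classes this could compare a per-CPU priority number. */
	return cpu_is_1stclass(ci1) && !cpu_is_1stclass(ci2);
}
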
 1.46  26-Oct-2022  riastradh sys/sched.h: New home for extern sched_pstats_ticks in kernel.
 1.45  09-Aug-2021  andvar fix typos in asymmetry, asymmetric(al), symmetrical.
 1.44  23-May-2020  ad Oops. If a SCHED_RR thread is preempted and has exceeded its timeslice, it
needs to go to the back of the run queue so that round-robin actually happens;
otherwise it should go to the front.
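
Illustratively, the choice at preemption time is (hypothetical helper names;
the actual code differs):

	/* A preempted SCHED_RR thread that used up its slice goes to the
	   tail so its peers get a turn; otherwise it keeps its place. */
	if (l->l_class == SCHED_RR && timeslice_expired(l))
		runqueue_enqueue_tail(spc, l);
	else
		runqueue_enqueue_head(spc, l);
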
 1.43  12-Mar-2020  ad Put back missing set of SPCF_SHOULDYIELD.
 1.42  09-Jan-2020  ad - Many small tweaks to the SMT awareness in the scheduler. It does a much
better job now at keeping all physical CPUs busy, while using the extra
threads to help out. In particular, during preempt() if we're using SMT,
try to find a better CPU to run on and teleport curlwp there.

- Change the CPU topology stuff so it can work on asymmetric systems. This
mainly entails rearranging one of the CPU lists so it makes sense in all
configurations.

- Add a parameter to cpu_topology_set() to note that a CPU is "slow", for
cases where there are fast CPUs and slow CPUs, like with the Rockchip RK3399.
Extend the SMT awareness to try and handle that situation too (keep fast
CPUs busy, use slow CPUs as helpers).
 1.41  06-Dec-2019  ad branches: 1.41.2;
sched_tick(): don't try to optimise something that's called 10 times a
second; it's a fine way to introduce bugs (and I did). Use the MI
interface for rescheduling, which always does the correct thing.
 1.40  01-Dec-2019  ad Fix false sharing problems with cpu_info. Identified with tprof(8).
This was a very nice win in my tests on a 48 CPU box.

- Reorganise cpu_data slightly according to usage.
- Put cpu_onproc into struct cpu_info alongside ci_curlwp (now ci_onproc).
- On x86, put some items in their own cache lines according to usage, like
the IPI bitmask and ci_want_resched.
 1.39  01-Dec-2019  ad PR port-sparc/54718 (sparc install hangs since recent scheduler changes)

- sched_tick: cpu_need_resched is no longer the correct thing to do here.
All we need to do is OR the request into the local ci_want_resched.

- sched_resched_cpu: we need to set RESCHED_UPREEMPT even on softint LWPs,
especially in the !__HAVE_FAST_SOFTINTS case, because the LWP with the
LP_INTR flag could be running via softint_overlay() - i.e. it has been
temporarily borrowed from a user process, and it needs to notice the
resched after it has stopped running softints.
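
For instance, noting the request can be as simple as (a sketch, assuming
ci_want_resched is an unsigned int updated with the sys/atomic.h primitives):

	/* sched_tick(): record the wish on this CPU only; no cross-call. */
	atomic_or_uint(&curcpu()->ci_want_resched, RESCHED_UPREEMPT);
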
 1.38  29-Nov-2019  ad Don't try to kpreempt a CPU hog unless __HAVE_PREEMPTION (oops).
 1.37  23-Nov-2019  ad Pull in sys/atomic.h.
 1.36  23-Nov-2019  ad Minor scheduler cleanup:

- Adapt to cpu_need_resched() changes. Avoid lost & duplicate IPIs and ASTs.
sched_resched_cpu() and sched_resched_lwp() contain the logic for this.
- Changes for LSIDL to make the locking scheme match the intended design.
- Reduce lock contention and false sharing further.
- Numerous small bugfixes, including some corrections for SCHED_FIFO/RT.
- Use setrunnable() in more places, and merge cut & pasted code.
 1.35  03-Sep-2018  riastradh Rename min/max -> uimin/uimax for better honesty.

These functions are defined on unsigned int. The generic name
min/max should not silently truncate to 32 bits on 64-bit systems.
This is purely a name change -- no functional change intended.

HOWEVER! Some subsystems have

#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))

even though our standard name for that is MIN/MAX. Although these
may invite multiple evaluation bugs, these do _not_ cause integer
truncation.

To avoid `fixing' these cases, I first changed the name in libkern,
and then compile-tested every file where min/max occurred in order to
confirm that it failed -- and thus confirm that nothing shadowed
min/max -- before changing it.

I have left a handful of bootloaders that are too annoying to
compile-test, and some dead code:

cobalt ews4800mips hp300 hppa ia64 luna68k vax
acorn32/if_ie.c (not included in any kernels)
macppc/if_gm.c (superseded by gem(4))

It should be easy to fix the fallout once identified -- this way of
doing things fails safe, and the goal here, after all, is to _avoid_
silent integer truncations, not introduce them.

Maybe one day we can reintroduce min/max as type-generic things that
never silently truncate. But we should avoid doing that for a while,
so that existing code has a chance to be detected by the compiler for
conversion to uimin/uimax without changing the semantics until we can
properly audit it all. (Who knows, maybe in some cases integer
truncation is actually intended!)
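
To illustrate the truncation that the rename makes visible (a standalone
sketch, not code from the tree; example() is a made-up name):

#include <stddef.h>

static unsigned int
uimin(unsigned int a, unsigned int b)	/* defined on unsigned int */
{
	return a < b ? a : b;
}

#define MIN(a, b)	((a) < (b) ? (a) : (b))

size_t
example(void)
{
	size_t big = (size_t)1 << 40;
	size_t r1 = uimin(big, 7);	/* big truncated to 0, so r1 == 0 */
	size_t r2 = MIN(big, 7);	/* r2 == 7; no truncation, but the
					   arguments may be evaluated twice */
	return r1 + r2;
}
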
 1.34  12-Jul-2018  maxv Remove the kernel PMC code. Sent yesterday on tech-kern@.

This change:

* Removes "options PERFCTRS", the associated includes, and the associated
ifdefs. In doing so, it removes several XXXSMPs in the MI code, which is
good.

* Removes the PMC code of ARM XSCALE.

* Removes all the pmc.h files. They were all empty, except for ARM XSCALE.

* Reorders the x86 PMC code not to rely on the legacy pmc.h file. The
definitions are put in sysarch.h.

* Removes the kern/sys_pmc.c file, and along with it, the sys_pmc_control
and sys_pmc_get_info syscalls. They are marked as OBSOL in kern,
netbsd32 and rump.

* Removes the pmc_evid_t and pmc_ctr_t types.

* Removes all the associated man pages. The sets are marked as obsolete.
 1.33  14-Jul-2017  maxv branches: 1.33.4; 1.33.6;
Should be loadfactor().
 1.32  14-Jul-2017  maxv Revert rev1.26. l_estcpu is increased by only one cpu, not all of them.
 1.31  08-Jul-2017  maxv explain a bit
 1.30  24-Jun-2014  maxv branches: 1.30.4; 1.30.20;
'miliseconds' -> 'milliseconds'.
 1.29  25-Feb-2014  pooka branches: 1.29.2;
Ensure that the top level sysctl nodes (kern, vfs, net, ...) exist before
the sysctl link sets are processed, and remove redundancy.

Shaves >13kB off of an amd64 GENERIC, not to mention >1k duplicate
lines of code.
 1.28  02-Dec-2011  yamt branches: 1.28.8; 1.28.12;
update a comment
 1.27  27-Jul-2011  uebayasi branches: 1.27.2;
These don't need uvm/uvm_extern.h.
 1.26  14-Apr-2011  yamt bluntly balance estcpu decay for ncpu > 1. PR/31966.
 1.25  31-May-2009  yamt branches: 1.25.4; 1.25.6;
sched_pstats_hook: fix estcpu decay.
this makes my desktop usable when running "make -j4".
 1.24  07-Oct-2008  rmind branches: 1.24.4; 1.24.8; 1.24.10;
- Replace lwp_t::l_sched_info with a union: pointer and timeslice.
- Change minimal time-quantum to ~20 ms.
- Thus remove unneeded pool in M2, and unused sched_lwp_exit().
- Do not increase l_slptime twice for SCHED_4BSD (regression fix).
 1.23  25-May-2008  ad branches: 1.23.4;
sched_tick:

- Do timeslicing for SCHED_RR threads. At ~16Hz it's too slow but better
than nothing. XXX

- If a SCHED_OTHER thread has hogged the CPU for 1/8s without taking a
trip through mi_switch(), try to force a kernel preemption to give other
threads a chance.
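
A rough sketch of the hog check described above (illustrative; the field and
constant names are assumptions, not necessarily what this revision uses):

	/* In sched_tick(), roughly 16 times a second: */
	if (l->l_class == SCHED_OTHER &&
	    hardclock_ticks - l->l_rticks >= hz / 8) {
		/* Ran for 1/8s without passing through mi_switch();
		   ask for a kernel preemption so others get a chance. */
		cpu_need_resched(curcpu(), RESCHED_KPREEMPT);
	}
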
 1.22  19-May-2008  rmind - Make periodical balancing mandatory.
- Fix priority raising in M2 (broken after making runqueues mandatory).
 1.21  28-Apr-2008  martin branches: 1.21.2;
Remove clause 3 and 4 from TNF licenses
 1.20  24-Apr-2008  ad branches: 1.20.2;
Merge proc::p_mutex and proc::p_smutex into a single adaptive mutex, since
we no longer need to guard against access from hardware interrupt handlers.

Additionally, if cloning a process with CLONE_SIGHAND, arrange to have the
child process share the parent's lock so that signal state may be kept in
sync. Partially addresses PR kern/37437.
 1.19  17-Apr-2008  yamt branches: 1.19.2;
sched_tick: don't expire timeslices for SCHED_FIFO lwps.
 1.18  14-Apr-2008  yamt remove unnecessary __MUTEX_PRIVATE.
 1.17  14-Apr-2008  yamt make decay_cpu static.
 1.16  12-Apr-2008  ad Take the run queue management code from the M2 scheduler, and make it
mandatory. Remove the 4BSD run queue code. Effects:

- Pluggable scheduler is only responsible for co-ordinating timeshared jobs.
- All systems run with per-CPU run queues.
- 4BSD scheduler gets processor sets / affinity.
- 4BSD scheduler gets a significant performance boost on some workloads.

Discussed on tech-kern@.
 1.15  02-Apr-2008  ad sched_tick: only cause a preemption if the current thread is hogging the CPU,
or if we are idle and should look for new work (matters with per-CPU queues).
 1.14  27-Feb-2008  matt Convert to ANSI definitions from old-style definitions.
Remember that func() is not ANSI, func(void) is.
 1.13  14-Feb-2008  ad branches: 1.13.2; 1.13.6;
Make schedstate_percpu::spc_lwplock an externally allocated item. Remove
the hacks in sparc/cpu.c to reinitialize it. This should be in its own
cache line but that's another change.
 1.12  15-Jan-2008  rmind Implementation of processor-sets, affinity and POSIX real-time extensions.
Add schedctl(8) - a program to control scheduling of processes and threads.

Notes:
- This is supported only by SCHED_M2;
- Migration of the LWP mechanism will be revisited;

Proposed on: <tech-kern>. Reviewed by: <ad>.
 1.11  21-Dec-2007  ad KM_NOSLEEP -> KM_SLEEP for clarity.
 1.10  15-Dec-2007  ad sched_mutex -> runqueue_lock
 1.9  05-Dec-2007  ad branches: 1.9.4;
Match the docs: MUTEX_DRIVER/SPIN are now only for porting code written
for Solaris.
 1.8  06-Nov-2007  ad branches: 1.8.2;
Merge scheduler changes from the vmlocking branch. All discussed on
tech-kern:

- Invert priority space so that zero is the lowest priority. Rearrange
number and type of priority levels into bands. Add new bands like
'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for
sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
 1.7  10-Oct-2007  rmind branches: 1.7.2; 1.7.4;
sched_tick: There is no need to re-schedule when
CURCPU_IDLE_P() is true. Simplify a little bit.

OK by <ad>.
 1.6  09-Oct-2007  rmind Import of SCHED_M2 - the implementation of a new scheduler, based on
the original approach of SVR4 with some inspiration for balancing and
migration taken from Solaris. It implements per-CPU runqueues, provides
real-time (RT) and time-sharing (TS) queues, is ready to support POSIX
real-time extensions, and is also prepared for CPU affinity support.

The following lines in the kernel config enable SCHED_M2:

no options SCHED_4BSD
options SCHED_M2

The scheduler seems to be stable. Further work will come soon.

http://mail-index.netbsd.org/tech-kern/2007/10/04/0001.html
http://www.netbsd.org/~rmind/m2/mysql_bench_ro_4x_local.png
Thanks <ad> for the benchmarks!
 1.5  08-Oct-2007  ad Merge run time accounting changes from the vmlocking branch. These make
the LWP "start time" per-thread instead of per-CPU.
 1.4  04-Aug-2007  ad branches: 1.4.2; 1.4.4; 1.4.6; 1.4.8;
Add cpuctl(8). For now this is not much more than a toy for debugging and
benchmarking that allows taking CPUs online/offline.
 1.3  09-Jul-2007  ad branches: 1.3.2; 1.3.4; 1.3.8;
Merge some of the less invasive changes from the vmlocking branch:

- kthread, callout, devsw API changes
- select()/poll() improvements
- miscellaneous MT safety improvements
 1.2  17-May-2007  yamt merge yamt-idlelwp branch. asked by core@. some ports still needs work.

from doc/BRANCHES:

idle lwp, and some changes depending on it.

1. separate context switching and thread scheduling.
(cf. gmcgarry_ctxsw)
2. implement idle lwp.
3. clean up related MD/MI interfaces.
4. make scheduler(s) modular.
 1.1  20-Feb-2007  rmind branches: 1.1.2; 1.1.6;
file sched_4bsd.c was initially added on branch yamt-idlelwp.
 1.1.6.16  05-Nov-2007  ad Oops. Continue to decay l_estcpu for threads that are runnable.
 1.1.6.15  05-Nov-2007  ad Expand the LWP priority space again to include 32 levels for kthreads,
so that they always run before user processes.
 1.1.6.14  05-Nov-2007  ad - Locking tweaks for estcpu/nice. XXX The schedclock mustn't run above
IPL_SCHED.
- Hide most references to l_estcpu.
- l_policy was here first, but l_class is referenced in more places now.
 1.1.6.13  01-Nov-2007  ad - Fix interactivity problems under high load. Because soft interrupts
are being stacked on top of regular LWPs, more often than not aston()
was being called on a soft interrupt thread instead of a user thread,
meaning that preemption was not happening on EOI.

- Don't use bool in a couple of data structures. Sub-word writes are not
always atomic and may clobber other fields in the containing word.

- For SCHED_4BSD, make p_estcpu per thread (l_estcpu). Rework how the
dynamic priority level is calculated - it's much better behaved now.

- Kill the l_usrpri/l_priority split now that priorities are no longer
directly assigned by tsleep(). There are three fields describing LWP
priority:

l_priority: Dynamic priority calculated by the scheduler.
This does not change for kernel/realtime threads,
and always stays within the correct band. Eg for
timeshared LWPs it never moves out of the user
priority range. This is basically what l_usrpri
was before.

l_inheritedprio: Lent to the LWP due to priority inheritance
(turnstiles).

l_kpriority: A boolean value set true the first time an LWP
sleeps within the kernel. This indicates that the LWP
should get a priority boost as compensation for blocking.
lwp_eprio() now does the equivalent of sched_kpri() if
the flag is set. The flag is cleared in userret().

- Keep track of scheduling class (OTHER, FIFO, RR) in struct lwp, and use
this to make decisions in a few places where we previously tested for a
kernel thread.

- Partially fix itimers and usr/sys/intr time accounting in the presence
of software interrupts.

- Use kthread_create() to create idle LWPs. Move priority definitions
from the various modules into sys/param.h.

- newlwp -> lwp_create
 1.1.6.12  24-Oct-2007  yamt resetpriority: fix bounds check.
 1.1.6.11  18-Oct-2007  ad Update for soft interrupt changes. See kern_softint.c 1.1.2.17 for details.
 1.1.6.10  10-Oct-2007  rmind Sync with HEAD.
 1.1.6.9  08-Oct-2007  ad Try to fix a number of problems with the scheduler since the priority scale
was turned on its head. There is still a problem: sometimes preemption of
user LWPs seems to stop working and interactivity gets pretty bad.
 1.1.6.8  26-Aug-2007  ad - Add a generic cross-call facility. Right now this only does threaded cross
calls but that should be extended to do IPIs. These are deliberately set
up as bound kthreads (and not soft interrupts or something else) so that
the called functions can use the spl framework or disable preemption in
order to guarantee exclusive access to CPU-local data.

- Use cross calls to take CPUs online or offline. Ok to do since bound LWPs
still execute on offline CPUs. As a result schedstate_percpu::spc_flags
is CPU-local again and doesn't need locking.
 1.1.6.7  21-Aug-2007  ad A few minor corrections around calls to cpu_need_resched().
 1.1.6.6  20-Aug-2007  ad Sync with HEAD.
 1.1.6.5  14-Jul-2007  ad Make it possible to track time spent by soft interrupts as is done for
normal LWPs, and provide a sysctl to switch it on/off. Not enabled by
default because microtime() is not free. XXX Not happy with this but
I want to get it out of my local tree for the time being.
 1.1.6.4  07-Jul-2007  ad - Remove the interrupt priority range and use 'kernel RT' instead,
since only soft interrupts are threaded.
- Rename l->l_pinned to l->l_switchto. It might be useful for (re-)
implementing SA or doors.
- Simplify soft interrupt dispatch so MD code is doing as little as
possible that is new.
 1.1.6.3  01-Jul-2007  ad - Adapt to callout API change.
- Add a counter to track how often soft interrupts sleep.
 1.1.6.2  17-Jun-2007  ad - Increase the number of thread priorities from 128 to 256. How the space
is set up is to be revisited.
- Implement soft interrupts as kernel threads. A generic implementation
is provided, with hooks for fast-path MD code that can run the interrupt
threads over the top of other threads executing in the kernel.
- Split vnode::v_flag into three fields, depending on how the flag is
locked (by the interlock, by the vnode lock, by the file system).
- Miscellaneous locking fixes and improvements.
 1.1.6.1  08-Jun-2007  ad Sync with head.
 1.1.2.31  13-May-2007  ad Assign a per-CPU lock to LWPs as they transition into the ONPROC state.

http://mail-index.netbsd.org/tech-kern/2007/05/06/0003.html
 1.1.2.30  07-May-2007  yamt update comments.
 1.1.2.29  30-Apr-2007  rmind - Remove KERN_SCHED, do not break KERN_MAXID and use dynamic node creation,
since we are moving to dynamic sysctl anyway - note by <mrg>.
- Remove sched_slept() hook - we are not going to use it.
 1.1.2.28  25-Apr-2007  yamt unwrap a short line.
 1.1.2.27  22-Apr-2007  yamt remove inline from sched_pstats_hook.
 1.1.2.26  18-Apr-2007  ad sched_curcpu_runnable_p: also consider the per-cpu queue.
 1.1.2.25  16-Apr-2007  ad Add rudimentary support for bound kernel threads. Needed for handling
interrupts with LWPs, and may also be useful for workqueues.
 1.1.2.24  03-Apr-2007  matt Nuke __HAVE_BITENDIAN_BITOPS
 1.1.2.23  02-Apr-2007  rmind - Move the ccpu sysctl back to the scheduler-independent part.
- Move the scheduler-independent parts of 4BSD's schedcpu() to
kern_synch.c.
- Add scheduler-specific hook to satisfy individual scheduler's
needs.
- Remove autonice, which is archaic and not useful.

Patch provided by Daniel Sieger.
 1.1.2.22  24-Mar-2007  rmind sched_nextlwp: Remove struct lwp * argument, it is no longer needed.
Note by yamt@
 1.1.2.21  24-Mar-2007  yamt kill caddr_t.
 1.1.2.20  24-Mar-2007  yamt initialize ci->ci_schedstate.spc_mutex of APs.
(sched_rqinit is called before APs are attached.)
 1.1.2.19  24-Mar-2007  rmind Checkpoint:
- Abstract for per-CPU locking of runqueues.
As a workaround for the SCHED_4BSD global runqueue, covered by sched_mutex,
spc_mutex is a pointer for now. After making SCHED_4BSD runqueues
per-CPU, it will become a storage mutex.
- suspendsched: Locking is not necessary for cpu_need_resched().
- Remove mutex_spin_exit() prototype in patch.c and LOCK_ASSERT() check
in runqueue_nextlwp() in sched_4bsd.c to make them compile again.
 1.1.2.18  23-Mar-2007  yamt KNF.
 1.1.2.17  23-Mar-2007  yamt reduce number of #ifdef.
 1.1.2.16  23-Mar-2007  yamt - put sched_qs and sched_whichqs into a single structure.
tweak relevant functions to take a pointer to the structure.
- kill __HAVE_MD_RUNQUEUE. XXX we need fls(9) for __HAVE_BIGENDIAN_BITOPS.
 1.1.2.15  23-Mar-2007  yamt make several things static, so that they can't be abused.
 1.1.2.14  23-Mar-2007  yamt KNF.
 1.1.2.13  20-Mar-2007  yamt - revive schedclock and rename sched_clock to sched_schedclock.
(yes, a poor name...)
make schedclock check if curlwp is idle.
- statclock: in the case of schedhz==0, call schedclock periodically,
regardless of idleness.
- fix a comment. (don't assume schedhz==0.)
 1.1.2.12  17-Mar-2007  rmind sched_switch() -> sched_nextlwp()
 1.1.2.11  17-Mar-2007  rmind Do not do an implicit enqueue in sched_switch(), move enqueueing back to
the dispatcher. Rename sched_switch() back to sched_nextlwp(). Add a new
argument to sched_enqueue(), which indicates that the call comes from mi_switch().

Requested by yamt@
 1.1.2.10  10-Mar-2007  rmind Create kern.sched.name correctly.
Pointed out by Daniel Sieger.
 1.1.2.9  09-Mar-2007  rmind Checkpoint:

- Addition of scheduler-specific pointers in the struct proc, lwp and
schedstate_percpu.
- Addition of sched_lwp_fork(), sched_lwp_exit() and sched_slept() hooks.
- mi_switch() now has only one argument.
- sched_nextlwp(void) becomes sched_switch(struct lwp *) and does an
enqueueing of the LWP.
- Addition of general kern.sched sysctl node.
- Remove twice called uvmexp.swtch++, other cleanups.

Discussed on tech-kern@
 1.1.2.8  27-Feb-2007  yamt - sync with head.
- move sched_changepri back to kern_synch.c as it doesn't know PPQ anymore.
 1.1.2.7  26-Feb-2007  yamt add an "immediate" flag for cpu_need_resched(). suggested by Andrew Doran.
 1.1.2.6  23-Feb-2007  yamt - introduce sys/cpu.h which has cpu_idle and cpu_need_resched.
- use it where appropriate.
- while i'm here, remove several unnecessary #include.
 1.1.2.5  23-Feb-2007  yamt remove an SCHED_4BSD #ifdef (schedclock). from Daniel Sieger.
discussed on tech-kern@.
 1.1.2.4  21-Feb-2007  yamt update comments and panic messages.
 1.1.2.3  21-Feb-2007  yamt whitespace.
 1.1.2.2  21-Feb-2007  yamt remove some unnecessary #include.
 1.1.2.1  20-Feb-2007  rmind General Common Scheduler Framework (CSF) patch import. Huge thanks to
Daniel Sieger <dsieger at TechFak.Uni-Bielefeld de> for this work.

Short abstract: Split the dispatcher from the scheduler in order to
make the scheduler more modular. Introduce initial API for other
schedulers' implementations.

Discussed in tech-kern@
OK: yamt@, ad@

Note: further work will go soon.
 1.3.8.4  09-Dec-2007  jmcneill Sync with HEAD.
 1.3.8.3  06-Nov-2007  joerg Sync with HEAD.
 1.3.8.2  26-Oct-2007  joerg Sync with HEAD.

Follow the merge of pmap.c on i386 and amd64 and move
pmap_init_tmp_pgtbl into arch/x86/x86/pmap.c. Modify the ACPI wakeup
code to restore CR4 before jumping back into kernel space as the large
page option might cover that.
 1.3.8.1  04-Aug-2007  jmcneill Sync with HEAD.
 1.3.4.1  15-Aug-2007  skrll Sync with HEAD.
 1.3.2.2  11-Jul-2007  mjf Sync with head.
 1.3.2.1  09-Jul-2007  mjf file sched_4bsd.c was added on branch mjf-ufs-trans on 2007-07-11 20:10:00 +0000
 1.4.8.2  04-Aug-2007  ad Add cpuctl(8). For now this is not much more than a toy for debugging and
benchmarking that allows taking CPUs online/offline.
 1.4.8.1  04-Aug-2007  ad file sched_4bsd.c was added on branch matt-mips64 on 2007-08-04 11:03:03 +0000
 1.4.6.1  14-Oct-2007  yamt sync with head.
 1.4.4.8  17-Mar-2008  yamt sync with head.
 1.4.4.7  27-Feb-2008  yamt sync with head.
 1.4.4.6  21-Jan-2008  yamt sync with head
 1.4.4.5  07-Dec-2007  yamt sync with head
 1.4.4.4  15-Nov-2007  yamt sync with head.
 1.4.4.3  27-Oct-2007  yamt sync with head.
 1.4.4.2  03-Sep-2007  yamt sync with head.
 1.4.4.1  04-Aug-2007  yamt file sched_4bsd.c was added on branch yamt-lazymbuf on 2007-09-03 14:40:59 +0000
 1.4.2.3  23-Mar-2008  matt sync with HEAD
 1.4.2.2  09-Jan-2008  matt sync with HEAD
 1.4.2.1  06-Nov-2007  matt sync with HEAD
 1.7.4.4  18-Feb-2008  mjf Sync with HEAD.
 1.7.4.3  27-Dec-2007  mjf Sync with HEAD.
 1.7.4.2  08-Dec-2007  mjf Sync with HEAD.
 1.7.4.1  19-Nov-2007  mjf Sync with HEAD.
 1.7.2.1  13-Nov-2007  bouyer Sync with HEAD
 1.8.2.2  26-Dec-2007  ad Sync with head.
 1.8.2.1  08-Dec-2007  ad Sync with head.
 1.9.4.2  19-Jan-2008  bouyer Sync with HEAD
 1.9.4.1  02-Jan-2008  bouyer Sync with HEAD
 1.13.6.3  17-Jan-2009  mjf Sync with HEAD.
 1.13.6.2  02-Jun-2008  mjf Sync with HEAD.
 1.13.6.1  03-Apr-2008  mjf Sync with HEAD.
 1.13.2.1  24-Mar-2008  keiichi sync with head.
 1.19.2.2  04-Jun-2008  yamt sync with head
 1.19.2.1  18-May-2008  yamt sync with head.
 1.20.2.3  20-Jun-2009  yamt sync with head
 1.20.2.2  04-May-2009  yamt sync with head.
 1.20.2.1  16-May-2008  yamt sync with head.
 1.21.2.2  10-Oct-2008  skrll Sync with HEAD.
 1.21.2.1  23-Jun-2008  wrstuden Sync w/ -current. 34 merge conflicts to follow.
 1.23.4.1  19-Oct-2008  haad Sync with HEAD.
 1.24.10.1  06-Jun-2009  bouyer branches: 1.24.10.1.2;
Pull up following revision(s) (requested by rmind in ticket #791):
sys/kern/sched_4bsd.c: revision 1.25
sched_pstats_hook: fix estcpu decay.
this makes my desktop usable when running "make -j4".
 1.24.10.1.2.1  21-Apr-2010  matt sync to netbsd-5
 1.24.8.1  23-Jul-2009  jym Sync with HEAD.
 1.24.4.1  06-Jun-2009  bouyer Pull up following revision(s) (requested by rmind in ticket #791):
sys/kern/sched_4bsd.c: revision 1.25
sched_pstats_hook: fix estcpu decay.
this makes my desktop usable when running "make -j4".
 1.25.6.1  06-Jun-2011  jruoho Sync with HEAD.
 1.25.4.1  21-Apr-2011  rmind sync with head
 1.27.2.2  22-May-2014  yamt sync with head.

for a reference, the tree before this commit was tagged
as yamt-pagecache-tag8.

this commit was split into small chunks to avoid
a limitation of cvs. ("Protocol error: too many arguments")
 1.27.2.1  17-Apr-2012  yamt sync with head
 1.28.12.1  18-May-2014  rmind sync with head
 1.28.8.2  03-Dec-2017  jdolecek update from HEAD
 1.28.8.1  20-Aug-2014  tls Rebase to HEAD as of a few days ago.
 1.29.2.1  10-Aug-2014  tls Rebase.
 1.30.20.1  26-Jul-2017  martin Pull up following revision(s) (requested by maxv in ticket #158):
sys/kern/sched_4bsd.c: revision 1.31-1.33

explain a bit
-
Revert rev1.26. l_estcpu is increased by only one cpu, not all of them.
-
Should be loadfactor().
 1.30.4.1  28-Aug-2017  skrll Sync with HEAD
 1.33.6.2  08-Apr-2020  martin Merge changes from current as of 20200406
 1.33.6.1  10-Jun-2019  christos Sync with HEAD
 1.33.4.2  06-Sep-2018  pgoyette Sync with HEAD

Resolve a couple of conflicts (result of the uimin/uimax changes)
 1.33.4.1  28-Jul-2018  pgoyette Sync with HEAD
 1.41.2.1  17-Jan-2020  ad Sync with head.
