History log of /src/sys/uvm/uvm_readahead.c
Revision    Date    Author    Comments
 1.16  23-Sep-2023  ad Reapply this change with a couple of bugs fixed:

- Do away with separate pool_cache for some kernel objects that have no special
requirements and use the general purpose allocator instead. On one of my
test systems this makes for a small (~1%) but repeatable reduction in system
time during builds presumably because it decreases the kernel's cache /
memory bandwidth footprint a little.
- vfs_lockf: cache a pointer to the uidinfo and put mutex in the data segment.
 1.15  12-Sep-2023  ad Back out recent change to replace pool_cache with the general allocator.
Will return to this when I have time again.
 1.14  10-Sep-2023  ad - Do away with separate pool_cache for some kernel objects that have no special
requirements and use the general purpose allocator instead. On one of my
test systems this makes for a small (~1%) but repeatable reduction in system
time during builds presumably because it decreases the kernel's cache /
memory bandwidth footprint a little.
- vfs_lockf: cache a pointer to the uidinfo and put mutex in the data segment.
 1.13  19-May-2020  ad Drop & re-acquire vmobjlock less often.
 1.12  08-Mar-2020  ad Only need a read lock for uvm_pagelookup().
 1.11  23-Feb-2020  ad UVM locking changes, proposed on tech-kern:

- Change the lock on uvm_object, vm_amap and vm_anon to be a RW lock.
- Break v_interlock and vmobjlock apart. v_interlock remains a mutex.
- Do partial PV list locking in the x86 pmap. Others to follow later.
 1.10  19-May-2018  jdolecek branches: 1.10.2; 1.10.8;
adjust heuristics for read-ahead to skip the full read-ahead when last page of
the range is already cached; this speeds up I/O from cache, since it avoids
the lookup and allocation overhead

on my system I observed 4.5% - 15% improvement for cached I/O - from 2.2 GB/s to
2.3 GB/s for cached reads using non-direct UBC, and from 5.6 GB/s to 6.5 GB/s
for UBC using direct map

part of PR kern/53124
 1.9  30-Mar-2018  mlelstv Increase UVM read ahead window limit a bit to match concurrency of reading
from the raw device.
 1.8  12-Jun-2011  rmind branches: 1.8.12; 1.8.52;
Welcome to 5.99.53! Merge rmind-uvmplock branch:

- Reorganize locking in UVM and provide extra serialisation for pmap(9).
New lock order: [vmpage-owner-lock] -> pmap-lock.

- Simplify locking in some pmap(9) modules by removing P->V locking.

- Use lock object on vmobjlock (and thus vnode_t::v_interlock) to share
the locks amongst UVM objects where necessary (tmpfs, layerfs, unionfs).

- Rewrite and optimise x86 TLB shootdown code, make it simpler and cleaner.
Add TLBSTATS option for x86 to collect statistics about TLB shootdowns.

- Unify /dev/mem et al in MI code and provide required locking (removes
kernel-lock on some ports). Also, avoid cache-aliasing issues.

Thanks to Andrew Doran and Joerg Sonnenberger, as their initial patches
formed the core changes of this branch.
 1.7  15-Oct-2010  tsutsui branches: 1.7.6;
Make common kernel module binaries work on both sun3 and sun3x.
Tested on 3/160 (on TME) and (real) 3/80.

XXX: module files can be loaded only on single user?
 1.6  10-Jun-2009  yamt branches: 1.6.2; 1.6.4;
- add a function to perform explicit read-ahead.
- ra_startio: tweak locking a bit.
 1.5  02-Jan-2008  ad branches: 1.5.10; 1.5.24;
Merge vmlocking2 to head.
 1.4  11-May-2007  tsutsui branches: 1.4.8; 1.4.14; 1.4.16; 1.4.20;
Add temporary workaround for PR kern/36019 (panic on sun2 and sun3).
Ok'ed by yamt.
 1.3  12-Mar-2007  ad branches: 1.3.2;
Pass an ipl argument to pool_init/POOL_INIT to be used when initializing
the pool's lock.
 1.2  29-Nov-2005  yamt branches: 1.2.2; 1.2.20; 1.2.28; 1.2.30; 1.2.34;
add files I forgot to add when merging the yamt-readahead branch.
 1.1  15-Nov-2005  yamt branches: 1.1.2;
file uvm_readahead.c was initially added on branch yamt-readahead.
 1.1.2.16  22-Nov-2005  yamt comments.
 1.1.2.15  22-Nov-2005  yamt make ractx_pool static.
 1.1.2.14  22-Nov-2005  yamt comments.
 1.1.2.13  20-Nov-2005  yamt uvm_ra_request: fix an off-by-one error.
 1.1.2.12  20-Nov-2005  yamt uvm_ra_request: don't shrink window when reading the same chunk repeatedly.
 1.1.2.11  19-Nov-2005  yamt - as read-ahead context is per-vnode now,
there are fewer reasons to make VOP_READ call uvm_ra_request explicitly.
move it to pager (uvn_get) so that it can handle accesses via mmap as well.
- pass advice to pager via ubc.
- tweak DPRINTF.

XXX can be disturbed by PGO_LOCKED.

XXX it's controversial where it should be done.
(uvm_fault, uvn_get or genfs_getpages.)
 1.1.2.10  19-Nov-2005  yamt ra_startio: don't bother to read busy chunk again and again.
 1.1.2.9  18-Nov-2005  yamt - associate read-ahead context to vnode, rather than file.
- revert VOP_READ prototype.
 1.1.2.8  17-Nov-2005  yamt correct a comment.
 1.1.2.7  17-Nov-2005  yamt use UVM_ADV_ rather than POSIX_FADV_.
 1.1.2.6  17-Nov-2005  yamt use DPRINTF rather than explicit #ifdef. suggested by Chuck Silvers.
 1.1.2.5  17-Nov-2005  yamt comments.
 1.1.2.4  15-Nov-2005  yamt fix a reversed condition in the previous.
 1.1.2.3  15-Nov-2005  yamt - #ifdef out debug printf.
- an assertion.
 1.1.2.2  15-Nov-2005  yamt add posix_fadvise.
 1.1.2.1  15-Nov-2005  yamt add simple readahead routines.
 1.2.34.3  08-Jun-2007  ad Sync with head.
 1.2.34.2  13-Mar-2007  ad Pull in the initial set of changes for the vmlocking branch.
 1.2.34.1  13-Mar-2007  ad Sync with head.
 1.2.30.2  17-May-2007  yamt sync with head.
 1.2.30.1  24-Mar-2007  yamt sync with head.
 1.2.28.1  13-May-2007  pavel Pull up following revision(s) (requested by tsutsui in ticket #641):
sys/uvm/uvm_readahead.c: revision 1.4
Add temporary workaround for PR kern/36019 (panic on sun2 and sun3).
Ok'ed by yamt.
 1.2.20.4  21-Jan-2008  yamt sync with head
 1.2.20.3  03-Sep-2007  yamt sync with head.
 1.2.20.2  21-Jun-2006  yamt sync with head.
 1.2.20.1  29-Nov-2005  yamt file uvm_readahead.c was added on branch yamt-lazymbuf on 2006-06-21 15:12:40 +0000
 1.2.2.2  11-Dec-2005  christos Sync with head.
 1.2.2.1  29-Nov-2005  christos file uvm_readahead.c was added on branch ktrace-lwp on 2005-12-11 10:29:42 +0000
 1.3.2.1  11-Jul-2007  mjf Sync with head.
 1.4.20.1  02-Jan-2008  bouyer Sync with HEAD
 1.4.16.2  18-Dec-2007  ad Lock readahead context using the associated object's lock.
 1.4.16.1  04-Dec-2007  ad Pull the vmlocking changes into a new branch.
 1.4.14.1  18-Feb-2008  mjf Sync with HEAD.
 1.4.8.1  09-Jan-2008  matt sync with HEAD
 1.5.24.1  23-Jul-2009  jym Sync with HEAD.
 1.5.10.1  20-Jun-2009  yamt sync with head
 1.6.4.2  05-Mar-2011  rmind sync with head
 1.6.4.1  16-Mar-2010  rmind Change struct uvm_object::vmobjlock to be dynamically allocated with
mutex_obj_alloc(). It allows us to share the locks among UVM objects.
 1.6.2.1  22-Oct-2010  uebayasi Sync with HEAD (-D20101022).
 1.7.6.1  23-Jun-2011  cherry Catchup with rmind-uvmplock merge.
 1.8.52.2  21-May-2018  pgoyette Sync with HEAD
 1.8.52.1  07-Apr-2018  pgoyette Sync with HEAD. 77 conflicts resolved - all of them $NetBSD$
 1.8.12.3  09-Oct-2012  bouyer Redo previous: it seems that the point of the bytelen computation was to
get transfers aligned to chunksz. So reintroduce the code, but using chunksz
instead of chunksize (if the readahead is truncated there's no point in
trying to align it anyway).
Now I get 64k read requests at the drive level again.
 1.8.12.2  09-Oct-2012  bouyer Fix panic "bad chunksize ..." in read-ahead code:
- off comes from the pager, so should already be page-aligned.
KASSERT() that it is, and remove the off = trunc_page(off)
- as off is not changed any more, the size of the transfer is chunksize.
Don't compute bytelen any more, which is what required chunksize
to be a power of 2. KASSERT() that chunksize is a multiple of page size.
 1.8.12.1  12-Sep-2012  tls Initial snapshot of work to eliminate 64K MAXPHYS. Basically works for
physio (I/O to raw devices); needs more doing to get it going with the
filesystems, but it shouldn't damage data.

All work's been done on amd64 so far. Not hard to add support to other
ports. If others want to pitch in, one very helpful thing would be to
sort out when and how IDE disks can do 128K or larger transfers, and
adjust the various PCI IDE (or at least ahcisata) drivers and wd.c
accordingly -- it would make testing much easier. Another very helpful
thing would be to implement a smart minphys() for RAIDframe along the
lines detailed in the MAXPHYS-NOTES file.
 1.10.8.1  29-Feb-2020  ad Sync with head.
 1.10.2.1  08-Apr-2020  martin Merge changes from current as of 20200406
