History log of /src/sys/arch/x86/include/pmap_pv.h

Revision | Date | Author | Comments

1.17 | 17-Mar-2020 | ad | Hallelujah, the bug has been found. Resurrect the prior changes, to be fixed with the following commit.

1.16 | 17-Mar-2020 | ad | Back out the recent pmap changes until I can figure out what is going on with pmap_page_remove() (to pmap.c rev 1.365).

1.15 | 15-Mar-2020 | ad |
- pmap_enter(): Remove cosmetic differences between the EPT & native cases. Remove old code to free PVEs that should not have been there and caused panics (a merge error moving between source trees on my part).
- pmap_destroy(): pmap_remove_all() doesn't work for EPT yet, so catch up on deferred PTP frees manually in the EPT case.
- pp_embedded: Remove it. It's one more variable to go wrong and another store to be made. Just check for a non-zero PTP pointer & a non-zero VA instead.

1.14 | 14-Mar-2020 | ad |
PR kern/55071 (Panic shortly after running X11 due to kernel diagnostic assertion "mutex_owned(&pp->pp_lock)"):
- Fix a locking bug in pmap_pp_clear_attrs(), and in pmap_pp_remove() do the TLB shootdown while still holding the target pmap's lock.
Also:
- Finish PV list locking for x86 & update comments around same.
- Keep track of the min/max index of PTEs inserted into each PTP, and use that to clip ranges of VAs passed to pmap_remove_ptes().
- Based on the above, implement a pmap_remove_all() for x86 that clears out the pmap in a single pass. Makes exit() / fork() much cheaper.

1.13 | 10-Mar-2020 | ad |
- pmap_check_inuse() is expensive, so make it DEBUG, not DIAGNOSTIC.
- Put PV locking back in place with only a minor performance impact. pmap_enter() still needs more work: it's not easy to satisfy all the competing requirements, so I'll do that with another change.
- Use pmap_find_ptp() (lookup only) in preference to pmap_get_ptp() (alloc). Make pm_ptphint indexed by VA, not PA.
- Replace the per-pmap radixtree for dynamic PV entries with a per-PTP rbtree. Cuts system time during a kernel build by ~10% for me.

1.12 | 23-Feb-2020 | ad | The PV locking changes are expensive and not needed yet, so back them out for the moment. I want to find a cheaper approach.

1.11 | 23-Feb-2020 | ad |
UVM locking changes, proposed on tech-kern:
- Change the lock on uvm_object, vm_amap and vm_anon to be a RW lock.
- Break v_interlock and vmobjlock apart. v_interlock remains a mutex.
- Do partial PV list locking in the x86 pmap. Others to follow later.

1.10 | 12-Jan-2020 | ad |
x86 pmap:
- It turns out that every page the pmap frees is necessarily zeroed. Tell the VM system about this and use the pmap as a source of pre-zeroed pages.
- Redo deferred freeing of PTPs more elegantly, including the integration with pmap_remove_all(). This fixes problems with nvmm, and possibly also a crash discovered during fuzzing.
Reported-by: syzbot+a97186518c84f1d85c0c@syzkaller.appspotmail.com

1.9 | 04-Jan-2020 | ad | branches: 1.9.2;
x86 pmap improvements, reducing system time during a build by about 15% on my test machine:
- Replace the global pv_hash with a per-pmap record of dynamically allocated pv entries. The data structure used for this can be changed easily and has no special concurrency requirements. For now, go with a radixtree.
- Change pmap_pdp_cache back into a pool; cache the page directory with the pmap, and avoid contention on pmaps_lock by adjusting the global list in the pool_cache ctor & dtor. Align struct pmap and its lock, and update some comments.
- Simplify pv_entry lists slightly. Allow both PP_EMBEDDED and dynamically allocated entries to co-exist on a single page. This adds a pointer to struct vm_page on x86, but shrinks pv_entry to 32 bytes (which also gets it nicely aligned).
- More elegantly solve the chicken-and-egg problem introduced into the pmap with radixtree lookup for pages, where we need PTEs mapped and page allocations to happen under a single hold of the pmap's lock. While here, undo some cut-n-paste.
- Don't adjust pmap_kernel's stats with atomics, because its mutex is now held in the places where the stats are changed.

1.8 | 02-Jan-2020 | ad | Back the pv_hash stuff out. Now seeing errors from ATOMIC_*. For another day.

1.7 | 02-Jan-2020 | ad |
Replace the pv_hash_locks with atomic ops.
Leave the hash table at the same size for now: with the hash table size doubled, system time for a build drops 10-15%, but user time starts to rise suspiciously, presumably because the cache is wrecked. Need to try another data structure.

1.6 | 13-Nov-2019 | maxv | Rename, for consistency: PP_ATTRS_M -> PP_ATTRS_D, PP_ATTRS_U -> PP_ATTRS_A.

1.5 | 09-Mar-2019 | maxv | Start replacing the x86 PTE bits.

1.4 | 01-Feb-2019 | maxv |
Change the format of the pp_attrs field: instead of using PTE bits directly, use abstracted bits that are converted from/to PTE bits when needed (in pmap_sync_pv). This allows us to use the same pp_attrs for pmaps that have PTE bits at different locations.

1.3 | 12-Jun-2011 | rmind | branches: 1.3.54;
Welcome to 5.99.53! Merge the rmind-uvmplock branch:
- Reorganize locking in UVM and provide extra serialisation for pmap(9). New lock order: [vmpage-owner-lock] -> pmap-lock.
- Simplify locking in some pmap(9) modules by removing P->V locking.
- Use a lock object on vmobjlock (and thus vnode_t::v_interlock) to share the locks amongst UVM objects where necessary (tmpfs, layerfs, unionfs).
- Rewrite and optimise the x86 TLB shootdown code, making it simpler and cleaner. Add a TLBSTATS option for x86 to collect statistics about TLB shootdowns.
- Unify /dev/mem et al. in MI code and provide the required locking (removes the kernel-lock on some ports). Also, avoid cache-aliasing issues.
Thanks to Andrew Doran and Joerg Sonnenberger, as their initial patches formed the core changes of this branch.

1.2 | 28-Jan-2008 | yamt | branches: 1.2.2; 1.2.10; 1.2.28; 1.2.36; 1.2.46;
Save a word in pv_entry by making pv_hash an SLIST. Although this can slow down pmap_sync_pv if hash lists get long, we should keep them short anyway.

1.1 | 20-Jan-2008 | yamt | branches: 1.1.2; 1.1.4;
- Rewrite P->V tracking.
- Use a hash rather than SPLAY trees. A SPLAY tree is the wrong algorithm to use here; this will be revisited if it slows down anything other than micro-benchmarks.
- Optimize the single-mapping case (it's a common case) by embedding an entry into mdpage.
- Don't keep a pmap pointer, as it can be obtained from the ptp. (Discussed on port-i386 some years ago.) Ideally, a single paddr_t should be enough to describe a pte, but that needs some more thought as it can increase computational costs.
- pmap_enter: Simplify and fix races with pmap_sync_pv.
- Don't bother to lock pm_obj[i] where i > 0, unless DIAGNOSTIC.
- Kill mp_link to save space.
- Add many KASSERTs.

1.1.4.3 | 04-Feb-2008 | yamt | Sync with head.

1.1.4.2 | 21-Jan-2008 | yamt | Sync with head.

1.1.4.1 | 20-Jan-2008 | yamt | file pmap_pv.h was added on branch yamt-lazymbuf on 2008-01-21 09:40:09 +0000

1.1.2.2 | 20-Jan-2008 | bouyer | Sync with HEAD.

1.1.2.1 | 20-Jan-2008 | bouyer | file pmap_pv.h was added on branch bouyer-xeni386 on 2008-01-20 17:51:26 +0000

1.2.46.1 | 23-Jun-2011 | cherry | Catch up with the rmind-uvmplock merge.

1.2.36.1 | 25-Apr-2010 | rmind | Drop per-"MD page" (i.e. struct pmap_page) locking, i.e. pp_lock/pp_unlock, and rely on the locking provided by the upper layer, UVM. Sprinkle asserts.

1.2.28.1 | 27-Aug-2011 | jym | Sync with HEAD. Most notably: the uvm/pmap work done by rmind@, and the MP Xen work of cherry@. No regression observed on suspend/restore.

1.2.10.2 | 23-Mar-2008 | matt | Sync with HEAD.

1.2.10.1 | 28-Jan-2008 | matt | file pmap_pv.h was added on branch matt-armv6 on 2008-03-23 02:04:28 +0000

1.2.2.2 | 18-Feb-2008 | mjf | Sync with HEAD.

1.2.2.1 | 28-Jan-2008 | mjf | file pmap_pv.h was added on branch mjf-devfs on 2008-02-18 21:05:17 +0000

1.3.54.2 | 13-Apr-2020 | martin | Mostly merge changes from HEAD up to 20200411.

1.3.54.1 | 10-Jun-2019 | christos | Sync with HEAD.

1.9.2.2 | 29-Feb-2020 | ad | Sync with head.

1.9.2.1 | 17-Jan-2020 | ad | Sync with head.