History log of /src/sys/kern/subr_kmem.c
Revision | Date | Author | Comments
1.89 |
| 10-Sep-2023 |
ad | Assert that kmem_alloc() provides the expected alignment.
|
1.88 |
| 09-Apr-2023 |
riastradh | kmem(9): Tweak branch predictions in fast paths.
|
1.87 |
| 30-May-2022 |
mrg | re-do previous - it likely broke kmem cache init.
use {0} for zero sentinel.
|
1.86 |
| 30-May-2022 |
mrg | apply some missing #ifn?def KDTRACE_HOOKS from the previous.
|
1.85 |
| 30-May-2022 |
riastradh | kmem(9): Create dtrace sdt probes for each kmem cache size.
The names of the probes correspond to the names shown in vmstat -m. This should make it much easier to track down who's allocating memory when there's a leak, e.g. by getting a histogram of stack traces for the matching kmem cache pool:
# vmstat -m
Memory resource pool statistics
Name        Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
...
kmem-00128   256    62242    0        0  3891     0  3891  3891     0   inf    0
...
# dtrace -n 'sdt:kmem:*:kmem-00128 { @[probefunc, stack()] = count() }'
^C
When there's no leak, the allocs and frees (probefunc) will be roughly matched; when there's a leak, the allocs will far outnumber the frees.
|
1.84 |
| 12-Mar-2022 |
riastradh | kmem(9): Show the pointer in kmem_free(..., 0) assertion like before.
|
1.83 |
| 12-Mar-2022 |
riastradh | kmem(9): Make kmem_alloc and kmem_free agree about rejecting zero.
Let's do both as KASSERT, unless there's a good reason to make them both do an unconditional if/panic even in release builds.
|
1.82 |
| 06-Feb-2021 |
joerg | Do not cast memcpy arguments when the intention is unaligned access. The standard is pretty explicit that misaligned pointers are UB, and LLVM does exploit the promised alignment on SPARC, resulting in kernel crashes during early boot.
|
1.81 |
| 24-Jan-2021 |
thorpej | Add kmem_tmpbuf_alloc(), a utility function for allocating memory for temporary use where allocation on the stack is desirable, but only up to a certain size. If the requested size fits within the specified stack buffer, the stack buffer is returned. Otherwise, memory is allocated with kmem_alloc(). Add a corresponding kmem_tmpbuf_free() function that frees the memory using kmem_free() if it is not the temporary stack buffer location.
|
1.80 |
| 14-May-2020 |
maxv | branches: 1.80.2; KASSERT -> panic
|
1.79 |
| 08-Mar-2020 |
ad | KMEM_SIZE: append the size_t to the allocated buffer, rather than prepending, so it doesn't screw up the alignment of the buffer.
Reported-by: syzbot+c024c50570cccac51532@syzkaller.appspotmail.com
|
1.78 |
| 25-Jan-2020 |
ad | - Pad kmem cache names with zeros so vmstat -m and -C are readable.
- Exclude caches with size not a factor or multiple of the coherency unit.
Proposed on tech-kern@. Also:
Reported-by: syzbot+c024c50570cccac51532@syzkaller.appspotmail.com
|
1.77 |
| 14-Nov-2019 |
maxv | branches: 1.77.2; Add support for Kernel Memory Sanitizer (kMSan). It detects uninitialized memory used by the kernel at run time, and just like kASan and kCSan, it is an excellent feature. It has already detected 38 uninitialized variables in the kernel during my testing, which I have since discreetly fixed.
We use two shadows:
- "shad", to track uninitialized memory with a bit granularity (1:1). Each bit set to 1 in the shad corresponds to one uninitialized bit of real kernel memory.
- "orig", to track the origin of the memory with a 4-byte granularity (1:1). Each uint32_t cell in the orig indicates the origin of the associated uint32_t of real kernel memory.
The memory consumption of these shadows is considerable, so at least 4GB of RAM is recommended to run kMSan.
The compiler inserts calls to specific __msan_* functions on each memory access, to manage both the shad and the orig and detect uninitialized memory accesses that change the execution flow (like an "if" on an uninitialized variable).
We mark as uninit several types of memory buffers (stack, pools, kmem, malloc, uvm_km), and check each buffer passed to copyout, copyoutstr, bwrite, if_transmit_lock and DMA operations, to detect uninitialized memory that leaves the system. This allows us to detect kernel info leaks in a way that is more efficient and also more user-friendly than KLEAK.
Contrary to kASan, kMSan requires comprehensive coverage, i.e. we cannot tolerate having a single non-instrumented function, because this could cause false positives. kMSan cannot instrument ASM functions, so I converted most of them to __asm__ inlines, which kMSan is able to instrument. Those that remain receive special treatment.
Contrary to kASan again, kMSan uses a TLS, so we must context-switch this TLS during interrupts. We use different contexts depending on the interrupt level.
The orig tracks precisely the origin of a buffer. We use a special encoding for the orig values, and pack together in each uint32_t cell of the orig:
- a code designating the type of memory (Stack, Pool, etc), and
- a compressed pointer, which points either (1) to a string containing the name of the variable associated with the cell, or (2) to an area in the kernel .text section which we resolve to a symbol name + offset.
This encoding allows us not to consume extra memory for associating information with each cell, and produces a precise output, that can tell for example the name of an uninitialized variable on the stack, the function in which it was pushed on the stack, and the function where we accessed this uninitialized variable.
kMSan is available with LLVM, but not with GCC.
The code is organized in a way that is similar to kASan and kCSan, so it means that other architectures than amd64 can be supported.
|
1.76 |
| 15-Aug-2019 |
maxv | Retire KMEM_GUARD. It has been superseded by kASan, which is much more powerful, has much more coverage - far beyond just kmem(9) -, and also consumes less memory.
KMEM_GUARD was a debug-only option that required special DDB tweaking, and had no use in releases or even diagnostic kernels.
As a general rule, the policy now is to harden the pool layer by default in GENERIC, and use kASan as a diagnostic/debug/fuzzing feature to verify each memory allocation & access in the system.
|
1.75 |
| 07-Apr-2019 |
maxv | Provide a code argument in kasan_mark(), and give a code to each caller. Five codes used: GenericRedZone, MallocRedZone, KmemRedZone, PoolRedZone, and PoolUseAfterFree.
This can greatly help debugging complex memory corruptions.
|
1.74 |
| 26-Mar-2019 |
maxv | Remove unneeded PR_NOALIGN, pool_allocator_kmem is already page-aligned.
|
1.73 |
| 04-Feb-2019 |
maxv | Clobber the size when freeing a buffer. This way, if the same buffer gets freed twice, the second size check will fire.
|
1.72 |
| 23-Dec-2018 |
maxv | Simplify the KASAN API, use only kasan_mark() and explain briefly. The alloc/free naming was too confusing.
|
1.71 |
| 22-Aug-2018 |
christos | - opt_kasan.h is included from <sys/asan.h>
- now that we are not using inlines, we need one more ifdef.
|
1.70 |
| 22-Aug-2018 |
maxv | Reduce the number of KASAN ifdefs, suggested by Christos/Taylor.
|
1.69 |
| 20-Aug-2018 |
maxv | Add support for kASan on amd64. Written by me, with some parts inspired from Siddharth Muralee's initial work. This feature can detect several kinds of memory bugs, and it's an excellent feature.
It can be enabled by uncommenting these three lines in GENERIC:
#makeoptions KASAN=1    # Kernel Address Sanitizer
#options KASAN
#no options SVS
The kernel is compiled without SVS, without DMAP and without PCPU area. A shadow area is created at boot time, and it can cover the upper 128TB of the address space. This area is populated gradually as we allocate memory. With this design the memory consumption is kept at its lowest level.
The compiler calls the __asan_* functions each time a memory access is done. We verify whether this access is legal by looking at the shadow area.
We declare our own special memcpy/memset/etc functions, because the compiler's builtins don't add the __asan_* instrumentation.
Initially all the mappings are marked as valid. During dynamic allocations, we add a redzone, which we mark as invalid. Any access on it will trigger a kASan error message. Additionally, the compiler adds a redzone on global variables, and we mark these redzones as invalid too. The illegal-access detection works with a 1-byte granularity.
For now, we cover three areas:
- global variables
- kmem_alloc-ated areas
- malloc-ated areas
More will come, but that's a good start.
|
1.68 |
| 20-Aug-2018 |
maxv | Compute the pointer earlier, not in the return statement. No functional change.
|
1.67 |
| 20-Aug-2018 |
maxv | Retire KMEM_REDZONE and KMEM_POISON.
KMEM_REDZONE is not very efficient and cannot detect read overflows. KASAN can, and will be used instead.
KMEM_POISON is enabled along with KMEM_GUARD, but it is redundant, since the latter can detect read UAFs contrary to the former. In fact maybe KMEM_GUARD should be retired too, because there are many cases where it doesn't apply.
Simplifies the code.
|
1.66 |
| 09-Jan-2018 |
christos | branches: 1.66.2; 1.66.4; add strndup and an alias to strdup.
|
1.65 |
| 09-Nov-2017 |
riastradh | Assert KM_SLEEP xor KM_NOSLEEP in all kmem allocation.
|
1.64 |
| 07-Nov-2017 |
christos | Add two utility functions to help use kmem with strings: kmem_strdupsize, kmem_strfree.
|
1.63 |
| 12-Apr-2017 |
christos | use opt_kmem.h for the KMEM_ variables.
|
1.62 |
| 29-Feb-2016 |
chs | branches: 1.62.2; 1.62.4; fix vmem_alloc() to never return an error for VM_SLEEP requests, thus fixing kmem_alloc() to never return NULL for KM_SLEEP requests. instead these operations will retry forever, which was the intent.
|
1.61 |
| 27-Jul-2015 |
maxv | Several changes and improvements in KMEM_GUARD:
- merge uvm_kmguard.{c,h} into subr_kmem.c. It is the only user, and this makes it more consistent. Also, it allows us to enable KMEM_GUARD without enabling DEBUG.
- rename uvm_kmguard_XXX to kmem_guard_XXX, for consistency
- improve kmem_guard_alloc() so that it supports allocations bigger than PAGE_SIZE
- remove the canary value, and directly use the kmem header as the underflow pattern
- fix some comments
(The UAF fifo is disabled for the moment; we actually need to register the va and its size, and add a weight support not to consume too much memory.)
|
1.60 |
| 22-Jul-2014 |
maxv | branches: 1.60.2; 1.60.4; Enable KMEM_REDZONE on DIAGNOSTIC. It will try to catch overflows.
No comment on tech-kern@
|
1.59 |
| 03-Jul-2014 |
maxv | Change the pattern of KMEM_REDZONE so that the first byte is never '\0'.
From me and lars@.
|
1.58 |
| 02-Jul-2014 |
maxv | Fix the KMEM_POISON check: it should check the whole buffer, otherwise some write-after-free's wouldn't be detected (those occurring in the 8 last bytes of the allocated buffer).
Was here before my changes, spotted by lars@.
|
1.57 |
| 01-Jul-2014 |
maxv | 1) Define a malloc(9)-like kmem_header structure for KMEM_SIZE. It is in fact more consistent, and more flexible (eg if we want to add new fields).
2) When I say "page" I actually mean "kmem page". It may not be clear, so replace it by "memory chunk" (suggested by lars@).
3) Minor changes for KMEM_REDZONE.
|
1.56 |
| 25-Jun-2014 |
maxv | 1) Make clear that we want the space allocated for the KMEM_SIZE header to be aligned, by using kmem_roundup_size(). There's no functional difference with the current MAX().
2) If there isn't enough space in the page padding for the red zone, allocate one more page, not just 2 bytes. We only poison 1 or 2 bytes in this page, depending on the space left in the previous page. That way 'allocsz' is properly aligned. Again, there's no functional difference since the shift already handles it correctly.
|
1.55 |
| 25-Jun-2014 |
maxv | Rephrase some comments and remove whitespaces. No functional change.
|
1.54 |
| 24-Jun-2014 |
maxv | KMEM_REDZONE+KMEM_POISON is supposed to detect buffer overflows. But it only poisons memory after kmem_roundup_size(), which means that if an overflow occurs in the page padding, it won't be detected.
Fix this by making KMEM_REDZONE independent from KMEM_POISON and making it put a 2-byte pattern at the end of each requested buffer, and check it when freeing memory to ensure the caller hasn't written outside the requested area.
Not enabled on DIAGNOSTIC for the moment.
|
1.53 |
| 23-Jun-2014 |
maxv | Enable KMEM_SIZE on DIAGNOSTIC. It will catch memory corruption bugs due to a different size given to kmem_alloc() and kmem_free(), with no performance impact.
|
1.52 |
| 22-Jun-2014 |
maxv | Put the KMEM_GUARD code under #if defined(KMEM_GUARD). No functional change.
|
1.51 |
| 25-Oct-2013 |
martin | branches: 1.51.2; Mark a diagnostic-only variable
|
1.50 |
| 22-Apr-2013 |
yamt | branches: 1.50.4;
- make debug size check more strict
- add comments about debug features
|
1.49 |
| 22-Apr-2013 |
yamt | whitespace
|
1.48 |
| 21-Apr-2013 |
uebayasi | Whitespace.
|
1.47 |
| 16-Apr-2013 |
para | Addresses PR/47512: properly return NULL for failed allocations, not 0x8, with size checks enabled.
|
1.46 |
| 21-Jul-2012 |
para | branches: 1.46.2;
- split the allocation lookup table to decrease overall memory used, making the allocator more flexible for allocations larger than 4kb
- move the encoded "size" under DEBUG back to the beginning of the allocated chunk
no objections on tech-kern@
|
1.45 |
| 15-Apr-2012 |
martin | We don't support KMEM_GUARD nor FREECHECK yet with rump, so disable them in debug builds of the rump kernel.
|
1.44 |
| 13-Apr-2012 |
mrg | allow kmem_guard_depth to be set in the config file.
|
1.43 |
| 01-Apr-2012 |
para | don't overallocate once we leave the caches
|
1.42 |
| 05-Feb-2012 |
rmind | branches: 1.42.2; - Make KMGUARD interrupt-safe. - kmem_intr_{alloc,free}: remove workaround.
Changes affect KMGUARD-enabled debug kernels only.
|
1.41 |
| 30-Jan-2012 |
rmind | Fix for KMEM_GUARD; do not use it from interrupt context.
|
1.40 |
| 28-Jan-2012 |
rmind | - Instead of kmem_cache_max, calculate max index and avoid a shift.
- Use __read_mostly and __cacheline_aligned.
- Make kmem_{intr_alloc,free} public.
- Misc.
|
1.39 |
| 27-Jan-2012 |
para | Extend vmem(9) to be able to allocate resources for its own needs. Simplify uvm_map handling (no special kernel entries anymore, no relocking). Make malloc(9) a thin wrapper around kmem(9) (with a private interface, for interrupt safety reasons).
releng@ acknowledged
|
1.38 |
| 20-Nov-2011 |
christos | branches: 1.38.2; simplify, no need for va_copy here. Add KASSERT.
|
1.37 |
| 20-Nov-2011 |
apb | Use va_copy to avoid undefined behaviour in handling the va_list arg.
|
1.36 |
| 02-Sep-2011 |
dyoung | branches: 1.36.2; Report vmem(9) errors out-of-band so that we can use vmem(9) to manage ranges that include the least and the greatest vmem_addr_t. Update vmem(9) uses throughout the kernel. Slightly expand on the tests in subr_vmem.c, which still pass. I've been running a kernel with this patch without any trouble.
|
1.35 |
| 17-Jul-2011 |
joerg | Retire varargs.h support. Move machine/stdarg.h logic into MI sys/stdarg.h and expect compiler to provide proper builtins, defaulting to the GCC interface. lint still has a special fallback. Reduce abuse of _BSD_VA_LIST_ by defining __va_list by default and derive va_list as required by standards.
|
1.34 |
| 17-Feb-2011 |
matt | Init kmem_guard_depth to 0 so it will be placed in .data so it can be patched with gdb.
|
1.33 |
| 11-Feb-2010 |
haad | branches: 1.33.2; 1.33.4; 1.33.6; Add the kmem_asprintf routine, which allocates a string from the kmem pool according to a format string. The allocated string is the string length + 1 char for the terminating zero.
Ok: ad@.
|
1.32 |
| 31-Jan-2010 |
skrll | branches: 1.32.2; 1 CTASSERT(foo) is enough for anyone.
|
1.31 |
| 04-Jan-2010 |
uebayasi | Use CTASSERT() for constant only assertions.
|
1.30 |
| 12-Oct-2009 |
yamt | constify
|
1.29 |
| 12-Oct-2009 |
yamt | fix KMEM_SIZE vs KMEM_GUARD
|
1.28 |
| 03-Jun-2009 |
jnemeth | add KASSERT(p != NULL); to kmem_free()
|
1.27 |
| 29-Mar-2009 |
ad | kernel memory guard for DEBUG kernels, proposed on tech-kern. See kmem_alloc(9) for details.
|
1.26 |
| 18-Feb-2009 |
yamt | use %zu for size_t
|
1.25 |
| 17-Feb-2009 |
ad | Fix min/max confusion that causes a problem with DEBUG on some architectures. Independently spotted by yamt@. /brick ad
|
1.24 |
| 06-Feb-2009 |
enami | branches: 1.24.2; Use same expression to decide to use pool cache or not in both kmem_alloc/free.
|
1.23 |
| 01-Feb-2009 |
ad | Apply kmem patch posted to tech-kern.
- Add another level of caches, for max quantum cache size -> PAGE_SIZE.
- Add debug code to verify that kmem_free() is given the correct size.
|
1.22 |
| 15-Dec-2008 |
ad | Back out the VMEM_ADDR_NULL change. It's too invasive.
|
1.21 |
| 15-Dec-2008 |
ad | Check for VMEM_ADDR_NULL, not NULL.
|
1.20 |
| 15-Dec-2008 |
ad | Define VMEM_ADDR_NULL as UINTPTR_MAX, otherwise a vmem that can allocate a block starting at zero will not work.
XXX pool_cache uses NULL to signify failed allocation. XXX how did the percpu allocator work before?
|
1.19 |
| 09-Feb-2008 |
yamt | branches: 1.19.10; 1.19.18; 1.19.26; if DEBUG, over-allocate 1 byte to detect overrun.
|
1.18 |
| 28-Dec-2007 |
yamt | sprinkle more kmem_poison_check.
|
1.17 |
| 07-Nov-2007 |
ad | branches: 1.17.4; 1.17.6; Merge from vmlocking:
- pool_cache changes.
- Debugger/procfs locking fixes.
- Other minor changes.
|
1.16 |
| 09-Jul-2007 |
ad | branches: 1.16.6; 1.16.8; 1.16.12; 1.16.14; Merge some of the less invasive changes from the vmlocking branch:
- kthread, callout, devsw API changes
- select()/poll() improvements
- miscellaneous MT safety improvements
|
1.15 |
| 26-Mar-2007 |
hubertf | Remove duplicate #include's. From: Slava Semushin <php-coder@altlinux.ru>
|
1.14 |
| 02-Mar-2007 |
yamt | branches: 1.14.2; 1.14.4; 1.14.6; kmem_backend_alloc: fix a null dereference.
|
1.13 |
| 09-Feb-2007 |
ad | branches: 1.13.2; Merge newlock2 to head.
|
1.12 |
| 05-Feb-2007 |
yamt | kmem_alloc: fix a null dereference reported by Chuck Silvers.
|
1.11 |
| 01-Nov-2006 |
yamt | branches: 1.11.2; 1.11.4; remove some __unused from function parameters.
|
1.10 |
| 12-Oct-2006 |
christos | - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
|
1.9 |
| 28-Aug-2006 |
yamt | branches: 1.9.2; 1.9.4; 1.9.6; don't include sys/lock.h as it is no longer necessary.
|
1.8 |
| 21-Aug-2006 |
martin | Add <sys/lock.h> include for <sys/callback.h>
|
1.7 |
| 20-Aug-2006 |
yamt | move kmem_kva_reclaim_callback out of #ifdef DEBUG. fixes compilation problem in the case of !DEBUG. pointed by Kurt Schreiner.
|
1.6 |
| 20-Aug-2006 |
yamt | implement kva reclamation for kmem_alloc quantum cache.
|
1.5 |
| 20-Aug-2006 |
yamt | kmem_init: use vmem quantum cache. XXX needs tune.
|
1.4 |
| 08-Jul-2006 |
yamt | branches: 1.4.2; add DEBUG code to detect modifications on free memory.
|
1.3 |
| 03-Jul-2006 |
yamt | change KMEM_QUANTUM_SIZE from sizeof(void *) to (ALIGNBYTES + 1). the latter is larger on eg. sparc.
noted by Christos Zoulas. http://mail-index.NetBSD.org/port-sparc/2006/07/02/0001.html
|
1.2 |
| 25-Jun-2006 |
yamt | branches: 1.2.2; implement kmem_zalloc.
|
1.1 |
| 25-Jun-2006 |
yamt | 1. implement solaris-like vmem. (still primitive, though)
2. implement solaris-like kmem_alloc/free api, using #1. (note: this implementation is backed by kernel_map, thus can't be used from interrupt context.)
|
1.2.2.4 |
| 03-Sep-2006 |
yamt | sync with head.
|
1.2.2.3 |
| 11-Aug-2006 |
yamt | sync with head
|
1.2.2.2 |
| 26-Jun-2006 |
yamt | sync with head.
|
1.2.2.1 |
| 25-Jun-2006 |
yamt | file subr_kmem.c was added on branch yamt-pdpolicy on 2006-06-26 12:52:57 +0000
|
1.4.2.2 |
| 13-Jul-2006 |
gdamore | Merge from HEAD.
|
1.4.2.1 |
| 08-Jul-2006 |
gdamore | file subr_kmem.c was added on branch gdamore-uart on 2006-07-13 17:49:51 +0000
|
1.9.6.2 |
| 10-Dec-2006 |
yamt | sync with head.
|
1.9.6.1 |
| 22-Oct-2006 |
yamt | sync with head
|
1.9.4.2 |
| 09-Sep-2006 |
rpaulo | sync with head
|
1.9.4.1 |
| 28-Aug-2006 |
rpaulo | file subr_kmem.c was added on branch rpaulo-netinet-merge-pcb on 2006-09-09 02:57:16 +0000
|
1.9.2.4 |
| 09-Feb-2007 |
ad | Sync with HEAD.
|
1.9.2.3 |
| 04-Feb-2007 |
ad | Back out previous. kmem_alloc() seems to get called with locks held.
|
1.9.2.2 |
| 04-Feb-2007 |
ad | Temporary hack: grab the kernel_lock around calls into vmem.
|
1.9.2.1 |
| 19-Jan-2007 |
ad | Add some DEBUG code to check that items being freed were previously allocated from the same source. Needs to be enabled via DDB.
|
1.11.4.7 |
| 11-Feb-2008 |
yamt | sync with head.
|
1.11.4.6 |
| 21-Jan-2008 |
yamt | sync with head
|
1.11.4.5 |
| 15-Nov-2007 |
yamt | sync with head.
|
1.11.4.4 |
| 03-Sep-2007 |
yamt | sync with head.
|
1.11.4.3 |
| 26-Feb-2007 |
yamt | sync with head.
|
1.11.4.2 |
| 30-Dec-2006 |
yamt | sync with head.
|
1.11.4.1 |
| 01-Nov-2006 |
yamt | file subr_kmem.c was added on branch yamt-lazymbuf on 2006-12-30 20:50:06 +0000
|
1.11.2.2 |
| 04-Mar-2007 |
bouyer | Pull up following revision(s) (requested by yamt in ticket #488): sys/kern/subr_kmem.c: revision 1.14 kmem_backend_alloc: fix a null dereference.
|
1.11.2.1 |
| 16-Feb-2007 |
tron | Pull up following revision(s) (requested by chs in ticket #418): sys/kern/subr_kmem.c: revision 1.12 kmem_alloc: fix a null dereference reported by Chuck Silvers.
|
1.13.2.2 |
| 15-Apr-2007 |
yamt | sync with head.
|
1.13.2.1 |
| 12-Mar-2007 |
rmind | Sync with HEAD.
|
1.14.6.1 |
| 29-Mar-2007 |
reinoud | Pullup to -current
|
1.14.4.1 |
| 11-Jul-2007 |
mjf | Sync with head.
|
1.14.2.3 |
| 29-Jul-2007 |
ad | Trap free() of areas that contain undestroyed locks. Not a major problem but it helps to catch bugs.
|
1.14.2.2 |
| 10-Apr-2007 |
ad | Sync with head.
|
1.14.2.1 |
| 21-Mar-2007 |
ad | - Replace more simple_locks, and fix up in a few places.
- Use condition variables.
- LOCK_ASSERT -> KASSERT.
|
1.16.14.2 |
| 18-Feb-2008 |
mjf | Sync with HEAD.
|
1.16.14.1 |
| 19-Nov-2007 |
mjf | Sync with HEAD.
|
1.16.12.1 |
| 13-Nov-2007 |
bouyer | Sync with HEAD
|
1.16.8.3 |
| 23-Mar-2008 |
matt | sync with HEAD
|
1.16.8.2 |
| 09-Jan-2008 |
matt | sync with HEAD
|
1.16.8.1 |
| 08-Nov-2007 |
matt | sync with -HEAD
|
1.16.6.1 |
| 11-Nov-2007 |
joerg | Sync with HEAD.
|
1.17.6.1 |
| 02-Jan-2008 |
bouyer | Sync with HEAD
|
1.17.4.1 |
| 10-Dec-2007 |
yamt | - separate kernel va allocation (kernel_va_arena) from in-kernel fault handling (kernel_map).
- add vmem bootstrap code. vmem doesn't rely on malloc anymore.
- make kmem_alloc interrupt-safe.
- kill kmem_map. make malloc a wrapper of kmem_alloc.
|
1.19.26.1 |
| 09-Jul-2012 |
matt | Add another KASSERT...
|
1.19.18.3 |
| 28-Apr-2009 |
skrll | Sync with HEAD.
|
1.19.18.2 |
| 03-Mar-2009 |
skrll | Sync with HEAD.
|
1.19.18.1 |
| 19-Jan-2009 |
skrll | Sync with HEAD.
|
1.19.10.3 |
| 11-Mar-2010 |
yamt | sync with head
|
1.19.10.2 |
| 20-Jun-2009 |
yamt | sync with head
|
1.19.10.1 |
| 04-May-2009 |
yamt | sync with head.
|
1.24.2.2 |
| 23-Jul-2009 |
jym | Sync with HEAD.
|
1.24.2.1 |
| 13-May-2009 |
jym | Sync with HEAD.
Commit is split, to avoid a "too many arguments" protocol error.
|
1.32.2.1 |
| 30-Apr-2010 |
uebayasi | Sync with HEAD.
|
1.33.6.1 |
| 05-Mar-2011 |
bouyer | Sync with HEAD
|
1.33.4.1 |
| 06-Jun-2011 |
jruoho | Sync with HEAD.
|
1.33.2.1 |
| 05-Mar-2011 |
rmind | sync with head
|
1.36.2.3 |
| 22-May-2014 |
yamt | sync with head.
for a reference, the tree before this commit was tagged as yamt-pagecache-tag8.
this commit was split into small chunks to avoid a limitation of cvs. ("Protocol error: too many arguments")
|
1.36.2.2 |
| 30-Oct-2012 |
yamt | sync with head
|
1.36.2.1 |
| 17-Apr-2012 |
yamt | sync with head
|
1.38.2.3 |
| 29-Apr-2012 |
mrg | sync to latest -current.
|
1.38.2.2 |
| 05-Apr-2012 |
mrg | sync to latest -current.
|
1.38.2.1 |
| 18-Feb-2012 |
mrg | merge to -current.
|
1.42.2.3 |
| 20-Apr-2013 |
bouyer | Pull up following revision(s) (requested by para in ticket #876):
sys/kern/subr_kmem.c: revision 1.47
Addresses PR/47512: properly return NULL for failed allocations, not 0x8, with size checks enabled.
|
1.42.2.2 |
| 12-Aug-2012 |
martin | branches: 1.42.2.2.4; Pull up following revision(s) (requested by para in ticket #486):
sys/kern/subr_kmem.c: revision 1.46 (via patch)
- split the allocation lookup table to decrease overall memory used, making the allocator more flexible for allocations larger than 4kb
- move the encoded "size" under DEBUG back to the beginning of the allocated chunk
|
1.42.2.1 |
| 03-Apr-2012 |
riz | Pull up following revision(s) (requested by para in ticket #155):
sys/kern/subr_vmem.c: revision 1.73
sys/kern/subr_kmem.c: revision 1.43
sys/rump/librump/rumpkern/vm.c: revision 1.124
- make accounting for vm_inuse sane
- while here, don't statically allocate for more caches than required
- adjust rump for static pool_cache count (should have gone in with subr_vmem 1.73)
- don't overallocate once we leave the caches
|
1.42.2.2.4.1 |
| 20-Apr-2013 |
bouyer | Pull up following revision(s) (requested by para in ticket #876):
sys/kern/subr_kmem.c: revision 1.47
Addresses PR/47512: properly return NULL for failed allocations, not 0x8, with size checks enabled.
|
1.46.2.3 |
| 03-Dec-2017 |
jdolecek | update from HEAD
|
1.46.2.2 |
| 20-Aug-2014 |
tls | Rebase to HEAD as of a few days ago.
|
1.46.2.1 |
| 23-Jun-2013 |
tls | resync from head
|
1.50.4.1 |
| 18-May-2014 |
rmind | sync with head
|
1.51.2.1 |
| 10-Aug-2014 |
tls | Rebase.
|
1.60.4.3 |
| 28-Aug-2017 |
skrll | Sync with HEAD
|
1.60.4.2 |
| 19-Mar-2016 |
skrll | Sync with HEAD
|
1.60.4.1 |
| 22-Sep-2015 |
skrll | Sync with HEAD
|
1.60.2.1 |
| 03-Dec-2017 |
snj | Pull up following revision(s) (requested by mlelstv in ticket #1521):
share/man/man9/kmem.9: revision 1.20 via patch
share/man/man9/vmem.9: revision 1.16
sys/kern/subr_kmem.c: revision 1.62
sys/kern/subr_vmem.c: revision 1.94
fix vmem_alloc() to never return an error for VM_SLEEP requests, thus fixing kmem_alloc() to never return NULL for KM_SLEEP requests. instead these operations will retry forever, which was the intent.
|
1.62.4.1 |
| 21-Apr-2017 |
bouyer | Sync with HEAD
|
1.62.2.1 |
| 26-Apr-2017 |
pgoyette | Sync with HEAD
|
1.66.4.2 |
| 13-Apr-2020 |
martin | Mostly merge changes from HEAD upto 20200411
|
1.66.4.1 |
| 10-Jun-2019 |
christos | Sync with HEAD
|
1.66.2.2 |
| 26-Dec-2018 |
pgoyette | Sync with HEAD, resolve a few conflicts
|
1.66.2.1 |
| 06-Sep-2018 |
pgoyette | Sync with HEAD
Resolve a couple of conflicts (result of the uimin/uimax changes)
|
1.77.2.1 |
| 25-Jan-2020 |
ad | Sync with head.
|
1.80.2.1 |
| 03-Apr-2021 |
thorpej | Sync with HEAD.
|