History log of /src/sys/arch/amd64/stand/prekern/prekern.h
Revision 1.25 | 21-Aug-2022 | mlelstv
    Adapt to pmap/bootspace migrations.

Revision 1.24 | 04-May-2021 | khorben
    prekern: add support for warning messages.
    As submitted on port-amd64@ (part 1/3). Tested on NetBSD/amd64.

Revision 1.23 | 23-May-2020 | maxv
    branches: 1.23.6;
    Bump copyrights.

Revision 1.22 | 07-May-2020 | maxv
    Forgot to commit this file as part of elf.c::rev1.21 and
    mm.c::rev1.27.

Revision 1.21 | 05-May-2020 | maxv
    Gather the section filtering in a single function, and add a sanity
    check when relocating, to make sure the section we're accessing is
    mappable. Currently this check fails, because of the Xen section,
    which has RELAs but is an unmappable unallocated note.
    Also improve the prekern ASSERTs while here.

Revision 1.20 | 20-Jun-2018 | maxv
    Add and use bootspace.smodule. Initialize it in locore/prekern to
    better hide the specifics from the "upper" layers. This allows for
    greater flexibility.

Revision 1.19 | 15-Jan-2018 | christos
    branches: 1.19.2;
    Avoid typedef redefinitions.

Revision 1.18 | 26-Nov-2017 | maxv
    branches: 1.18.2;
    Add a PRNG for the prekern, based on SHA512. The formula is
    basically:

        Y0   = SHA512(entropy-file, 256bit rdseed, 64bit rdtsc)
        Yn+1 = SHA512(256bit lowerhalf(Yn), 256bit rdseed, 64bit rdtsc)

    On each round, random values are taken from the higher half of Yn.
    If rdseed is not available, rdrand is used.
    The SHA1 checksum of entropy-file is verified. However, the
    rndsave_t::data field is not updated by the prekern, because the area
    is accessed via the read-only view we created in locore. I like this
    design, so the field will have to be updated differently.

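The chaining described in the 1.18 entry above can be sketched as follows. This is a hedged illustration, not the prekern's actual C code: hashlib stands in for the prekern's SHA512 implementation, os.urandom stands in for rdseed/rdrand, and a monotonic timestamp stands in for rdtsc; all function names are illustrative.

```python
import hashlib
import os
import time

def _seed_material() -> bytes:
    # 256-bit stand-in for rdseed + 64-bit stand-in for rdtsc
    return os.urandom(32) + time.monotonic_ns().to_bytes(8, "little")

def prng_init(entropy_file: bytes) -> bytes:
    # Y0 = SHA512(entropy-file, 256bit rdseed, 64bit rdtsc)
    return hashlib.sha512(entropy_file + _seed_material()).digest()

def prng_round(y: bytes) -> tuple[bytes, bytes]:
    # Yn+1 = SHA512(256bit lowerhalf(Yn), 256bit rdseed, 64bit rdtsc);
    # the random output for this round is the higher half of Yn.
    out = y[32:]
    y_next = hashlib.sha512(y[:32] + _seed_material()).digest()
    return y_next, out
```

Note how each round feeds only the lower 256 bits of the previous state back into the hash, while output is drawn from the upper 256 bits, so observed output does not directly reveal the next input.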
Revision 1.17 | 26-Nov-2017 | maxv
    Add rdrand.

Revision 1.16 | 21-Nov-2017 | maxv
    Clean up and add some ASSERTs.

Revision 1.15 | 15-Nov-2017 | maxv
    Small cleanup.

Revision 1.14 | 15-Nov-2017 | maxv
    Define MM_PROT_* locally.

Revision 1.13 | 15-Nov-2017 | maxv
    Support large pages on KASLR kernels, in a way that does not reduce
    randomness, but on the contrary increases it.

    The size of the kernel sub-blocks is changed to be 1MB. This produces
    a kernel with sections that are always < 2MB in size, so each can fit
    a large page.

    Each section is put in a 2MB physical chunk. In this chunk, there is
    a padding of approximately 1MB. The prekern uses a random offset
    aligned to sh_addralign to shift the section in physical memory.

    For example, the physical memory layout created by the bootloader for
    .text.4 and .rodata.0:

    +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
    |+---------------+                  |+---------------+                  |
    ||    .text.4    |       PAD        ||   .rodata.0   |       PAD        |
    |+---------------+                  |+---------------+                  |
    +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
    PA                                  PA+2MB                              PA+4MB

    Then, the physical memory layout after having been shifted by the
    prekern:

    +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
    | P +---------------+               |     +---------------+             |
    | A |    .text.4    |      PAD      | PAD |   .rodata.0   |     PAD     |
    | D +---------------+               |     +---------------+             |
    +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
    PA                                  PA+2MB                              PA+4MB

    The kernel maps these 2MB physical chunks with 2MB large pages.
    Therefore, randomness is enforced at both the virtual and physical
    levels, and the resulting entropy is higher than that of the previous
    implementation.

    The padding around the section is filled by the prekern. So as not to
    consume too much memory, sections smaller than PAGE_SIZE are mapped
    with normal pages - because there is no point in optimizing them. In
    these normal pages, the same shift is applied.

    This change has two additional advantages: (a) cache attacks based on
    the TLB are mostly mitigated, because even if you are able to
    determine that a given page-aligned range is mapped as executable,
    you don't know where exactly within that range the section actually
    begins, and (b) given that we slightly randomize the physical layout,
    we make some rare physical attacks more difficult to conduct.

    NOTE: after this change you need to update GENERIC_KASLR / prekern /
    bootloader.

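The random shift described in the 1.13 entry above can be sketched as a small calculation. This is a minimal sketch of the idea only, not the prekern's code: it picks an offset, aligned to sh_addralign, that keeps a section of sect_size bytes inside its 2MB physical chunk. All names are illustrative.

```python
import secrets

CHUNK_SIZE = 2 * 1024 * 1024  # one 2MB large-page-backed physical chunk

def random_section_shift(sect_size: int, sh_addralign: int) -> int:
    # Slack left in the chunk (roughly 1MB per the log entry, since
    # sub-blocks are capped at 1MB).
    pad = CHUNK_SIZE - sect_size
    # Number of valid offsets aligned to sh_addralign, including 0.
    nslots = pad // sh_addralign + 1
    return secrets.randbelow(nslots) * sh_addralign
```

Because the offset is drawn per section, the physical position of each section within its chunk varies independently from boot to boot.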
Revision 1.12 | 14-Nov-2017 | maxv
    Add -Wstrict-prototypes, and fix each warning.

Revision 1.11 | 13-Nov-2017 | maxv
    Change the mapping logic: don't group sections of the same type into
    segments, and rather map each section independently at a random VA.
    In particular, .data and .bss are not merged anymore and reside at
    different addresses.

Revision 1.10 | 13-Nov-2017 | maxv
    Link libkern in the prekern, and remove redefined functions.

Revision 1.9 | 11-Nov-2017 | maxv
    Modify the layout of the bootspace structure, in such a way that it
    can contain several kernel segments of the same type (e.g. several
    .text segments). Some parts are still a bit messy but will be cleaned
    up soon.
    I cannot compile-test this change on i386, but it seems fine enough.
    NOTE: you need to rebuild and reinstall a new prekern after this
    change.

Revision 1.8 | 10-Nov-2017 | maxv
    Implement memcpy; the builtin version does not work with variable
    sizes.

Revision 1.7 | 10-Nov-2017 | maxv
    Add cpuid and rdseed.

Revision 1.6 | 09-Nov-2017 | maxv
    Define utility functions as inlines in prekern.h.

Revision 1.5 | 09-Nov-2017 | maxv
    Fill in the page padding. Only .text is pre-filled by the ld script,
    but this will change in the future.

Revision 1.4 | 05-Nov-2017 | maxv
    Mprotect the segments in mm.c using bootspace, and remove the now
    unused fields of elfinfo.

Revision 1.3 | 29-Oct-2017 | maxv
    Randomize the kernel segments independently. That is to say, put
    text, rodata and data at different addresses (and in a random order).

    To achieve that, the mapping order in the prekern is changed. Until
    now, we were creating the kernel map the following way:
        -> choose a random VA
        -> map [kernpa_start; kernpa_end[ at this VA
        -> parse the ELF structures from there
        -> determine where exactly the kernel segments are located
        -> relocate, etc.
    Now, we are doing:
        -> create a read-only view of [kernpa_start; kernpa_end[
        -> from this view, compute the size of the "head" region
        -> choose a random VA in the HEAD window, and map the head there
        -> for each region in (text, rodata, data, boot):
            -> compute the size of the region from the RO view
            -> choose a random VA in the KASLR window
            -> map the region there
        -> relocate, etc.

    Each time we map a region, we initialize its bootspace fields right
    away.

    The "head" region must be put before the other regions in memory,
    because the kernel uses (headva + sh_offset) to get the addresses of
    the symbols, and the offset is unsigned. Given that the head does not
    have an mcmodel constraint, its location is randomized in a window
    located below the KASLR window.

    The rest of the regions being in the same window, we need to detect
    collisions. Note that the module map is embedded in the "boot"
    region, and therefore its location is randomized too.

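The collision detection mentioned in the 1.3 entry above can be sketched as follows. This is a hedged illustration of the general technique, not the prekern's code: each region is placed at a random VA in the shared window, and the draw is repeated until the candidate does not overlap an already-placed region. The names and the page-granularity alignment are assumptions.

```python
import secrets

def _overlap(a_va: int, a_sz: int, b_va: int, b_sz: int) -> bool:
    # Two half-open ranges [va, va+sz[ intersect iff each starts before
    # the other ends.
    return a_va < b_va + b_sz and b_va < a_va + a_sz

def place_region(size: int, win_start: int, win_size: int,
                 placed: list, align: int = 0x1000) -> int:
    # placed: list of (va, size) tuples already mapped in the window.
    while True:
        nslots = (win_size - size) // align + 1
        va = win_start + secrets.randbelow(nslots) * align
        if not any(_overlap(va, size, p_va, p_sz) for p_va, p_sz in placed):
            placed.append((va, size))
            return va
```

Rejection sampling like this keeps the VA distribution uniform over the non-colliding positions, at the cost of an unbounded (but in practice short) retry loop when the window is much larger than the regions.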
Revision 1.2 | 29-Oct-2017 | maxv
    Add a fifth region, called "head". On KASLR kernels it contains the
    ELF Header and the ELF Section Headers. On normal kernels it is empty
    (the headers are in the "boot" region).
    Note: if you're using GENERIC_KASLR, you also need to rebuild the
    prekern.

Revision 1.1 | 10-Oct-2017 | maxv
    Add the amd64 prekern. It is a kernel relocator used for Kernel ASLR
    (see tech-kern@). It works, but is not yet linked to the build
    system, because I can't build a distribution right now.

Revision 1.18.2.2 | 03-Dec-2017 | jdolecek
    update from HEAD

Revision 1.18.2.1 | 26-Nov-2017 | jdolecek
    file prekern.h was added on branch tls-maxphys on 2017-12-03 11:35:48
    +0000

Revision 1.19.2.1 | 25-Jun-2018 | pgoyette
    Sync with HEAD

Revision 1.23.6.1 | 13-May-2021 | thorpej
    Sync with HEAD.
