History log of /src/sys/arch/x86/include/cpufunc.h |
Revision | Date | Author | Comments |
1.42 |
| 24-Oct-2020 |
mgorny | Issue 64-bit versions of *XSAVE* for 64-bit amd64 programs
When calling FXSAVE, XSAVE, FXRSTOR, ... for 64-bit programs on amd64 use the 64-suffixed variant in order to include the complete FIP/FDP registers in the x87 area.
The difference between the two variants is that the FXSAVE64 (new) variant represents FIP/FDP as 64-bit fields (union fp_addr.fa_64), while the legacy FXSAVE variant uses split fields: 32-bit offset, 16-bit segment and 16-bit reserved field (union fp_addr.fa_32). The latter implies that the actual addresses are truncated to 32 bits which is insufficient in modern programs.
The change is applied only to 64-bit programs on amd64. Plain i386 and compat32 continue using plain FXSAVE. Similarly, NVMM is not changed as I am not familiar with that code.
This is a potentially breaking change. However, I don't think it likely to actually break anything because the data provided by the old variant were not meaningful (because of the truncated pointer).
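For reference, a minimal sketch of the two FIP/FDP representations mentioned above, assuming the fp_addr layout described in this commit message (the authoritative definition lives in the x86 extended-state headers):

    #include <stdint.h>

    /* Sketch only: FIP/FDP as stored in the x87 area of the save frame. */
    union fp_addr {
        uint64_t fa_64;          /* FXSAVE64/XSAVE64: full 64-bit address */
        struct {
            uint32_t fa_off;     /* FXSAVE: 32-bit offset (truncated) */
            uint16_t fa_seg;     /* FXSAVE: 16-bit segment selector */
            uint16_t fa_rsvd;    /* FXSAVE: 16-bit reserved field */
        } fa_32;
    };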
|
1.41 |
| 15-Jun-2020 |
msaitoh | Serialize rdtsc with lfence, mfence or cpuid to read the TSC more precisely.
x86/x86/tsc.c rev. 1.67 reduced the cache problem and brought a big improvement, but there is still room. I measured the effect of lfence, mfence, cpuid and rdtscp. The impact on TSC skew and/or drift is:
AMD: mfence > rdtscp > cpuid > lfence-serialize > lfence = nomodify
Intel: lfence > rdtscp > cpuid > nomodify
So mfence is the best on AMD and lfence is the best on Intel. If the CPU has no SSE2, we can use cpuid.
NOTE:
- An AMD document says the DE_CFG_LFENCE_SERIALIZE bit can be used for serializing, but it's not so good.
- On Intel i386 (not amd64), the improvement seems to be very small.
- The rdtscp instruction can be used as a serializing instruction + rdtsc, but it's not as good as [lm]fence. Both Intel's and AMD's documents say that the latency of rdtscp is higher than that of rdtsc, so I suspect the difference in results comes from that.
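To illustrate the fencing idea (a sketch only, not the tsc.c code): issue the fence immediately before RDTSC so earlier memory operations cannot be reordered past the timestamp read.

    #include <stdint.h>

    static inline uint64_t
    rdtsc_fenced(void)
    {
        uint32_t lo, hi;

        /* LFENCE on Intel, MFENCE on AMD; CPUID is the pre-SSE2 fallback. */
        __asm volatile("lfence" ::: "memory");
        __asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return ((uint64_t)hi << 32) | lo;
    }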
|
1.40 |
| 14-Jun-2020 |
riastradh | Use static constant rather than stack memset buffer for zero fpregs.
|
1.39 |
| 02-May-2020 |
maxv | Modify the hotpatch mechanism, in order to make it much less ROP-friendly.
Currently x86_patch_window_open is a big problem, because it is a perfect function to inject/modify executable code with ROP.
- Remove x86_patch_window_open(), along with its x86_patch_window_close() counterpart.
- Introduce a read-only link-set of hotpatch descriptor structures, which reference a maximum of two read-only hotpatch sources.
- Modify x86_hotpatch() to open a window and call the new x86_hotpatch_apply() function in a hard-coded manner.
- Modify x86_hotpatch() to take a name and a selector, and have x86_hotpatch_apply() resolve the descriptor from the name and the source from the selector, before hotpatching.
- Move the error handling in a separate x86_hotpatch_cleanup() function, that gets called after we closed the window.
The resulting implementation is a bit complex and non-obvious. But it gains the following properties: the code executed in the hotpatch window is strictly hard-coded (no callback and no possibility to execute your own code in the window) and the pointers this code accesses are strictly read-only (no possibility to forge pointers to hotpatch an area that was not designated as hotpatchable at compile-time, and no possibility to choose what bytes to write other than the maximum of two read-only templates that were designated as valid for the given destination at compile-time).
With current CPUs this slightly improves a situation that is already pretty bad by definition on x86. Assuming CET however, this change closes a big hole and is kinda great.
The only ~problem there is, is that dtrace-fbt tries to hotpatch random places with random bytes, and there is just no way to make it safe. However dtrace is only in a module, that is rarely used and never compiled into the kernel, so it's not a big problem; add a shitty & vulnerable independent hotpatch window in it, and leave big XXXs. It looks like fbt is going to collapse soon anyway.
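A rough sketch of the descriptor scheme described above; the struct and field names here are hypothetical, only the shape (read-only link-set entries referencing at most two read-only templates, selected by name and selector) follows the commit message.

    #include <stddef.h>
    #include <stdint.h>

    struct hotpatch_source {
        const uint8_t *hps_bytes;            /* read-only replacement template */
        size_t hps_size;
    };

    struct hotpatch_descriptor {
        uint8_t hpd_name;                    /* designates the patch point */
        struct hotpatch_source hpd_src[2];   /* at most two valid templates */
    };

    /* Callers pick a descriptor by name and one of its two templates. */
    void x86_hotpatch(uint8_t name, uint8_t sel);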
|
1.38 |
| 25-Apr-2020 |
bouyer | Merge the bouyer-xenpvh branch, bringing in Xen PV drivers support under HVM guests in GENERIC. Xen support can be disabled at runtime with boot -c disable hypervisor
|
1.37 |
| 30-Oct-2019 |
maxv | branches: 1.37.6; More inlined ASM.
|
1.36 |
| 07-Sep-2019 |
maxv | Convert rdmsr_locked and wrmsr_locked to inlines.
|
1.35 |
| 07-Sep-2019 |
maxv | Add a memory barrier on wrmsr, because some MSRs control memory access rights (we don't use them though). Also add barriers on fninit and clts for safety.
|
1.34 |
| 05-Jul-2019 |
maxv | branches: 1.34.2; More inlines, prerequisites for future changes. Also, remove fngetsw(), which was a duplicate of fnstsw().
|
1.33 |
| 03-Jul-2019 |
maxv | Inline x86_cpuid2(), prerequisite for future changes. Also, add "memory" on certain other inlines, to make sure GCC does not reorder.
|
1.32 |
| 30-May-2019 |
christos | use __asm
|
1.31 |
| 29-May-2019 |
maxv | Add PCID support in SVS. This avoids TLB flushes during kernel<->user transitions, which greatly reduces the performance penalty introduced by SVS.
We use two ASIDs, 0 (kern) and 1 (user), and use invpcid to flush pages in both ASIDs.
The read-only machdep.svs.pcid={0,1} sysctl is added, and indicates whether SVS+PCID is in use.
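A hedged sketch of the per-page invalidation in both ASIDs (illustrative types and constants, not the NetBSD inlines): INVPCID takes an invalidation type in a register and a 16-byte memory descriptor holding the PCID and the linear address.

    #include <stdint.h>

    #define INVPCID_ADDRESS 0    /* invalidate one address in one PCID */

    static inline void
    invpcid_one(uint64_t pcid, uint64_t va)
    {
        struct {
            uint64_t pcid;       /* only the low 12 bits are used */
            uint64_t addr;
        } desc = { .pcid = pcid, .addr = va };
        uint64_t op = INVPCID_ADDRESS;

        __asm volatile("invpcid %0,%1" :: "m" (desc), "r" (op) : "memory");
    }

    /* SVS+PCID: flush the page in both the kernel (0) and user (1) ASIDs. */
    static inline void
    svs_flush_page(uint64_t va)
    {
        invpcid_one(0, va);
        invpcid_one(1, va);
    }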
|
1.30 |
| 11-May-2019 |
christos | Undo previous, fixed in userland.
|
1.29 |
| 11-May-2019 |
christos | expose the {rd,wr}msr functions to userland and install the header for the benefit of cpuctl (fix the build).
|
1.28 |
| 09-May-2019 |
bouyer | sti/cli are not allowed on Xen; we have to clear/set a bit in the shared page. Revert x86_disable_intr/x86_enable_intr to plain function calls on XENPV. While there, clean up unused functions and macros, and change the cli()/sti() macros to x86_disable_intr/x86_enable_intr. Makes Xen domU boot again (http://www-soc.lip6.fr/~bouyer/NetBSD-tests/xen/HEAD/)
|
1.27 |
| 04-May-2019 |
maxv | More inlined ASM. While here switch to proper types.
|
1.26 |
| 01-May-2019 |
maxv | Start converting the x86 CPU functions to inlined ASM. Matters for NVMM, where some are invoked millions of times.
|
1.25 |
| 01-May-2019 |
maxv | Remove unused functions and reorder a little.
|
1.24 |
| 22-Feb-2018 |
maxv | branches: 1.24.4; Improve the SVS initialization.
Declare x86_patch_window_open() and x86_patch_window_close(), and globalify x86_hotpatch().
Introduce svs_enable() in x86/svs.c, that does the SVS hotpatching.
Change svs_init() to take a bool. This function gets called twice: early, when the system has just booted (and nothing is initialized), and later, when at least pmap_kernel has been initialized.
|
1.23 |
| 15-Oct-2017 |
maxv | Add setds and setes, will be useful in the future.
|
1.22 |
| 13-Dec-2016 |
kamil | branches: 1.22.8; Tear down KSTACK_CHECK_DR0, an i386-only feature to detect stack overflow
This feature was intended to detect stack overflow with the CPU Debug Registers (x86). It was never ported to other ports, not even amd64, and it would have to be adapted for SMP...
Currently there might be better ways to detect stack overflows, like page mapping protection. Since the number of Debug Registers is restricted (4 on x86), tear it down completely.
This interface introduced helper functions for Debug Registers; they will be replaced with the new <x86/dbregs.h> interface.
KSTACK_CHECK_DR0 was disabled by default and won't affect ordinary users.
Sponsored by <The NetBSD Foundation>
|
1.21 |
| 13-Dec-2016 |
kamil | Switch x86 CPU Debug Register types from vaddr_t to register_t
This is a more opaque and appropriate type, as vaddr_t is meant to be used for virtual address values. Not all DRs on x86 are used to represent virtual addresses (DR6 and DR7 definitely are not).
No functional change intended.
Change suggested by <christos>
Sponsored by <The NetBSD Foundation>
|
1.20 |
| 27-Nov-2016 |
kamil | Add accessors for available x86 Debug Registers
There are 8 Debug Registers on i386 (available at least since the 80386) and 16 on AMD64. Currently DR4 and DR5 are reserved on both CPU families and DR8-DR15 are still reserved on AMD64. Therefore add accessors for DR0-DR3 and DR6-DR7 for all ports.
Debug Registers x86:
* DR0-DR3 Debug Address Registers
* DR4-DR5 Reserved
* DR6 Debug Status Register
* DR7 Debug Control Register
* DR8-DR15 Reserved
Access to the registers is available only from the kernel (ring 0), as protected access is required. For this reason, the special XEN functions must be used to get and set the registers in XEN3 kernels.
XEN specific functions as defined in NetBSD:
- HYPERVISOR_get_debugreg()
- HYPERVISOR_set_debugreg()
This code extends the existing rdr6() and ldr6() accessors with the additional:
- rdr0() & ldr0()
- rdr1() & ldr1()
- rdr2() & ldr2()
- rdr3() & ldr3()
- rdr7() & ldr7()
Traditionally the accessors for DR6 took a vaddr_t argument; while that is an appropriate type for DR0-DR3, DR6 and DR7 should arguably use u_long, but it's not a big deal. The resulting functionality is equivalent, so stick to this convention and use the vaddr_t type for all DR accessors.
There was already a function defined for rdr6() in XEN, but it had a nit on AMD64: it cast HYPERVISOR_get_debugreg() to u_int (32-bit on AMD64), truncating the result. That still works for DR6, but for the sake of simplicity always return the full 64-bit value.
The new accessors duplicate the functionality of the dr0() function available on i386 within the KSTACK_CHECK_DR0 option. dr0() is a specialized layer with logic to set appropriate types of interrupts; the new accessors are designed to pass verbatim values from userland (with simple sanity checks in the kernel). At the moment there are no plans to make it possible for KSTACK_CHECK_DR0 to coexist with debug registers for user applications (debuggers).
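A minimal sketch of the rdrN()/ldrN() accessor pattern described above (amd64 bare-metal flavour only; under XEN3 the same operations go through HYPERVISOR_get_debugreg() and HYPERVISOR_set_debugreg() instead):

    #include <sys/types.h>    /* vaddr_t in the NetBSD kernel */

    static inline vaddr_t
    rdr0(void)
    {
        vaddr_t val;

        __asm volatile("movq %%dr0,%0" : "=r" (val));
        return val;
    }

    static inline void
    ldr0(vaddr_t val)
    {
        __asm volatile("movq %0,%%dr0" : : "r" (val));
    }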
options KSTACK_CHECK_DR0
Detect kernel stack overflow using the DR0 register. This option uses the DR0 register exclusively, so you can't use the DR0 register for any other purpose (e.g., a hardware breakpoint) if you turn this on.
The KSTACK_CHECK_DR0 functionality was designed for i386 and never ported to amd64.
Code tested on i386 and amd64 with kernels: GENERIC, XEN3_DOMU, XEN3_DOM0.
Sponsored by <The NetBSD Foundation>
|
1.19 |
| 05-Jan-2016 |
hannken | branches: 1.19.2; Adapt the prototypes and usage of rdmsr_locked() and wrmsr_locked() to their implementation. Neither function takes the passcode as an argument.
As wrmsr_locked() no longer writes the passcode to the MSR, erratum 721 on my Opteron 2356 really gets patched and cc1 no longer crashes with SIGSEGV.
|
1.18 |
| 25-Feb-2014 |
dsl | branches: 1.18.4; 1.18.6; 1.18.8; Add support for saving the AVX-256 ymm registers during FPU context switches. Add support for the forthcoming AVX-512 registers. Code compiled with -mavx seems to work, but I've not tested context switches with live ymm registers. There is a small cost on fork/exec (a larger area is copied/zeroed), but I don't think the ymm registers are read/written unless they have been used. The code uses XSAVE on all CPUs; I'm not brave enough to enable XSAVEOPT.
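A sketch of the XSAVE invocation idea (illustrative names, not the fpu.c code): EDX:EAX carry the requested-component bitmap, e.g. x87 | SSE | AVX.

    #include <stdint.h>

    #define XCR0_X87 0x01ULL
    #define XCR0_SSE 0x02ULL
    #define XCR0_YMM 0x04ULL    /* AVX-256 high halves */

    /* 'area' must point to a 64-byte aligned XSAVE area of sufficient size. */
    static inline void
    xsave_sketch(uint8_t *area, uint64_t mask)
    {
        __asm volatile("xsave (%0)"
            : /* no outputs; the "memory" clobber covers the save area */
            : "r" (area), "a" ((uint32_t)mask), "d" ((uint32_t)(mask >> 32))
            : "memory");
    }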
|
1.17 |
| 13-Feb-2014 |
dsl | Check the argument types for the fpu asm functions.
|
1.16 |
| 12-Feb-2014 |
dsl | Change i386 to use x86/fpu.c instead of i386/isa/npx.c. This changes the trap10 and trap13 code to call directly into fpu.c, removing all the code for T_ARITHTRAP, T_XMM and T_FPUNDA from i386/trap.c. Not all of the code that appeared to handle fpu traps was ever called! Most of the changes just replace the include of machine/npx.h with x86/fpu.h (or remove it entirely).
|
1.15 |
| 09-Feb-2014 |
dsl | Add x86_stmxcsr for amd64.
|
1.14 |
| 08-Dec-2013 |
dsl | Add some definitions for cpu 'extended state'. These are needed for support of the AVX SIMD instructions. Nothing yet uses them.
|
1.13 |
| 24-Sep-2011 |
jym | branches: 1.13.2; 1.13.8; 1.13.12; 1.13.14; 1.13.16; 1.13.22; Import rdmsr_safe(msr, *value) for the x86 world. It allows reading MSRs in a safe way by handling the fault that might trigger for certain register <> CPU/arch combos.
Requested by Jukka. Patch adapted from one found in DragonflyBSD.
|
1.12 |
| 07-Jul-2010 |
chs | add the guts of TLS support on amd64. based on joerg's patch, reworked by me to support 32-bit processes as well. we now keep %fs and %gs loaded with the user values while in the kernel, which means we don't need to reload them when returning to user mode.
|
1.11 |
| 27-Jan-2009 |
christos | branches: 1.11.2; 1.11.4; 1.11.6; factor out common reset code.
|
1.10 |
| 19-Dec-2008 |
cegger | x86_patch() is not available on Xen. Make Xen kernels link again.
|
1.9 |
| 19-Dec-2008 |
ad | PR kern/40213 my i386 machine can't boot because of tsc
- Patch in atomic_cas_64() twice. The first patch is early and makes the MP-atomic version available if we have cmpxchg8b. The second patch strips the lock prefix if ncpu==1.
- Fix the i486 atomic_cas_64() to not unconditionally enable interrupts.
|
1.8 |
| 30-Apr-2008 |
cegger | branches: 1.8.8; 1.8.10; AMD's APM Volume 2 says 'All control registers are 64bit in long mode'. Fix the CR0 prototype to match this (the asm implementation is correct though). OK ad
|
1.7 |
| 28-Apr-2008 |
martin | Remove clause 3 and 4 from TNF licenses
|
1.6 |
| 27-Apr-2008 |
ad | branches: 1.6.2; +lcr2
|
1.5 |
| 16-Apr-2008 |
cegger | branches: 1.5.2; - use aprint_*_dev and device_xname - use POSIX integer types
|
1.4 |
| 01-Jan-2008 |
yamt | branches: 1.4.6; add x86_cpuid2, which can specify the ecx register.
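For illustration, a sketch of a cpuid wrapper that also sets ECX (the subleaf); the name and signature here are hypothetical, the real x86_cpuid2() prototype is in the header.

    #include <stdint.h>

    static inline void
    cpuid2_sketch(uint32_t leaf, uint32_t subleaf, uint32_t regs[4])
    {
        __asm volatile("cpuid"
            : "=a" (regs[0]), "=b" (regs[1]), "=c" (regs[2]), "=d" (regs[3])
            : "a" (leaf), "c" (subleaf));
    }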
|
1.3 |
| 15-Nov-2007 |
ad | branches: 1.3.6; Remove support for 80386 level CPUs. PR port-i386/36163.
|
1.2 |
| 26-Sep-2007 |
ad | branches: 1.2.2; 1.2.4; 1.2.6; 1.2.8; 1.2.10; 1.2.12; 1.2.14; Update copyright.
|
1.1 |
| 26-Sep-2007 |
ad | x86 changes for pcc and LKMs.
- Replace most inline assembly with proper functions. As a side effect this reduces the size of amd64 GENERIC by about 120kB, and i386 by a smaller amount. Nearly all of the inlines did something slow, or something that does not need to be fast.
- Make curcpu() and curlwp functions proper, unless __GNUC__ && _KERNEL. In that case make them inlines. Makes curlwp LKM and preemption safe.
- Make bus_space and bus_dma more LKM friendly.
- Share a few more files between the ports.
- Other minor changes.
|
1.2.14.3 |
| 09-Jan-2008 |
matt | sync with HEAD
|
1.2.14.2 |
| 06-Nov-2007 |
matt | sync with HEAD
|
1.2.14.1 |
| 26-Sep-2007 |
matt | file cpufunc.h was added on branch matt-armv6 on 2007-11-06 23:23:34 +0000
|
1.2.12.2 |
| 18-Feb-2008 |
mjf | Sync with HEAD.
|
1.2.12.1 |
| 19-Nov-2007 |
mjf | Sync with HEAD.
|
1.2.10.4 |
| 21-Jan-2008 |
yamt | sync with head
|
1.2.10.3 |
| 07-Dec-2007 |
yamt | sync with head
|
1.2.10.2 |
| 27-Oct-2007 |
yamt | sync with head.
|
1.2.10.1 |
| 26-Sep-2007 |
yamt | file cpufunc.h was added on branch yamt-lazymbuf on 2007-10-27 11:28:54 +0000
|
1.2.8.1 |
| 18-Nov-2007 |
bouyer | Sync with HEAD
|
1.2.6.3 |
| 03-Dec-2007 |
ad | Sync with HEAD.
|
1.2.6.2 |
| 09-Oct-2007 |
ad | Sync with head.
|
1.2.6.1 |
| 26-Sep-2007 |
ad | file cpufunc.h was added on branch vmlocking on 2007-10-09 13:38:41 +0000
|
1.2.4.2 |
| 06-Oct-2007 |
yamt | sync with head.
|
1.2.4.1 |
| 26-Sep-2007 |
yamt | file cpufunc.h was added on branch yamt-x86pmap on 2007-10-06 15:33:31 +0000
|
1.2.2.3 |
| 21-Nov-2007 |
joerg | Sync with HEAD.
|
1.2.2.2 |
| 02-Oct-2007 |
joerg | Sync with HEAD.
|
1.2.2.1 |
| 26-Sep-2007 |
joerg | file cpufunc.h was added on branch jmcneill-pm on 2007-10-02 18:27:49 +0000
|
1.3.6.1 |
| 02-Jan-2008 |
bouyer | Sync with HEAD
|
1.4.6.2 |
| 17-Jan-2009 |
mjf | Sync with HEAD.
|
1.4.6.1 |
| 02-Jun-2008 |
mjf | Sync with HEAD.
|
1.5.2.1 |
| 18-May-2008 |
yamt | sync with head.
|
1.6.2.3 |
| 11-Aug-2010 |
yamt | sync with head.
|
1.6.2.2 |
| 04-May-2009 |
yamt | sync with head.
|
1.6.2.1 |
| 16-May-2008 |
yamt | sync with head.
|
1.8.10.4 |
| 01-Jun-2015 |
sborrill | Pull up the following revision(s) (requested by msaitoh in ticket #1969): sys/arch/x86/include/cpufunc.h: revision 1.13 sys/arch/amd64/amd64/cpufunc.S: revision 1.20-1.21 via patch sys/arch/i386/i386/cpufunc.S: revision 1.16-1.17, 1.21 via patch
Backport rdmsr_safe() to access MSR safely.
|
1.8.10.3 |
| 02-Feb-2009 |
snj | branches: 1.8.10.3.6; 1.8.10.3.10; Pull up following revision(s) (requested by ad in ticket #396): sys/arch/amd64/amd64/machdep.c: revision 1.122 sys/arch/i386/i386/machdep.c: revision 1.657 sys/arch/x86/include/cpufunc.h: revision 1.11 sys/arch/x86/x86/x86_machdep.c: revision 1.28 factor out common reset code.
|
1.8.10.2 |
| 02-Feb-2009 |
snj | Pull up following revision(s) (requested by bouyer in ticket #343): sys/arch/x86/x86/identcpu.c: revision 1.13 sys/arch/x86/include/cpufunc.h: revision 1.10 x86_patch() is not available on Xen. Make Xen kernels link again.
|
1.8.10.1 |
| 02-Feb-2009 |
snj | Pull up following revision(s) (requested by ad in ticket #343): common/lib/libc/arch/i386/atomic/atomic.S: revision 1.14 sys/arch/x86/include/cpufunc.h: revision 1.9 sys/arch/x86/x86/identcpu.c: revision 1.12 sys/arch/x86/x86/cpu.c: revision 1.60 sys/arch/x86/x86/patch.c: revision 1.15 PR kern/40213 my i386 machine can't boot because of tsc - Patch in atomic_cas_64() twice. The first patch is early and makes it the MP-atomic version available if we have cmpxchg8b. The second patch strips the lock prefix if ncpu==1. - Fix the i486 atomic_cas_64() to not unconditionally enable interrupts.
|
1.8.10.3.10.1 |
| 01-Jun-2015 |
sborrill | Pull up the following revision(s) (requested by msaitoh in ticket #1969): sys/arch/x86/include/cpufunc.h: revision 1.13 sys/arch/amd64/amd64/cpufunc.S: revision 1.20-1.21 via patch sys/arch/i386/i386/cpufunc.S: revision 1.16-1.17, 1.21 via patch
Backport rdmsr_safe() to access MSR safely.
|
1.8.10.3.6.1 |
| 01-Jun-2015 |
sborrill | Pull up the following revision(s) (requested by msaitoh in ticket #1969): sys/arch/x86/include/cpufunc.h: revision 1.13 sys/arch/amd64/amd64/cpufunc.S: revision 1.20-1.21 via patch sys/arch/i386/i386/cpufunc.S: revision 1.16-1.17, 1.21 via patch
Backport rdmsr_safe() to access MSR safely.
|
1.8.8.2 |
| 03-Mar-2009 |
skrll | Sync with HEAD.
|
1.8.8.1 |
| 19-Jan-2009 |
skrll | Sync with HEAD.
|
1.11.6.1 |
| 05-Mar-2011 |
rmind | sync with head
|
1.11.4.1 |
| 17-Aug-2010 |
uebayasi | Sync with HEAD.
|
1.11.2.1 |
| 24-Oct-2010 |
jym | Sync with HEAD
|
1.13.22.1 |
| 14-Jul-2016 |
snj | Pull up following revision(s) (requested by hannken in ticket #1361): sys/arch/x86/include/cpufunc.h: revision 1.19 sys/arch/x86/x86/errata.c: revision 1.23 Adapt prototypes and usage of rdmsr_locked() and wrmsr_locked() to their implementation. Both functions don't take the passcode as argument. As wrmsr_locked() no longer writes the passcode to the msr the erratum 721 on my Opteron 2356 really gets patched and cc1 no longer crashes with SIGSEGV.
|
1.13.16.1 |
| 18-May-2014 |
rmind | sync with head
|
1.13.14.1 |
| 14-Jul-2016 |
snj | Pull up following revision(s) (requested by hannken in ticket #1361): sys/arch/x86/include/cpufunc.h: revision 1.19 sys/arch/x86/x86/errata.c: revision 1.23 Adapt prototypes and usage of rdmsr_locked() and wrmsr_locked() to their implementation. Both functions don't take the passcode as argument. As wrmsr_locked() no longer writes the passcode to the msr the erratum 721 on my Opteron 2356 really gets patched and cc1 no longer crashes with SIGSEGV.
|
1.13.12.2 |
| 03-Dec-2017 |
jdolecek | update from HEAD
|
1.13.12.1 |
| 20-Aug-2014 |
tls | Rebase to HEAD as of a few days ago.
|
1.13.8.1 |
| 14-Jul-2016 |
snj | Pull up following revision(s) (requested by hannken in ticket #1361): sys/arch/x86/include/cpufunc.h: revision 1.19 sys/arch/x86/x86/errata.c: revision 1.23 Adapt prototypes and usage of rdmsr_locked() and wrmsr_locked() to their implementation. Both functions don't take the passcode as argument. As wrmsr_locked() no longer writes the passcode to the msr the erratum 721 on my Opteron 2356 really gets patched and cc1 no longer crashes with SIGSEGV.
|
1.13.2.1 |
| 22-May-2014 |
yamt | sync with head.
for a reference, the tree before this commit was tagged as yamt-pagecache-tag8.
this commit was split into small chunks to avoid a limitation of cvs. ("Protocol error: too many arguments")
|
1.18.8.1 |
| 06-Feb-2016 |
snj | Pull up following revision(s) (requested by hannken in ticket #1073): sys/arch/x86/x86/errata.c: revision 1.23 sys/arch/x86/include/cpufunc.h: revision 1.19 Adapt prototypes and usage of rdmsr_locked() and wrmsr_locked() to their implementation. Both functions don't take the passcode as argument. As wrmsr_locked() no longer writes the passcode to the msr the erratum 721 on my Opteron 2356 really gets patched and cc1 no longer crashes with SIGSEGV.
|
1.18.6.3 |
| 05-Feb-2017 |
skrll | Sync with HEAD
|
1.18.6.2 |
| 05-Dec-2016 |
skrll | Sync with HEAD
|
1.18.6.1 |
| 19-Mar-2016 |
skrll | Sync with HEAD
|
1.18.4.1 |
| 26-Jan-2016 |
snj | Pull up following revision(s) (requested by hannken in ticket #1073): sys/arch/x86/x86/errata.c: revision 1.23 sys/arch/x86/include/cpufunc.h: revision 1.19 Adapt prototypes and usage of rdmsr_locked() and wrmsr_locked() to their implementation. Both functions don't take the passcode as argument. As wrmsr_locked() no longer writes the passcode to the msr the erratum 721 on my Opteron 2356 really gets patched and cc1 no longer crashes with SIGSEGV.
|
1.19.2.1 |
| 07-Jan-2017 |
pgoyette | Sync with HEAD. (Note that most of these changes are simply $NetBSD$ tag issues.)
|
1.22.8.1 |
| 06-Mar-2018 |
martin | Pull up the following revisions, requested by maxv in ticket #603:
amd64/conf/kern.ldscript 1.25 (patch) amd64/conf/kern.ldscript.Xen 1.14 (patch) i386/conf/kern.ldscript 1.21 (patch) i386/conf/kern.ldscript.Xen 1.15 (patch) x86/include/cpufunc.h 1.24 (patch) x86/x86/patch.c 1.25 (partial) 1.26 (partial)
Backport x86_hotpatch.
|
1.24.4.2 |
| 13-Apr-2020 |
martin | Mostly merge changes from HEAD upto 20200411
|
1.24.4.1 |
| 10-Jun-2019 |
christos | Sync with HEAD
|
1.34.2.1 |
| 16-Oct-2019 |
martin | Pull up following revision(s) (requested by maxv in ticket #338):
sys/arch/x86/include/cpufunc.h: revision 1.35
Add a memory barrier on wrmsr, because some MSRs control memory access rights (we don't use them though). Also add barriers on fninit and clts for safety.
|
1.37.6.1 |
| 15-Apr-2020 |
bouyer | On amd64, always use the cmpxchg8b version of spllower. All x86_64 hosts should have it and we already rely on it in lock stubs. On i386, always use i686_mutex_spin_exit and cx8_spllower for Xen; Xen doesn't run on CPUs lacking the required instructions anyway. Skip x86_patch only for XENPV, and adjust for changes in assembly functions. Tested on Xen PV and PVHVM, and on bare metal core i5.
|