| History log of /src/sys/dev/nvmm/nvmm.c |
| Revision | | Date | Author | Comments |
| 1.47 |
| 13-Sep-2022 |
riastradh | nvmm(4): Add suspend/resume support.
New MD nvmm_impl callbacks:
- .suspend_interrupt forces all VMs on all physical CPUs to exit.
- .vcpu_suspend suspends an individual vCPU on a machine.
- .machine_suspend suspends an individual machine.
- .suspend suspends the whole system.
- .resume resumes the whole system.
- .machine_resume resumes an individual machine.
- .vcpu_resume resumes an individual vCPU on a machine.
Suspending nvmm:
1. causes new VM operations (ioctl and close) to block until resumed,
2. uses .suspend_interrupt to interrupt any concurrent operations and force them to return early, and then
3. uses the various suspend callbacks to suspend all vCPUs, machines, and the whole system -- all vCPUs before the machine they're on, and all machines before the system.
Resuming nvmm does the reverse of (3) -- resume system, resume each machine and then the vCPUs on that machine -- and then unblocks operations.
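The ordering in (3) is easiest to see in code. A minimal self-contained sketch, with illustrative names only (not the actual nvmm(4) interfaces):

    #include <stddef.h>

    /* Sketch of the suspend ordering: all vCPUs before the machine
     * they're on, and all machines before the system. */
    struct vcpu    { int suspended; };
    struct machine { struct vcpu vcpus[8]; size_t nvcpus; int suspended; };

    static void suspend_interrupt(void) { /* force VM exits everywhere */ }
    static void system_suspend(void)    { /* e.g. VMXOFF on all CPUs */ }

    static void
    suspend_all(struct machine *machs, size_t nmachs)
    {
            size_t i, j;

            suspend_interrupt();
            for (i = 0; i < nmachs; i++) {
                    for (j = 0; j < machs[i].nvcpus; j++)
                            machs[i].vcpus[j].suspended = 1;  /* vCPUs first */
                    machs[i].suspended = 1;                   /* then machine */
            }
            system_suspend();                                 /* system last */
    }

Resume runs the same loops in the opposite order.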
Implemented only for x86-vmx for now:
- suspend_interrupt triggers a TLB IPI to cause VM exits;
- vcpu_suspend issues VMCLEAR to force any in-CPU state to be written to memory;
- machine_suspend does nothing;
- suspend does VMXOFF on all CPUs;
- resume does VMXON on all CPUs;
- machine_resume does nothing; and
- vcpu_resume just marks each vCPU as valid but inactive so subsequent use will clear it and load it with vmptrld.
x86-svm left as an exercise for the reader.
|
| 1.46 |
| 07-Jul-2022 |
pgoyette | Only detach the cfdriver if we just attached it.
Report the errno in the message when attaching the cdevsw fails.
|
| 1.45 |
| 06-Jul-2022 |
riastradh | nvmm(4): Fix typo in previous.
Didn't trip over this in my test build because nvmm is a module, oops.
|
| 1.44 |
| 06-Jul-2022 |
riastradh | uvm(9): fo_mmap caller guarantees positive size.
No functional change intended, just sprinkling assertions to make it clearer.
|
| 1.43 |
| 12-Apr-2021 |
mrg | be sure to only access vcpu if it was initialised.
|
| 1.42 |
| 26-Mar-2021 |
reinoud | Implement nvmm_vcpu::stop, a race-free exit from nvmm_vcpu_run() without signals. This introduces a new kernel and userland NVMM version indicating this support.
Patch by Kamil Rytarowski <kamil@netbsd.org> and committed on his request.
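A minimal model of why this is race-free, with illustrative names (the real code uses NVMM's own structures and field names): the stop request lives in memory shared with the run loop and is polled on every iteration, so unlike a signal it cannot arrive in a window where it would be missed.

    #include <stdatomic.h>
    #include <stdbool.h>

    struct vcpu_shared {
            atomic_bool stop;   /* illustrative stand-in for nvmm_vcpu::stop */
    };

    /* Emulator thread: request the stop at any time. */
    static void
    request_stop(struct vcpu_shared *sh)
    {
            atomic_store(&sh->stop, true);
    }

    /* vCPU run loop: consume the request; returns true at most once
     * per request, so the next nvmm_vcpu_run() is unaffected. */
    static bool
    stop_requested(struct vcpu_shared *sh)
    {
            return atomic_exchange(&sh->stop, false);
    }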
|
| 1.41 |
| 08-Sep-2020 |
maxv | branches: 1.41.2; 1.41.4; nvmm: cosmetic changes
- Style.
- Explicitly include ioccom.h.
|
| 1.40 |
| 05-Sep-2020 |
riastradh | Round of uvm.h cleanup.
The poorly named uvm.h is generally supposed to be for uvm-internal users only.
- Narrow it to files that actually need it -- mostly files that need to query whether curlwp is the pagedaemon, which should maybe be exposed by an external header.
- Use uvm_extern.h where feasible and uvm_*.h for things not exposed by it. We should split up uvm_extern.h but this will serve for now to reduce the uvm.h dependencies.
- Use uvm_stat.h and #ifdef UVMHIST uvm.h for files that use UVMHIST(ubchist), since ubchist is declared in uvm.h but the reference evaporates if UVMHIST is not defined, so we reduce header file dependencies.
- Make uvm_device.h and uvm_swap.h independently includable while here.
ok chs@
|
| 1.39 |
| 05-Sep-2020 |
maxv | nvmm: update copyright headers
|
| 1.38 |
| 04-Sep-2020 |
maxv | nvmm: more __read_mostly
|
| 1.37 |
| 29-Aug-2020 |
maxv | nvmm: explicitly include atomic.h
|
| 1.36 |
| 26-Aug-2020 |
maxv | nvmm: misc improvements
- use mach->ncpus to get the number of vcpus, now that we have it
- don't forget to decrement mach->ncpus when a machine gets killed
- add more __predict_false()
|
| 1.35 |
| 18-Aug-2020 |
maxv | nvmm: use relaxed atomics to read nmachines
|
| 1.34 |
| 18-Aug-2020 |
maxv | nvmm: localify a variable that doesn't need to be global
|
| 1.33 |
| 01-Aug-2020 |
maxv | Put the few x86-specific structures under #ifdef __x86_64__, for clarity.
|
| 1.32 |
| 03-Jul-2020 |
maxv | Print the backend name when attaching.
|
| 1.31 |
| 25-Jun-2020 |
maxv | Register NVMM as an actual pseudo-device. Without PMF handler, to explicitly disallow ACPI suspend if NVMM is running.
Should fix PR/55406.
|
| 1.30 |
| 24-May-2020 |
maxv | Gather the conditions to return from the VCPU loops in nvmm_return_needed(), and use it in nvmm_do_vcpu_run() as well. This fixes two undesired behaviors:
- When a VM initializes, the many nested page faults that need processing could cause the calling thread to occupy the CPU too much if we're unlucky and are only getting repeated nested page faults thousands of times in a row.
- When the emulator calls nvmm_vcpu_run() and immediately sends a signal to stop the VCPU, it's better to check signals earlier and leave right away, rather than doing a round of VCPU run that could increase the time spent by the emulator waiting for the return.
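An illustrative model of the conditions gathered in nvmm_return_needed(); the real kernel code tests the scheduler's preemption flag and the LWP's pending-signal state rather than these stand-in fields:

    #include <stdbool.h>

    struct host_state {
            bool preempt_wanted;    /* scheduler wants the CPU back */
            bool signal_pending;    /* the emulator thread has a signal */
    };

    /* Checked both before entering the guest and between VM exits. */
    static bool
    return_needed(const struct host_state *hs)
    {
            return hs->preempt_wanted || hs->signal_pending;
    }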
|
| 1.29 |
| 21-May-2020 |
maxv | Complete rev1.26: reset nvmm_impl to NULL in nvmm_fini().
|
| 1.28 |
| 09-May-2020 |
maxv | On Intel CPUs, CPUID leaf 0xB, too, provides topology information, so filter it correctly, to avoid inconsistencies if the host has SMT.
This fixes HaikuOS which fetches SMT information from there and would panic because of the inconsistencies.
|
| 1.27 |
| 30-Apr-2020 |
maxv | When the identification fails, print the reason.
|
| 1.26 |
| 26-Apr-2020 |
maxv | In nvmm_open(), make sure an implementation was found. This fixes an initialization bug triggerable in certain conditions.
If you build nvmm inside the kernel, AND have a cpu that is not supported, AND run nvmmctl (or qemu-nvmm, both being the only binaries in the "nvmm" group), you get a page fault.
This is because when nvmm is built inside the kernel, the kernel registers nvmm_cdevsw behind nvmm's back. The ioctl is therefore always accessible, and will hit NULL pointers if nvmm_init() failed.
Problem reported by Andrei M. on netbsd-users@, thanks.
|
| 1.25 |
| 28-Oct-2019 |
maxv | Add nram in struct nvmm_ctl_mach_info.
|
| 1.24 |
| 27-Oct-2019 |
maxv | Change the way root_owner works: consider the calling process as root_owner not if it has root privileges, but if the /dev/nvmm device was opened with write permissions. Introduce the undocumented nvmm_root_init() function to achieve that.
The goal is to simplify the logic and have more granularity, eg if we want a monitoring agent to access VMs but don't want to give this agent real root access on the system.
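The test reduces to the open mode, not the credentials. A sketch under assumed names (the driver checks the equivalent of these flags at open time):

    #include <fcntl.h>
    #include <stdbool.h>

    /* root_owner iff /dev/nvmm was opened with write permission. */
    static bool
    is_root_owner(int open_flags)
    {
            return (open_flags & O_ACCMODE) != O_RDONLY;
    }

This lets an admin grant a non-root monitoring agent access to all VMs simply by giving it write permission on /dev/nvmm.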
|
| 1.23 |
| 23-Oct-2019 |
maxv | Miscellaneous changes in NVMM, to address several inconsistencies and issues in the libnvmm API.
- Rename NVMM_CAPABILITY_VERSION to NVMM_KERN_VERSION, and check it in libnvmm. Introduce NVMM_USER_VERSION, for future use.
- In libnvmm, open "/dev/nvmm" as read-only and with O_CLOEXEC. This is to avoid sharing the VMs with the children if the process forks. In the NVMM driver, force O_CLOEXEC on open().
- Rename the following things for consistency:
  nvmm_exit* -> nvmm_vcpu_exit*
  nvmm_event* -> nvmm_vcpu_event*
  NVMM_EXIT_* -> NVMM_VCPU_EXIT_*
  NVMM_EVENT_INTERRUPT_HW -> NVMM_VCPU_EVENT_INTR
  NVMM_EVENT_EXCEPTION -> NVMM_VCPU_EVENT_EXCP
  Delete NVMM_EVENT_INTERRUPT_SW, unused already.
- Slightly reorganize the MI/MD definitions, for internal clarity.
- Split NVMM_VCPU_EXIT_MSR in two: NVMM_VCPU_EXIT_{RD,WR}MSR. Also provide separate u.rdmsr and u.wrmsr fields. This is more consistent with the other exit reasons.
- Change the types of several variables:
  event.type: enum -> u_int
  event.vector: uint64_t -> uint8_t
  exit.u.*msr.msr: uint64_t -> uint32_t
  exit.u.io.type: enum -> bool
  exit.u.io.seg: int -> int8_t
  cap.arch.mxcsr_mask: uint64_t -> uint32_t
  cap.arch.conf_cpuid_maxops: uint64_t -> uint32_t
- Delete NVMM_VCPU_EXIT_MWAIT_COND, it is AMD-only and confusing, and we already intercept 'monitor' so it is never armed.
- Introduce vmx_exit_insn() for NVMM-Intel, similar to svm_exit_insn(). The 'npc' field wasn't getting filled properly during certain VMEXITs.
- Introduce nvmm_vcpu_configure(). Similar to nvmm_machine_configure(), but as its name indicates, the configuration is per-VCPU and not per-VM. Migrate and rename NVMM_MACH_CONF_X86_CPUID to NVMM_VCPU_CONF_CPUID. This becomes per-VCPU, which makes more sense than per-VM.
- Extend the NVMM_VCPU_CONF_CPUID conf to allow triggering VMEXITs on specific leaves. Until now we could only mask the leaves. A uint32_t is added in the structure:
    uint32_t mask:1;
    uint32_t exit:1;
    uint32_t rsvd:30;
  The first two bits select the desired behavior on the leaf. Specifying zero on both resets the leaf to the default behavior. The new NVMM_VCPU_EXIT_CPUID exit reason is added.
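A self-contained model of the extended conf entry and its use (field layout as quoted above; the surrounding structure and constant names are illustrative, not the exact libnvmm declarations):

    #include <stdint.h>

    struct cpuid_conf_leaf {
            uint32_t leaf;
            uint32_t mask:1;    /* mask the leaf's output */
            uint32_t exit:1;    /* or: trigger NVMM_VCPU_EXIT_CPUID */
            uint32_t rsvd:30;   /* zero on both bits = default behavior */
    };

    /* Example: request a VMEXIT whenever the guest executes CPUID
     * with leaf 0x40000000. */
    static const struct cpuid_conf_leaf conf = {
            .leaf = 0x40000000,
            .exit = 1,
    };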
|
| 1.22 |
| 06-Jul-2019 |
maxv | branches: 1.22.2; Localify two functions that are no longer used outside. Also return the error from the *_vcpu_run() functions, now that we commit the states in them (which can fail).
|
| 1.21 |
| 11-May-2019 |
maxv | branches: 1.21.2; Rework the machine configuration interface.
Provide three ranges in the conf space: <libnvmm:0-100>, <MI:100-200> and <MD:200-...>. Remove nvmm_callbacks_register(), and replace it by the conf op NVMM_MACH_CONF_CALLBACKS, handled by libnvmm. The callbacks are now per-machine, and the emulators should now do:
- nvmm_callbacks_register(&cbs);
+ nvmm_machine_configure(&mach, NVMM_MACH_CONF_CALLBACKS, &cbs);
This provides more granularity, for example if the process runs two VMs and wants different callbacks for each.
|
| 1.20 |
| 01-May-2019 |
maxv | Use the comm page to inject events, rather than ioctls, and commit them in vcpu_run. This saves a few syscalls and copyins.
For example on Windows 10, moving the mouse from the left to right sides of the screen generates ~500 events, which now don't result in syscalls.
The error handling is done in vcpu_run and it is less precise, but this doesn't matter a lot, and will be solved with future NVMM error codes.
|
| 1.19 |
| 28-Apr-2019 |
maxv | Modify the communication layer between the kernel NVMM driver and libnvmm: introduce a bidirectional "comm page", a page of memory shared between the kernel and userland, and used to transfer data in and out in a more performant manner than ioctls.
The comm page contains the VCPU state, plus three flags:
- "wanted": the states the kernel must get/set when requested via ioctls - "cached": the states that are in the comm page - "commit": the states the kernel must set in vcpu_run
The idea is to avoid performing expensive syscalls, by using the VCPU state cached, either explicitly or speculatively, in the comm page. For example, if the state is cached we do a direct 1->5 with no syscall:
     +---------------------------------------------+
     |                    Qemu                     |
     +---------------------------------------------+
          |                                ^
          | (0) nvmm_vcpu_getstate         | (6) Done
          |                                |
          V                                |
       +---------------------------------------+
       |                libnvmm                |
       +---------------------------------------+
             |   ^         |               ^
   (1) State |   | (2) No  | (3) Ioctl:    | (5) Ok, state
   cached?   |   |         | "please cache |  fetched
             |   |         |  the state"   |
             V   |         |               |
        +-----------+      |               |
        | Comm Page |------+---------------+
        +-----------+      |
              ^            | (4) "Alright
              |            V      babe"
              |     +--------+
              +-----| Kernel |
                    +--------+
The main changes in behavior are:
- nvmm_vcpu_getstate(): won't emit a syscall if the state is already cached in the comm page, will just fetch from the comm page directly
- nvmm_vcpu_setstate(): won't emit a syscall at all, will just cache the wanted state in the comm page
- nvmm_vcpu_run(): will commit the to-be-set state in the comm page, as previously requested by nvmm_vcpu_setstate()
In addition to this, the kernel NVMM driver is changed to speculatively cache certain states known to be of interest, so that future nvmm_vcpu_getstate() calls performed by libnvmm or the emulator will use the comm page rather than expensive syscalls. For example, if an I/O VMEXIT occurs, the I/O Assist in libnvmm will want GPRS+SEGS+CRS+MSRS, and now the kernel caches all of that in the comm page before returning to userland.
Overall, in a normal run of Windows 10, this saves several million syscalls. Eg on a 4CPU Intel with 4 VCPUs, booting the Win10 install ISO goes from taking 1min35 to taking 1min16.
The libnvmm API is not changed, but the ABI is. If we changed the API it would be possible to save expensive memcpys on libnvmm's side. This will be avoided in a future version. The comm page can also be extended to implement future services.
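An illustrative model of the fast path (flag names follow the log; everything else is assumed):

    #include <stdbool.h>
    #include <stdint.h>

    struct comm_page_model {
            uint64_t cached;    /* states present in the comm page */
            uint64_t commit;    /* states to set at next vcpu_run */
    };

    /* getstate: the direct 1->5 path -- no ioctl when every wanted
     * state is already cached. */
    static bool
    getstate_needs_ioctl(const struct comm_page_model *comm, uint64_t flags)
    {
            return (comm->cached & flags) != flags;
    }

    /* setstate: never a syscall -- cache now, the kernel commits
     * later in vcpu_run. */
    static void
    setstate(struct comm_page_model *comm, uint64_t flags)
    {
            comm->cached |= flags;
            comm->commit |= flags;
    }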
|
| 1.18 |
| 27-Apr-2019 |
maxv | Mmh, fix nvmm_vcpu_create(), the cpuid is given, and must not be chosen from the free map. Looks like I forgot this after all my design rounds. While here reorder the initialization.
|
| 1.17 |
| 10-Apr-2019 |
maxv | Add the NVMM_CTL ioctl, always privileged regardless of the permissions of /dev/nvmm. We'll use it to provide a way for an admin to control the registered VMs in the kernel.
Add an associated wrapper in libnvmm.
|
| 1.16 |
| 08-Apr-2019 |
maxv | Switch to MODULE_CLASS_MISC, from pgoyette@.
|
| 1.15 |
| 08-Apr-2019 |
maxv | Don't forget to call (*machine_destroy) when killing VMs.
|
| 1.14 |
| 08-Apr-2019 |
maxv | Use the fd_clone approach, to avoid losing references to the registered VMs during fork(). We attach an nvmm_owner struct to the fd, reference it in each VM, and identify the process' VMs by just comparing the pointer.
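A sketch of the idea (names assumed): ownership is a pointer identity, allocated once per open of /dev/nvmm and referenced by each VM created through that fd, so fork() cannot confuse it the way a per-process check would.

    #include <stdbool.h>

    struct owner { int unused; };   /* identity token; contents irrelevant */

    static bool
    owns_machine(const struct owner *mach_owner, const struct owner *fd_owner)
    {
            return mach_owner == fd_owner;
    }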
|
| 1.13 |
| 07-Apr-2019 |
maxv | Don't allow unloading when there are still VMs registered, and don't allow auto-unloading at all. Not a big problem actually, because since I changed the module class it's not auto-loadable anymore.
|
| 1.12 |
| 28-Mar-2019 |
maxv | Move NVMM in the "any" class, so that it can be enabled in GENERIC. Add missing files in files.nvmm, and add NVMM (commented out) in the amd64 GENERIC. Remove the "caveats" section in the man page.
|
| 1.11 |
| 21-Mar-2019 |
maxv | Make it possible for an emulator to set the protection of the guest pages. For some reason I had initially concluded that it wasn't doable; verily it is, so let's do it.
The reserved 'flags' argument of nvmm_gpa_map() becomes 'prot' and takes mmap-like protection codes.
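A hypothetical call after this change; the prototype shown follows present-day libnvmm and may differ from the signature of that era:

    #include <sys/mman.h>
    #include <stdint.h>
    #include <nvmm.h>

    /* Map guest RAM with mmap-like protection codes. */
    static int
    map_guest_ram(struct nvmm_machine *mach, uintptr_t hva, gpaddr_t gpa,
        size_t size)
    {
            return nvmm_gpa_map(mach, hva, gpa, size,
                PROT_READ | PROT_WRITE | PROT_EXEC);
    }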
|
| 1.10 |
| 14-Mar-2019 |
maxv | Fail early if we're beyond the guest max ram.
|
| 1.9 |
| 07-Mar-2019 |
maxv | Rename the internal NVMM HVA table entries from "segment" to "hmapping", less confusing. Also fix the error handling in nvmm_hva_unmap().
|
| 1.8 |
| 18-Feb-2019 |
maxv | Ah, finally found you. Fix scheduling bug in NVMM.
When processing guest page faults, we were calling uvm_fault with preemption disabled. The thing is, uvm_fault may block, and if it does, we land in sleepq_block which calls mi_switch; so we get switched away while we explicitly asked not to be. From then on things could go really wrong.
Fix that by processing such faults in MI, where we have preemption enabled and are allowed to block.
A KASSERT in sleepq_block (or before) would have helped.
|
| 1.7 |
| 13-Feb-2019 |
maxv | Add Intel-VMX support in NVMM. This allows us to run hardware-accelerated VMs on Intel CPUs. Overall this implementation is fast and reliable, I am able to run NetBSD VMs with many VCPUs on a quad-core Intel i5.
NVMM-Intel applies several optimizations already present in NVMM-AMD, and has a code structure similar to it. No change was needed in the NVMM MI frontend, or in libnvmm.
Some differences exist against AMD:
- On Intel the ASID space is big, so we don't fall back to a shared ASID when there are more VCPUs executing than available ASIDs in the host, contrary to AMD. There are enough ASIDs for the maximum number of VCPUs supported by NVMM.
- On Intel there are two TLBs we need to take care of, one for the host (EPT) and one for the guest (VPID). Changes in EPT paging flush the host TLB, changes to the guest mode flush the guest TLB.
- On Intel there is no easy way to set/fetch the VTPR, so we intercept reads/writes to CR8 and maintain a software TPR, that we give to the virtualizer as if it was the effective TPR in the guest.
- On Intel, because of SVS, the host CR4 and LSTAR are not static, so we're forced to save them on each VMENTRY.
- There is extra Intel weirdness we need to take care of, for example the reserved bits in CR0 and CR4 when accesses trap.
While this implementation is functional and can already run many OSes, we likely have a problem on 32bit-PAE guests, because they require special care on Intel CPUs, and currently we don't handle that correctly; such guests may misbehave for now (without altering the host stability). I expect to fix that soon.
|
| 1.6 |
| 26-Jan-2019 |
maxv | Optimize: keep a per-VCPU buffer for the state, and copy in and out directly on it. The VCPUs are protected by mutexes, so nothing to worry about.
This saves two kmem_allocs in {get,set}state.
|
| 1.5 |
| 06-Jan-2019 |
maxv | Improvements and fixes in NVMM.
Kernel driver:
* Don't take an extra (unneeded) reference to the UAO.
* Provide npc for HLT. I'm not really happy with it right now, will likely be revisited.
* Add the INT_SHADOW, INT_WINDOW_EXIT and NMI_WINDOW_EXIT states. Provide them in the exitstate too.
* Don't take the TPR into account when processing INTs. The virtualizer can do that itself (Qemu already does).
* Provide a hypervisor signature in CPUID, and hide SVM.
* Ignore certain MSRs. One special case is MSR_NB_CFG in which we set NB_CFG_INITAPICCPUIDLO. Allow reads of MSR_TSC.
* If the LWP has pending signals or softints, leave, rather than waiting for a rescheduling to happen later. This reduces interrupt processing time in the guest (Qemu sends a signal to the thread, and now we leave right away). This could be improved even more by sending an actual IPI to the CPU, but I'll see later.
Libnvmm:
* Fix the MMU translation of large pages, we need to add the lower bits too.
* Change the IO and Mem structures to take a pointer rather than a static array. This provides more flexibility.
* Batch together the str+rep IO transactions. We do one big memory read/write, and then send the IO commands to the hypervisor all at once. This considerably increases performance.
* Decode MOVZX.
With these changes in place, Qemu+NVMM works. I can install NetBSD 8.0 in a VM with multiple VCPUs, connect to the network, etc.
|
| 1.4 |
| 15-Dec-2018 |
maxv | Invert the mapping logic.
Until now, the "owner" of the memory was the guest, and by calling nvmm_gpa_map(), the virtualizer was creating a view towards the guest memory.
Qemu expects the contrary: it wants the owner to be the virtualizer, and nvmm_gpa_map should just create a view from the guest towards the virtualizer's address space. Under this scheme, it is legal to have two GPAs that point to the same HVA.
Introduce nvmm_hva_map() and nvmm_hva_unmap(), which map/unmap the HVA into a dedicated UOBJ. Change nvmm_gpa_map() and nvmm_gpa_unmap() to just perform an enter into the desired UOBJ.
With this change in place, all the mapping-related problems in Qemu+NVMM are fixed.
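A sketch of the two-level scheme (modern libnvmm prototypes; the 2018 signatures may have differed): back a range once with nvmm_hva_map(), then enter it into the guest as many times as desired -- two GPAs aliasing one HVA is legal under this scheme.

    #include <sys/mman.h>
    #include <stdint.h>
    #include <nvmm.h>

    static int
    alias_example(struct nvmm_machine *mach, uintptr_t hva, size_t size)
    {
            if (nvmm_hva_map(mach, hva, size) == -1)
                    return -1;
            /* The same HVA, visible at two guest physical addresses. */
            if (nvmm_gpa_map(mach, hva, 0x1000, size,
                PROT_READ | PROT_WRITE) == -1)
                    return -1;
            return nvmm_gpa_map(mach, hva, 0x100000, size,
                PROT_READ | PROT_WRITE);
    }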
|
| 1.3 |
| 25-Nov-2018 |
maxv | branches: 1.3.2; Appease the check: allow NVMM_MAX_RAM bytes of memory, and not just NVMM_MAX_RAM-1.
|
| 1.2 |
| 18-Nov-2018 |
maxv | Ah, should be UVM_ADV_RANDOM.
|
| 1.1 |
| 07-Nov-2018 |
maxv | Add NVMM - the NetBSD Virtual Machine Monitor - a kernel driver that provides support for hardware-accelerated virtualization on NetBSD.
It is made of an MI frontend, to which MD backends can be plugged. One MD backend is implemented, x86-SVM, for x86 AMD CPUs.
We install
/usr/include/dev/nvmm/nvmm.h
/usr/include/dev/nvmm/nvmm_ioctl.h
/usr/include/dev/nvmm/{arch}/nvmm_{arch}.h
And the kernel module. For now, the only architecture where we do that is amd64 (arch=x86).
NVMM is not enabled by default in amd64-GENERIC, but is instead easily modloadable.
Sent to tech-kern@ a month ago. Validated with kASan, and optimized with tprof.
|
| 1.3.2.5 |
| 26-Jan-2019 |
pgoyette | Sync with HEAD
|
| 1.3.2.4 |
| 18-Jan-2019 |
pgoyette | Synch with HEAD
|
| 1.3.2.3 |
| 26-Dec-2018 |
pgoyette | Sync with HEAD, resolve a few conflicts
|
| 1.3.2.2 |
| 26-Nov-2018 |
pgoyette | Sync with HEAD, resolve a couple of conflicts
|
| 1.3.2.1 |
| 25-Nov-2018 |
pgoyette | file nvmm.c was added on branch pgoyette-compat on 2018-11-26 01:52:31 +0000
|
| 1.21.2.3 |
| 13-Apr-2020 |
martin | Mostly merge changes from HEAD up to 20200411
|
| 1.21.2.2 |
| 10-Jun-2019 |
christos | Sync with HEAD
|
| 1.21.2.1 |
| 11-May-2019 |
christos | file nvmm.c was added on branch phil-wifi on 2019-06-10 22:07:14 +0000
|
| 1.22.2.7 |
| 29-Aug-2020 |
martin | Pull up following revision(s) (requested by maxv in ticket #1068):
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.71
sys/dev/nvmm/nvmm.c: revision 1.34
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.72
sys/dev/nvmm/nvmm.c: revision 1.35
sys/dev/nvmm/nvmm.c: revision 1.36
sys/dev/nvmm/x86/nvmm_x86_svmfunc.S: revision 1.5
sys/dev/nvmm/nvmm.c: revision 1.37
sys/dev/nvmm/x86/nvmm_x86_vmxfunc.S: revision 1.5
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.70
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.68
sys/dev/nvmm/x86/nvmm_x86.c: revision 1.15
sys/dev/nvmm/nvmm_ioctl.h: revision 1.10
Micro-optimize: use pushq instead of pushw. To avoid LCP stalls and unaligned stack accesses.
nvmm-x86: also flush the guest TLB when CR4.{PCIDE,SMEP} changes
nvmm: localify a variable that doesn't need to be global
nvmm: use relaxed atomics to read nmachines
nvmm-x86-svm: dedup code
nvmm-x86: hide more CPUID flags, mostly related to perf monitors
nvmm: misc improvements
- use mach->ncpus to get the number of vcpus, now that we have it
- don't forget to decrement mach->ncpus when a machine gets killed
- add more __predict_false()
nvmm-x86-svm: don't forget to intercept INVD
INVD executed in the guest can be dangerous for the host, due to CPU caches being flushed without write-back.
nvmm: slightly clarify
nvmm: explicitly include atomic.h
|
| 1.22.2.6 |
| 18-Aug-2020 |
martin | Pull up following revision(s) (requested by maxv in ticket #1055):
sys/dev/nvmm/nvmm.h: revision 1.13
sys/dev/nvmm/nvmm.h: revision 1.14
sys/dev/nvmm/nvmm.c: revision 1.33
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.67
sys/dev/nvmm/nvmm_internal.h: revision 1.17
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.67
sys/dev/nvmm/x86/nvmm_x86.c: revision 1.10
Put the few x86-specific structures under #ifdef __x86_64__, for clarity.
Make it easier to understand what's going on, no functional change.
Add new field definitions.
Add new field definitions, and intercept everything, for future-proofness.
Add CTASSERT.
|
| 1.22.2.5 |
| 02-Aug-2020 |
martin | Pull up following revision(s) (requested by maxv in ticket #1032):
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.60 (patch)
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.61 (patch)
sys/dev/nvmm/nvmm.c: revision 1.30
sys/dev/nvmm/nvmm.c: revision 1.31
sys/dev/nvmm/nvmm.c: revision 1.32
sys/dev/nvmm/nvmm_internal.h: revision 1.15
sys/dev/nvmm/nvmm_internal.h: revision 1.16
sys/dev/nvmm/files.nvmm: revision 1.3
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.62 (patch)
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.63 (patch)
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.59 (patch)
sys/modules/nvmm/nvmm.ioconf: revision 1.2
Gather the conditions to return from the VCPU loops in nvmm_return_needed(), and use it in nvmm_do_vcpu_run() as well. This fixes two undesired behaviors:
- When a VM initializes, the many nested page faults that need processing could cause the calling thread to occupy the CPU too much if we're unlucky and are only getting repeated nested page faults thousands of times in a row.
- When the emulator calls nvmm_vcpu_run() and immediately sends a signal to stop the VCPU, it's better to check signals earlier and leave right away, rather than doing a round of VCPU run that could increase the time spent by the emulator waiting for the return.
style
Register NVMM as an actual pseudo-device. Without PMF handler, to explicitly disallow ACPI suspend if NVMM is running.
Should fix PR/55406.
Print the backend name when attaching.
|
| 1.22.2.4 |
| 21-May-2020 |
martin | Pull up following revision(s) (requested by maxv in ticket #919):
sys/dev/nvmm/x86/nvmm_x86.c: revision 1.9
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.60
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.61
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.56
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.57
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.58
sys/dev/nvmm/nvmm.c: revision 1.29
Improve the CPUID emulation of basic leaves:
- Hide DCA and PQM, they cannot be used in guests.
- On Intel, explicitly handle each basic leaf until 0x16.
- On AMD, explicitly handle each basic leaf until 0x0D.
Respect the convention for the hypervisor information: return the highest hypervisor leaf in 0x40000000.EAX.
Improve the CPUID emulation on nvmm-intel: limit the highest basic and hypervisor leaves.
Complete rev1.26: reset nvmm_impl to NULL in nvmm_fini().
|
| 1.22.2.3 |
| 13-May-2020 |
martin | Pull up following revision(s) (requested by maxv in ticket #898):
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.59
sys/dev/nvmm/nvmm_internal.h: revision 1.14
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.53
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.54
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.55
sys/dev/nvmm/nvmm.c: revision 1.27
sys/dev/nvmm/nvmm.c: revision 1.28
When the identification fails, print the reason.
If we were processing a software int/excp, and got a VMEXIT in the middle, we must also reflect the instruction length, otherwise the next VMENTER fails and Qemu shuts the guest down.
On Intel CPUs, CPUID leaf 0xB, too, provides topology information, so filter it correctly, to avoid inconsistencies if the host has SMT.
This fixes HaikuOS which fetches SMT information from there and would panic because of the inconsistencies.
|
| 1.22.2.2 |
| 27-Apr-2020 |
martin | Pull up following revision(s) (requested by maxv in ticket #857):
sys/dev/nvmm/nvmm.c: revision 1.26
In nvmm_open(), make sure an implementation was found. This fixes an initialization bug triggerable in certain conditions.
If you build nvmm inside the kernel, AND have a cpu that is not supported, AND run nvmmctl (or qemu-nvmm, both being the only binaries in the "nvmm" group), you get a page fault.
This is because when nvmm is built inside the kernel, the kernel registers nvmm_cdevsw behind nvmm's back. The ioctl is therefore always accessible, and will hit NULL pointers if nvmm_init() failed.
Problem reported by Andrei M. on netbsd-users@, thanks.
|
| 1.22.2.1 |
| 10-Nov-2019 |
martin | Pull up following revision(s) (requested by maxv in ticket #405):
usr.sbin/nvmmctl/nvmmctl.8: revision 1.2
lib/libnvmm/libnvmm.3: revision 1.24
sys/dev/nvmm/nvmm.h: revision 1.11
lib/libnvmm/libnvmm.3: revision 1.25
sys/dev/nvmm/x86/nvmm_x86.h: revision 1.16
sys/dev/nvmm/nvmm.h: revision 1.12
sys/dev/nvmm/x86/nvmm_x86.h: revision 1.17
tests/lib/libnvmm/h_mem_assist.c: revision 1.12
sys/dev/nvmm/x86/nvmm_x86.h: revision 1.18
share/mk/bsd.hostprog.mk: revision 1.82
lib/libnvmm/libnvmm.c: revision 1.15
distrib/sets/lists/base/md.amd64: revision 1.281
tests/lib/libnvmm/h_mem_assist.c: revision 1.13
lib/libnvmm/libnvmm.c: revision 1.16
tests/lib/libnvmm/h_mem_assist.c: revision 1.14
lib/libnvmm/libnvmm_x86.c: revision 1.32
lib/libnvmm/libnvmm.c: revision 1.17
tests/lib/libnvmm/h_mem_assist.c: revision 1.15
lib/libnvmm/libnvmm_x86.c: revision 1.33
lib/libnvmm/libnvmm.c: revision 1.18
usr.sbin/nvmmctl/Makefile: revision 1.1
tests/lib/libnvmm/h_mem_assist_asm.S: revision 1.7
tests/lib/libnvmm/h_mem_assist.c: revision 1.16
lib/libnvmm/libnvmm_x86.c: revision 1.34
usr.sbin/nvmmctl/Makefile: revision 1.2
tests/lib/libnvmm/h_mem_assist_asm.S: revision 1.8
tests/lib/libnvmm/h_mem_assist.c: revision 1.17
sys/dev/nvmm/nvmm_internal.h: revision 1.13
lib/libnvmm/libnvmm_x86.c: revision 1.35
lib/libnvmm/libnvmm_x86.c: revision 1.36
usr.sbin/postinstall/postinstall.in: revision 1.8
lib/libnvmm/libnvmm_x86.c: revision 1.37
lib/libnvmm/libnvmm_x86.c: revision 1.38
lib/libnvmm/libnvmm_x86.c: revision 1.39
usr.sbin/Makefile: revision 1.282
lib/libnvmm/nvmm.h: revision 1.13
lib/libnvmm/nvmm.h: revision 1.14
lib/libnvmm/nvmm.h: revision 1.15
sys/dev/nvmm/nvmm.c: revision 1.23
lib/libnvmm/nvmm.h: revision 1.16
sys/dev/nvmm/nvmm.c: revision 1.24
lib/libnvmm/nvmm.h: revision 1.17
sys/dev/nvmm/nvmm.c: revision 1.25
tests/lib/libnvmm/h_io_assist.c: revision 1.9
etc/MAKEDEV.tmpl: revision 1.209
tests/lib/libnvmm/h_io_assist.c: revision 1.10
tests/lib/libnvmm/h_io_assist.c: revision 1.11
etc/group: revision 1.35
distrib/sets/lists/man/mi: revision 1.1660
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.40
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.41
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.42
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.43
sys/dev/nvmm/x86/nvmm_x86_vmx.c: revision 1.44
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.51
sys/dev/nvmm/nvmm_ioctl.h: revision 1.8
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.52
sys/dev/nvmm/nvmm_ioctl.h: revision 1.9
sys/dev/nvmm/x86/nvmm_x86_svm.c: revision 1.53
usr.sbin/nvmmctl/nvmmctl.c: revision 1.1
lib/libnvmm/libnvmm.3: revision 1.20
distrib/sets/lists/debug/md.amd64: revision 1.106
lib/libnvmm/libnvmm.3: revision 1.21
lib/libnvmm/libnvmm.3: revision 1.22
usr.sbin/nvmmctl/nvmmctl.8: revision 1.1
lib/libnvmm/libnvmm.3: revision 1.23
Fix incorrect parsing: the R/M field uses a special GPR map when the address size is 16 bits, regardless of the actual operating mode. With this special map there can be two registers referenced at once, and also disp16-only. Implement this special behavior, and add associated tests. While here simplify a few things. With this in place, the Windows 95 installer initializes correctly. Part of PR/54611.

add missing initializer

Implement XCHG, add associated tests, and add comments to explain. With this in place the Windows 95 installer completes successfully. Part of PR/54611.

Improve nvmm_vcpu_dump().

Put back 'default', because llvm apparently doesn't realize that all cases are covered in the switch.

Miscellaneous changes in NVMM, to address several inconsistencies and issues in the libnvmm API.
- Rename NVMM_CAPABILITY_VERSION to NVMM_KERN_VERSION, and check it in libnvmm. Introduce NVMM_USER_VERSION, for future use.
- In libnvmm, open "/dev/nvmm" as read-only and with O_CLOEXEC. This is to avoid sharing the VMs with the children if the process forks. In the NVMM driver, force O_CLOEXEC on open().
- Rename the following things for consistency:
  nvmm_exit* -> nvmm_vcpu_exit*
  nvmm_event* -> nvmm_vcpu_event*
  NVMM_EXIT_* -> NVMM_VCPU_EXIT_*
  NVMM_EVENT_INTERRUPT_HW -> NVMM_VCPU_EVENT_INTR
  NVMM_EVENT_EXCEPTION -> NVMM_VCPU_EVENT_EXCP
  Delete NVMM_EVENT_INTERRUPT_SW, unused already.
- Slightly reorganize the MI/MD definitions, for internal clarity.
- Split NVMM_VCPU_EXIT_MSR in two: NVMM_VCPU_EXIT_{RD,WR}MSR. Also provide separate u.rdmsr and u.wrmsr fields. This is more consistent with the other exit reasons.
- Change the types of several variables:
  event.type: enum -> u_int
  event.vector: uint64_t -> uint8_t
  exit.u.*msr.msr: uint64_t -> uint32_t
  exit.u.io.type: enum -> bool
  exit.u.io.seg: int -> int8_t
  cap.arch.mxcsr_mask: uint64_t -> uint32_t
  cap.arch.conf_cpuid_maxops: uint64_t -> uint32_t
- Delete NVMM_VCPU_EXIT_MWAIT_COND, it is AMD-only and confusing, and we already intercept 'monitor' so it is never armed.
- Introduce vmx_exit_insn() for NVMM-Intel, similar to svm_exit_insn(). The 'npc' field wasn't getting filled properly during certain VMEXITs.
- Introduce nvmm_vcpu_configure(). Similar to nvmm_machine_configure(), but as its name indicates, the configuration is per-VCPU and not per-VM. Migrate and rename NVMM_MACH_CONF_X86_CPUID to NVMM_VCPU_CONF_CPUID. This becomes per-VCPU, which makes more sense than per-VM.
- Extend the NVMM_VCPU_CONF_CPUID conf to allow triggering VMEXITs on specific leaves. Until now we could only mask the leaves. A uint32_t is added in the structure:
    uint32_t mask:1;
    uint32_t exit:1;
    uint32_t rsvd:30;
  The first two bits select the desired behavior on the leaf. Specifying zero on both resets the leaf to the default behavior. The new NVMM_VCPU_EXIT_CPUID exit reason is added.

Three changes in libnvmm:
- Add 'mach' and 'vcpu' backpointers in the nvmm_io and nvmm_mem structures.
- Rename 'nvmm_callbacks' to 'nvmm_assist_callbacks'.
- Rename and migrate NVMM_MACH_CONF_CALLBACKS to NVMM_VCPU_CONF_CALLBACKS, it now becomes per-VCPU.

Update the libnvmm man page:
- Sync the naming with reality.
- Replace "relevant" by "desired" and "virtualizer" by "emulator", closer to what I meant.
- Add a "VCPU Configuration" section.
- Add a "Machine Ownership" section.

Add the "nvmm" group, and make nvmm_init() public. Sent to tech-kern@ a few days ago.

Use the new PTE naming, and define CR3_FRAME_* separately. No functional change.
Add a new VCPU conf option, that allows userland to request VMEXITs after a TPR change. This is supported on all Intel CPUs, and not-too-old AMD CPUs. The reason for wanting this option is that certain OSes (like Win10 64bit) manage interrupt priority in hardware via CR8 directly, and for these OSes, the emulator may want to sync its internal TPR state on each change. Add two new fields in cap.arch, to report the conf capabilities. Report TPR only on Intel for now, not AMD, because I don't have a recent AMD CPU on which to test.

Mask CPUID leaf 0x0A on Intel, because we don't want the guest to try (and fail) to probe the PMC MSRs. This avoids "Unexpected WRMSR" warnings in qemu-nvmm.

Add PCID support in the guests. This speeds up most 64bit guests, because since Meltdown, everybody uses PCID (including NetBSD).

Change the way root_owner works: consider the calling process as root_owner not if it has root privileges, but if the /dev/nvmm device was opened with write permissions. Introduce the undocumented nvmm_root_init() function to achieve that. The goal is to simplify the logic and have more granularity, eg if we want a monitoring agent to access VMs but don't want to give this agent real root access on the system.

A few changes:
- Use smaller types in struct nvmm_capability.
- Use smaller type for nvmm_io.port.
- Switch exitstate to a compacted structure.

Add nram in struct nvmm_ctl_mach_info.

Add nvmmctl, with two commands for now.

Macro tidiness.

Sort SEE ALSO.

should be fork(2), noticed by wiz

Add debug entry for newly introduced nvmmctl utility.

Annotate a covering switch as such to avoid warnings about missing returns.

Forgot to put nvmmctl in the "nvmm" group.

Add nvmm group.
|
| 1.41.4.2 |
| 17-Apr-2021 |
thorpej | Sync with HEAD.
|
| 1.41.4.1 |
| 03-Apr-2021 |
thorpej | Sync with HEAD.
|
| 1.41.2.1 |
| 03-Apr-2021 |
thorpej | Sync with HEAD.
|