.. SPDX-License-Identifier: GPL-2.0

==============================
Running nested guests with KVM
==============================

A nested guest is a guest that runs inside another guest (the outer
hypervisor can be KVM-based or a different one). The straightforward
example is a KVM guest that in turn runs on a KVM guest (the rest of
this document is built on this example)::

      .----------------.  .----------------.
      |       L2       |  |       L2       |
      | (Nested Guest) |  | (Nested Guest) |
      '----------------'  '----------------'

      .------------------------------------------------------.
      |                L1 (Guest Hypervisor)                 |
      '------------------------------------------------------'

      .------------------------------------------------------.
      |                 L0 (Host Hypervisor)                 |
      '------------------------------------------------------'

      .------------------------------------------------------.
      |      Hardware (with virtualization extensions)       |
      '------------------------------------------------------'

Terminology:

- L0 – level-0; the bare metal host, running KVM

- L1 – level-1 guest; a VM running on L0; also called the "guest
  hypervisor", as it itself is capable of running KVM.

- L2 – level-2 guest; a VM running on L1, this is the "nested guest"

.. note:: The above diagram is modelled after the x86 architecture;
          s390x, ppc64 and other architectures are likely to have
          a different design for nesting.

          For example, s390x always has an LPAR (LogicalPARtition)
          hypervisor running on bare metal, adding another layer and
          resulting in at least four levels in a nested setup — L0 (bare
          metal, running the LPAR hypervisor), L1 (host hypervisor), L2
          (guest hypervisor), L3 (nested guest).

          This document will stick with the three-level terminology (L0,
          L1, and L2) for all architectures; and will largely focus on
          x86.


Use Cases
---------

There are several scenarios where nested KVM can be useful, to name a
few:

- As a developer, you want to test your software on different operating
  systems (OSes). Instead of renting multiple VMs from a Cloud
  Provider, nested KVM lets you rent a single large enough "guest
  hypervisor" (level-1 guest). This in turn allows you to create
  multiple nested guests (level-2 guests), running different OSes, on
  which you can develop and test your software.

- Live migration of "guest hypervisors" and their nested guests, for
  load balancing, disaster recovery, etc.

- VM image creation tools (e.g. ``virt-install``) often run their own
  VM, and users expect these to work inside a VM.

- Some OSes use virtualization internally for security (e.g. to let
  applications run safely in isolation).

Enabling "nested" (x86)
-----------------------

From Linux kernel v4.20 onwards, the ``nested`` KVM parameter is enabled
by default for Intel and AMD. (Though your Linux distribution might
override this default.)

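To check whether nesting is already enabled before changing anything,
a quick sketch (this reads the Intel and AMD module parameters,
whichever module is loaded; ``Y`` or ``1`` means enabled)::

    $ cat /sys/module/kvm_intel/parameters/nested \
          /sys/module/kvm_amd/parameters/nested 2>/dev/null
    Y
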
In case you are running a Linux kernel older than v4.20, to enable
nesting, set the ``nested`` KVM module parameter to ``Y`` or ``1``. To
persist this setting across reboots, you can add it in a config file, as
shown below:

1. On the bare metal host (L0), list the kernel modules and ensure that
   the KVM modules are loaded::

     $ lsmod | grep -i kvm
     kvm_intel             133627  0
     kvm                   435079  1 kvm_intel

2. Show information for the ``kvm_intel`` module::

     $ modinfo kvm_intel | grep -i nested
     parm:           nested:bool

3. For the nested KVM configuration to persist across reboots, place the
   below in ``/etc/modprobe.d/kvm_intel.conf`` (create the file if it
   doesn't exist)::

     $ cat /etc/modprobe.d/kvm_intel.conf
     options kvm-intel nested=y

4. Unload and re-load the KVM Intel module::

     $ sudo rmmod kvm-intel
     $ sudo modprobe kvm-intel

5. Verify that the ``nested`` parameter for KVM is enabled::

     $ cat /sys/module/kvm_intel/parameters/nested
     Y

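Alternatively, for a one-off test that skips the config file in step 3,
the parameter can be passed directly at module load time (a sketch;
ensure no VMs are running on L0 before unloading the module)::

    $ sudo rmmod kvm-intel
    $ sudo modprobe kvm-intel nested=y
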
For AMD hosts, the process is the same as above, except that the module
name is ``kvm-amd``.

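For instance, the AMD equivalents of the checks above might look like
this (a sketch; the config file name is arbitrary, and the parameter
reads ``1`` rather than ``Y`` on AMD)::

    $ cat /sys/module/kvm_amd/parameters/nested
    1

    $ cat /etc/modprobe.d/kvm_amd.conf
    options kvm-amd nested=1
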
Additional nested-related kernel parameters (x86)
-------------------------------------------------

If your hardware is sufficiently advanced (Intel Haswell processor or
higher, which has newer hardware virt extensions), the following
additional features will also be enabled by default: "Shadow VMCS
(Virtual Machine Control Structure)" and APIC Virtualization on your
bare metal host (L0). Parameters to check on Intel hosts::

    $ cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs
    Y

    $ cat /sys/module/kvm_intel/parameters/enable_apicv
    Y

    $ cat /sys/module/kvm_intel/parameters/ept
    Y

.. note:: If you suspect your L2 (i.e. nested guest) is running slower
          than expected, ensure the above are enabled (particularly
          ``enable_shadow_vmcs`` and ``ept``).

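A compact way to check all three parameters at once (a sketch, for an
Intel host)::

    $ for p in enable_shadow_vmcs enable_apicv ept; do \
          printf '%s: ' "$p"; cat /sys/module/kvm_intel/parameters/$p; done
    enable_shadow_vmcs: Y
    enable_apicv: Y
    ept: Y
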
Starting a nested guest (x86)
-----------------------------

Once your bare metal host (L0) is configured for nesting, you should be
able to start an L1 guest with::

    $ qemu-kvm -cpu host [...]

The above will pass through the host CPU's capabilities as-is to the
guest. Alternatively, for better live migration compatibility, use a
named CPU model supported by QEMU, e.g.::

    $ qemu-kvm -cpu Haswell-noTSX-IBRS,vmx=on

then the guest hypervisor will subsequently be capable of running a
nested guest with accelerated KVM.

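To double-check from inside the L1 guest that the virtualization
extensions made it through, a quick sketch (``vmx`` for Intel, ``svm``
for AMD; the count shown here is hypothetical and reflects the number
of vCPUs, and ``/dev/kvm`` appears once the KVM module is loaded in
L1)::

    $ grep -c -w -E 'vmx|svm' /proc/cpuinfo
    4
    $ ls /dev/kvm
    /dev/kvm
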
Enabling "nested" (s390x)
-------------------------

1. On the host hypervisor (L0), enable the ``nested`` parameter on
   s390x::

     $ rmmod kvm
     $ modprobe kvm nested=1

.. note:: On s390x, the kernel parameter ``hpage`` is mutually exclusive
          with the ``nested`` parameter — i.e. to be able to enable
          ``nested``, the ``hpage`` parameter *must* be disabled.

2. The guest hypervisor (L1) must be provided with the ``sie`` CPU
   feature — with QEMU, this can be done by using "host passthrough"
   (via the command-line ``-cpu host``).

3. Now the KVM module can be loaded in the L1 (guest hypervisor)::

     $ modprobe kvm

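As on x86, the s390x setting can be made persistent across reboots with
a modprobe config file (a sketch; the file name is arbitrary)::

    $ cat /etc/modprobe.d/kvm.conf
    options kvm nested=1
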
Live migration with nested KVM
------------------------------

Migrating an L1 guest, with a *live* nested guest in it, to another
bare metal host works as of Linux kernel 5.3 and QEMU 4.2.0 for
Intel x86 systems, and even on older versions for s390x.

On AMD systems, once an L1 guest has started an L2 guest, the L1 guest
should no longer be migrated or saved (refer to QEMU documentation on
"savevm"/"loadvm") until the L2 guest shuts down. Attempting to migrate
or save-and-load an L1 guest while an L2 guest is running will result in
undefined behavior. You might see a ``kernel BUG!`` entry in ``dmesg``, a
kernel 'oops', or an outright kernel panic. Such a migrated or loaded L1
guest can no longer be considered stable or secure, and must be restarted.
Migrating an L1 guest merely configured to support nesting, while not
actually running L2 guests, is expected to function normally even on AMD
systems but may fail once guests are started.

Migrating an L2 guest is always expected to succeed, so all the following
scenarios should work even on AMD systems:

- Migrating a nested guest (L2) to another L1 guest on the *same* bare
  metal host.

- Migrating a nested guest (L2) to another L1 guest on a *different*
  bare metal host.

- Migrating a nested guest (L2) to a bare metal host.

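For reference, a typical live migration of an L2 guest with libvirt
might look like this (a sketch; the guest name and destination URI are
placeholders)::

    $ virsh migrate --live L2-guest qemu+ssh://destination-host/system
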
Reporting bugs from nested setups
---------------------------------

Debugging "nested" problems can involve sifting through log files across
L0, L1 and L2; this can result in tedious back-and-forth between the bug
reporter and the bug fixer.

- Mention that you are in a "nested" setup. If you are running any kind
  of "nesting" at all, say so. Unfortunately, this needs to be called
  out because when reporting bugs, people tend to forget to even
  *mention* that they're using nested virtualization.

- Ensure you are actually running KVM on KVM. Sometimes people do not
  have KVM enabled for their guest hypervisor (L1), which results in
  them running with pure emulation, or what QEMU calls "TCG", while
  believing they're running nested KVM. This confuses "nested virt"
  (which could also mean QEMU on KVM) with "nested KVM" (KVM on KVM).
  One quick check is shown after this list.

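To verify that L1 really offers KVM acceleration, a sketch (run inside
L1; ``/dev/kvm`` must exist there, and QEMU's monitor command ``info
kvm`` reports whether the running guest uses KVM or fell back to TCG)::

    $ ls /dev/kvm
    /dev/kvm

    (qemu) info kvm
    kvm support: enabled
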
Information to collect (generic)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following is not an exhaustive list, but a very good starting point:

- Kernel, libvirt, and QEMU version from L0

- Kernel, libvirt, and QEMU version from L1

- QEMU command-line of L1 -- when using libvirt, you'll find it here:
  ``/var/log/libvirt/qemu/instance.log``

- QEMU command-line of L2 -- as above, when using libvirt, get the
  complete libvirt-generated QEMU command-line

- ``cat /proc/cpuinfo`` from L0

- ``cat /proc/cpuinfo`` from L1

- Full ``dmesg`` output from L0

- Full ``dmesg`` output from L1

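A sketch of a one-liner to gather the version details on each level
(binary names vary by distribution; e.g. the QEMU binary may be
``qemu-kvm`` or ``qemu-system-x86_64``)::

    $ uname -r; libvirtd --version; qemu-system-x86_64 --version
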
x86-specific info to collect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Both of the commands below, ``x86info`` and ``dmidecode``, should be
available on most Linux distributions under the same names:

- Output of ``x86info -a`` from L0

- Output of ``x86info -a`` from L1

- Output of ``dmidecode`` from L0

- Output of ``dmidecode`` from L1

s390x-specific info to collect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Along with the generic details mentioned earlier, the following is
important:

- ``/proc/sysinfo`` from L1; this will also include the info from L0