
Simple PCI(e) Passthrough Setup Guide

What is PCI(e) Passthrough?

PCI(e) passthrough lets you give a virtual machine (VM) direct control over a physical device like a graphics card or network card. The VM gets native performance, but the host can't use that device anymore.

Before You Start - Check These Requirements

1. Hardware Check

  • CPU: Must support IOMMU (Intel VT-d or AMD-Vi)
  • Motherboard: Must support IOMMU
  • Device: The PCI device you want to pass through

2. Quick Compatibility Test

Run this command to see if IOMMU is working:

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

You should see something like "DMAR: IOMMU enabled" or "AMD-Vi: Interrupt remapping enabled".
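If that command prints nothing, it can also help to confirm that the CPU's virtualization extensions are present at all; a quick, distribution-agnostic check (a count of 0 means they are missing or disabled in the BIOS/UEFI):

grep -E -c '(vmx|svm)' /proc/cpuinfo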

Step-by-Step Setup

Step 1: Enable IOMMU in BIOS

  1. Reboot and enter BIOS/UEFI settings
  2. Look for and enable:
    • Intel: "VT-d" or "Intel Virtualization Technology for Directed I/O"
    • AMD: "AMD-Vi" or "IOMMU" (often enabled by default)
  3. Save and exit

Step 2: Enable IOMMU in Linux (Intel only)

If you have an Intel CPU and an older kernel (before 6.8), add this to your kernel command line:

intel_iommu=on

Optional performance boost: Add this for both Intel and AMD:

iommu=pt
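How you edit the kernel command line depends on your bootloader. A minimal sketch for a GRUB-based Debian/Proxmox install (adjust to your own setup; systemd-boot installs use /etc/kernel/cmdline instead):

# /etc/default/grub -- append the parameters to whatever is already in this variable
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# regenerate the bootloader config, then reboot
update-grub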

Step 3: Load Required Kernel Modules

Add these lines to /etc/modules:

vfio
vfio_iommu_type1
vfio_pci

Then update initramfs:

update-initramfs -u -k all

Step 4: Find Your Device

List all PCI devices:

lspci -nn

Look for your device and note its ID (like 01:00.0) and vendor:device codes (like [10de:1d01]).
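For example, on a host with an NVIDIA GT 1030 the relevant line might look like this (illustrative; your address and IDs will differ):

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP108 [GeForce GT 1030] [10de:1d01] (rev a1)

Here 01:00.0 is the device ID and 10de:1d01 is the vendor:device code used in the next step.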

Step 5: Reserve Device for Passthrough

Create /etc/modprobe.d/vfio.conf and add:

options vfio-pci ids=VENDOR:DEVICE

Replace VENDOR:DEVICE with your actual codes (e.g., 10de:1d01 for an NVIDIA GPU).
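Most GPUs also expose an HDMI audio function that should be reserved together with the graphics function. Multiple IDs can be listed, separated by commas; a sketch with illustrative IDs (take the audio function's ID from your own lspci -nn output):

options vfio-pci ids=10de:1d01,10de:0fb8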

Alternative method - Blacklist the host driver:

# For NVIDIA GPUs
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf

# For AMD GPUs  
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf

# For Intel integrated graphics
echo "blacklist i915" >> /etc/modprobe.d/blacklist.conf

Step 6: Update and Reboot

update-initramfs -u -k all
reboot

Step 7: Verify Setup

Check that your device is ready:

lspci -nnk

Look for your device - it should show Kernel driver in use: vfio-pci or no driver listed.
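For example, restricting the output to a single device (illustrative address and output):

lspci -nnk -s 01:00.0
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP108 [GeForce GT 1030] [10de:1d01] (rev a1)
        Kernel driver in use: vfio-pci
        Kernel modules: nouveau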

Step 8: Add Device to VM

Via Web Interface:

  1. Go to your VM's Hardware tab
  2. Click "Add" → "PCI Device"
  3. Select your device
  4. For GPUs, check "Primary GPU" and "PCIe" options

Via Command Line:

qm set VMID -hostpci0 01:00.0,pcie=on,x-vga=on
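This command just writes a hostpci line into the VM configuration file (on Proxmox VE typically /etc/pve/qemu-server/VMID.conf), so the result is equivalent to adding:

hostpci0: 01:00.0,pcie=on,x-vga=on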

GPU-Specific Tips

For Best GPU Compatibility:

  • Use q35 machine type
  • Use OVMF (UEFI) instead of SeaBIOS if your GPU supports it
  • Enable PCIe mode instead of PCI

Common GPU Issues:

Black screen in VM console: This is normal! Connect a monitor directly to the GPU or use remote desktop software inside the VM.

Error 43 (NVIDIA): Try setting this KVM module option (then update the initramfs and reboot):

echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

Audio crackling: Enable MSI in the guest:

echo "options snd-hda-intel enable_msi=1" >> /etc/modprobe.d/snd-hda-intel.conf

Troubleshooting

Device Not Available

  • Check IOMMU groups: pvesh get /nodes/NODENAME/hardware/pci --pci-class-blacklist ""
  • Try moving the card to a different PCI slot
  • Ensure the device isn't being used by the host

System Won't Boot

If you have issues after changes:

  1. Boot from recovery
  2. Remove the problematic config files
  3. Run update-initramfs -u -k all
  4. Reboot

Still Not Working?

  • Update your motherboard BIOS to the latest version
  • Check if your specific hardware combination is supported
  • Consider trying the ACS override patch (advanced users only)

Quick Reference Commands

Task                   Command
Check IOMMU status     dmesg | grep -e DMAR -e IOMMU
List PCI devices       lspci -nn
Check device driver    lspci -nnk
Update initramfs       update-initramfs -u -k all
Add device to VM       qm set VMID -hostpci0 01:00.0

Remember: After any module or kernel parameter changes, always run update-initramfs -u -k all and reboot!


PCI(e) Passthrough

PCI(e) passthrough is a mechanism to give a virtual machine control over a PCI device from the host. This can have some advantages over using virtualized hardware, for example lower latency, higher performance, or more features (e.g., offloading).

But, if you pass through a device to a virtual machine, you cannot use that device anymore on the host or in any other VM.

Note that, while PCI passthrough is available for i440fx and q35 machines, PCIe passthrough is only available on q35 machines. This does not mean that PCIe capable devices that are passed through as PCI devices will only run at PCI speeds. Passing through devices as PCIe just sets a flag for the guest to tell it that the device is a PCIe device instead of a "really fast legacy PCI device". Some guest applications benefit from this.

General Requirements

Since passthrough is performed on real hardware, the host needs to fulfill some requirements. A brief overview of these requirements is given below; for more information on specific devices, see PCI Passthrough Examples.

Hardware

Your hardware needs to support IOMMU (I/O Memory Management Unit) interrupt remapping; this includes both the CPU and the motherboard.

Generally, Intel systems with VT-d and AMD systems with AMD-Vi support this. But it is not guaranteed that everything will work out of the box, due to bad hardware implementations and missing or low-quality drivers.

Further, server-grade hardware often has better support than consumer-grade hardware, but even then, many modern consumer systems support this as well.

Please refer to your hardware vendor to check whether they support this feature under Linux for your specific setup.

Determining PCI Card Address

The easiest way is to use the GUI to add a device of type "Host PCI" in the VM's hardware tab. Alternatively, you can use the command line.

You can locate your card using:

lspci

Configuration

Once you have ensured that your hardware supports passthrough, you will need to do some configuration to enable PCI(e) passthrough.

IOMMU

You will have to enable IOMMU support in your BIOS/UEFI. Usually the corresponding setting is called IOMMU or VT-d, but you should find the exact option name in the manual of your motherboard.

With AMD CPUs IOMMU is enabled by default. With recent kernels (6.8 or newer), this is also true for Intel CPUs. On older kernels, it is necessary to enable it on Intel CPUs via the kernel command line by adding:

intel_iommu=on

IOMMU Passthrough Mode

If your hardware supports IOMMU passthrough mode, enabling this mode might increase performance. This is because VMs then bypass the (default) DMA translation normally performed by the hypervisor and instead pass DMA requests directly to the hardware IOMMU. To enable passthrough mode, add:

iommu=pt

to the kernel command line.

Kernel Modules

You have to make sure the following modules are loaded. This can be achieved by adding them to /etc/modules.

Note (mediated devices passthrough): If passing through mediated devices (e.g. vGPUs), the following is not needed. In these cases, the device will be owned by the appropriate host driver directly.

vfio
vfio_iommu_type1
vfio_pci

After changing anything module-related, you need to refresh your initramfs. On Proxmox VE this can be done by executing:

update-initramfs -u -k all

To check if the modules are being loaded, the output of

lsmod | grep vfio

should include the modules listed above.

Finish Configuration

Finally, reboot to bring the changes into effect and check that IOMMU is indeed enabled:

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

should display that IOMMU, Directed I/O, or Interrupt Remapping is enabled; depending on hardware and kernel, the exact message can vary.

For notes on how to troubleshoot or verify whether IOMMU is working as intended, please see the Verifying IOMMU Parameters section in our wiki.

It is also important that the device(s) you want to pass through are in a separate IOMMU group. This can be checked with a call to the Proxmox VE API:

pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""

It is okay if the device is in an IOMMU group together with its functions (e.g. a GPU with the HDMI Audio device) or with its root port or PCI(e) bridge.
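If you want to cross-check the grouping without the Proxmox VE API, a minimal shell sketch that walks the standard sysfs layout works on any Linux host (no Proxmox tooling assumed):

for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"              # group number
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"                 # devices in that group
    done
done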

Note (PCI(e) slots): Some platforms handle their physical PCI(e) slots differently. So, sometimes it can help to put the card in another PCI(e) slot if you do not get the desired IOMMU group separation.

Note (unsafe interrupts): For some platforms, it may be necessary to allow unsafe interrupts. To do this, add the following line to a file ending in .conf in /etc/modprobe.d/:

options vfio_iommu_type1 allow_unsafe_interrupts=1

Please be aware that this option can make your system unstable.

GPU Passthrough Notes

It is not possible to display the frame buffer of the GPU via NoVNC or SPICE on the Proxmox VE web interface.

When passing through a whole GPU or a vGPU and graphic output is wanted, one has to either physically connect a monitor to the card, or configure a remote desktop software (for example, VNC or RDP) inside the guest.

If you want to use the GPU as a hardware accelerator, for example, for programs using OpenCL or CUDA, this is not required.

Host Device Passthrough

The most common variant of PCI(e) passthrough is to pass through a whole PCI(e) card, for example a GPU or a network card.

Host Configuration

Proxmox VE tries to automatically make the PCI(e) device unavailable to the host. However, if this doesn't work, there are two things that can be done:

1. Pass the device IDs to the options of the vfio-pci module by adding

options vfio-pci ids=1234:5678,4321:8765

to a .conf file in /etc/modprobe.d/, where 1234:5678 and 4321:8765 are the vendor and device IDs obtained by:

lspci -nn

2. Blacklist the driver on the host completely, ensuring that it is free to bind for passthrough, with

blacklist DRIVERNAME

in a .conf file in /etc/modprobe.d/.

To find the driver name, execute

lspci -k

for example:

lspci -k | grep -A 3 "VGA"

will output something similar to

01:00.0 VGA compatible controller: NVIDIA Corporation GP108 [GeForce GT 1030] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] GP108 [GeForce GT 1030]
        Kernel driver in use: DRIVERNAME
        Kernel modules: DRIVERNAME

Now we can blacklist the driver by writing it into a .conf file:

echo "blacklist DRIVERNAME" >> /etc/modprobe.d/blacklist.conf

For both methods you need to update the initramfs again and reboot after that.

Should this not work, you might need to set a soft dependency to load the GPU modules before loading vfio-pci. This can be done with the softdep flag; see also the man page for modprobe.d for more information.

For example, if you are using a driver named DRIVERNAME:

echo "softdep DRIVERNAME pre: vfio-pci" >> /etc/modprobe.d/DRIVERNAME.conf

Verify Configuration

To check if your changes were successful, you can use

lspci -nnk

and check your device entry. If it says

Kernel driver in use: vfio-pci

or the "in use" line is missing entirely, the device is ready to be used for passthrough.

Note (mediated devices): For mediated devices this line will differ, as the device will be owned by the host driver directly, not by vfio-pci.

VM Configuration

When passing through a GPU, the best compatibility is reached when using q35 as the machine type, OVMF (UEFI for VMs) instead of SeaBIOS, and PCIe instead of PCI. Note that if you want to use OVMF for GPU passthrough, the GPU needs to have a UEFI-capable ROM; otherwise use SeaBIOS instead. To check if the ROM is UEFI capable, see the PCI Passthrough Examples wiki.

Furthermore, when using OVMF, disabling VGA arbitration may be possible, reducing the amount of legacy code needed to run during boot. To disable VGA arbitration:

echo "options vfio-pci ids=VENDOR:DEVICE disable_vga=1" > /etc/modprobe.d/vfio.conf

replacing VENDOR:DEVICE with the codes obtained from:

lspci -nn

PCI devices can be added in the web interface in the hardware section of the VM. Alternatively, you can use the command line; set the hostpciX option in the VM configuration, for example by executing:

qm set VMID -hostpci0 00:02.0

or by adding a line to the VM configuration file:

hostpci0: 00:02.0

If your device has multiple functions (e.g., 00:02.0 and 00:02.1), you can pass them all through together with the shortened syntax 00:02. This is equivalent to checking the "All Functions" checkbox in the web interface.

There are some options which may be necessary, depending on the device and guest OS:

x-vga=on|off marks the PCI(e) device as the primary GPU of the VM. With this enabled the vga configuration option will be ignored.

pcie=on|off tells Proxmox VE to use a PCIe or PCI port. Some guest/device combinations require PCIe rather than PCI. PCIe is only available for q35 machine types.

rombar=on|off makes the firmware ROM visible for the guest. Default is on. Some PCI(e) devices need this disabled.

romfile=PATH is an optional path to a ROM file for the device to use. This is a relative path under /usr/share/kvm/.

Example

An example of PCIe passthrough with a GPU set to primary:

qm set VMID -hostpci0 02:00,pcie=on,x-vga=on

PCI ID Overrides

You can override the PCI vendor ID, device ID, and subsystem IDs that will be seen by the guest. This is useful if your device is a variant with an ID that your guest's drivers don't recognize, but you want to force those drivers to be loaded anyway (e.g. if you know your device shares the same chipset as a supported variant).

The available options are vendor-id, device-id, sub-vendor-id, and sub-device-id. You can set any or all of these to override your device's default IDs.

For example:

qm set VMID -hostpci0 02:00,device-id=0x10f6,sub-vendor-id=0x0000

SR-IOV

Another variant for passing through PCI(e) devices is to use the hardware virtualization features of your devices, if available.

Note (enabling SR-IOV): To use SR-IOV, platform support is especially important. It may be necessary to enable this feature in the BIOS/UEFI first, or to use a specific PCI(e) port for it to work. If in doubt, consult the manual of the platform or contact its vendor.

SR-IOV (Single-Root Input/Output Virtualization) enables a single device to provide multiple VFs (Virtual Functions) to the system. Each of those VFs can be used in a different VM, with full hardware features and also better performance and lower latency than software-virtualized devices.

Currently, the most common use case for this are NICs (Network Interface Cards) with SR-IOV support, which can provide multiple VFs per physical port. This allows features such as checksum offloading to be used inside a VM, reducing the (host) CPU overhead.

Host Configuration

Generally, there are two methods for enabling virtual functions on a device.

Sometimes there is an option for the driver module, e.g. for some Intel drivers:

max_vfs=4

which could be put in a file with a .conf ending under /etc/modprobe.d/. (Do not forget to update your initramfs after that.)

Please refer to your driver module documentation for the exact parameters and options.

The second, more generic, approach is using sysfs. If the device and driver support this, you can change the number of VFs on the fly. For example, to set up 4 VFs on device 0000:01:00.0, execute:

echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs

To make this change persistent you can use the sysfsutils Debian package. After installation, configure it via /etc/sysfs.conf or a FILE.conf in /etc/sysfs.d/.
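A minimal sketch of such a persistent entry, assuming the same example device 0000:01:00.0 and 4 VFs (the file name sriov.conf is arbitrary):

# /etc/sysfs.d/sriov.conf -- paths are interpreted relative to /sys
bus/pci/devices/0000:01:00.0/sriov_numvfs = 4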

VM Configuration

After creating the VFs, you should see them as separate PCI(e) devices in the output of lspci. Get their ID and pass them through like a normal PCI(e) device.

Mediated Devices (vGPU, GVT-g)

Mediated devices are another method to reuse features and performance of physical hardware for virtualized hardware. These are most commonly found in virtualized GPU setups such as Intel's GVT-g and NVIDIA's vGPUs used in their GRID technology.

With this, a physical card is able to create virtual cards, similar to SR-IOV. The difference is that mediated devices do not appear as PCI(e) devices in the host, and as such are only suited for use in virtual machines.

Host Configuration

In general, your card's driver must support this feature, otherwise it will not work. So please refer to your vendor for compatible drivers and how to configure them.

Intel's drivers for GVT-g are integrated in the kernel and should work with 5th, 6th and 7th generation Intel Core processors, as well as E3 v4, E3 v5 and E3 v6 Xeon processors.

To enable it for Intel graphics, you have to make sure to load the module kvmgt (for example via /etc/modules) and to enable it on the kernel command line by adding the following parameter:

i915.enable_gvt=1

After that, remember to update the initramfs and reboot your host.

VM Configuration

To use a mediated device, simply specify the mdev property on a hostpciX VM configuration option.

You can get the supported devices via the sysfs. For example, to list the supported types for the device 0000:00:02.0 you would simply execute:

ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types

Each entry is a directory which contains the following important files:

available_instances contains the number of instances of this type that are still available; each mdev used in a VM reduces this.

description contains a short description of the capabilities of the type.

create is the endpoint used to create such a device; Proxmox VE does this automatically for you if a hostpciX option with mdev is configured.
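For instance, to inspect these files for a single type (using the i915-GVTg_V5_4 type from the example below; type names vary by card and driver):

cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/description
cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/available_instances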

Example configuration with an Intel GVT-g vGPU (Intel Skylake 6700k):

qm set VMID -hostpci0 00:02.0,mdev=i915-GVTg_V5_4

With this set, Proxmox VE automatically creates such a device on VM start, and cleans it up again when the VM stops.

Use in Clusters

It is also possible to map devices on a cluster level, so that they can be properly used with HA, hardware changes are detected, and non-root users can configure them. See Resource Mapping for details.

vIOMMU (emulated IOMMU)

vIOMMU is the emulation of a hardware IOMMU within a virtual machine, providing improved memory access control and security for virtualized I/O devices. Using the vIOMMU option also allows you to pass through PCI(e) devices to level-2 VMs inside level-1 VMs via nested virtualization. To pass through physical PCI(e) devices from the host to nested VMs, follow the PCI(e) passthrough instructions.

There are currently two vIOMMU implementations available: Intel and VirtIO.

Intel vIOMMU

Intel vIOMMU-specific VM requirements:

Whether you are using an Intel or AMD CPU on your host, it is important to set intel_iommu=on in the VM's kernel parameters.

To use Intel vIOMMU you need to set q35 as the machine type.

If all requirements are met, you can add viommu=intel to the machine parameter in the configuration of the VM that should be able to pass through PCI devices.

qm set VMID -machine q35,viommu=intel

QEMU documentation for VT-d

VirtIO vIOMMU

This vIOMMU implementation is more recent and does not have as many limitations as Intel vIOMMU, but it is currently less used in production and less documented.

With VirtIO vIOMMU there is no need to set any kernel parameters. It is also not necessary to use q35 as the machine type, but it is advisable if you want to use PCIe.

qm set VMID -machine q35,viommu=virtio


PCI Passthrough Examples

Introduction

Note: This is a collection of examples, workarounds, hacks, and specific issues for PCI(e) passthrough. For a step-by-step guide on how and what to do to pass through PCI(e) devices, see the docs or the wiki page generated from the docs.

PCI passthrough allows you to use a physical PCI device (graphics card, network card) inside a VM (KVM virtualization only).

If you "PCI passthrough" a device, the device is not available to the host anymore. Note that VMs with passed-through devices cannot be migrated.

Requirements

This is a list of basic requirements adapted from the Arch wiki.

CPU requirements

Your CPU has to support hardware virtualization and IOMMU. Most new CPUs support this.

AMD: CPUs from the Bulldozer generation and newer; CPUs from the K10 generation need a 890FX or 990FX motherboard.
Intel: see the list of VT-d capable Intel CPUs.

Motherboard requirements

Your motherboard needs to support IOMMU. Lists can be found on the Xen wiki and Wikipedia. Note that, as of writing, both of these lists are incomplete and very out-of-date, and most newer motherboards support IOMMU.

GPU requirements

The ROM of your GPU does not necessarily need to support UEFI, however, most modern GPUs do. If your GPU ROM supports UEFI, it is recommended to use OVMF (UEFI) instead of SeaBIOS. For a list of GPU ROMs, see Techpowerup's collection of GPU ROMs.

Verifying IOMMU parameters

Verify IOMMU is enabled

Reboot, then run:

dmesg | grep -e DMAR -e IOMMU

There should be a line that looks like "DMAR: IOMMU enabled". If there is no output, something is wrong.

Verify IOMMU interrupt remapping is enabled

It is not possible to use PCI passthrough without interrupt remapping. Device assignment will fail with 'Failed to assign device "[device name]": Operation not permitted' or 'Interrupt Remapping hardware not found, passing devices to unprivileged domains is insecure.'.

All systems using an Intel processor and chipset that have support for Intel Virtualization Technology for Directed I/O (VT-d), but do not have support for interrupt remapping will see such an error. Interrupt remapping support is provided in newer processors and chipsets (both AMD and Intel).

To identify if your system has support for interrupt remapping:

dmesg | grep 'remapping'

If you see one of the following lines:

AMD-Vi: Interrupt remapping enabled
DMAR-IR: Enabled IRQ remapping in x2apic mode

('x2apic' can be different on old CPUs, but should still work), then remapping is supported.

If your system doesn't support interrupt remapping, you can allow unsafe interrupts with:

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf Verify IOMMU isolation For working PCI passthrough, you need a dedicated IOMMU group for all PCI devices you want to assign to a VM.

Execute

pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""

replacing {nodename} with the name of your node.

You should get a list similar to:

┌──────────┬────────┬──────────────┬────────────┬────────┬────────────────────────────────────────────┬...
│ class    │ device │ id           │ iommugroup │ vendor │ device_name                                │ ...
╞══════════╪════════╪══════════════╪════════════╪════════╪════════════════════════════════════════════╪...
│ 0x010601 │ 0xa282 │ 0000:00:17.0 │ 5          │ 0x8086 │ 200 Series PCH SATA controller [AHCI mode] │ ...
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────┼...
│ 0x010802 │ 0xa808 │ 0000:02:00.0 │ 12         │ 0x144d │ NVMe SSD Controller SM981/PM981/PM983      │ ...
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────┼...
│ 0x020000 │ 0x15b8 │ 0000:00:1f.6 │ 11         │ 0x8086 │ Ethernet Connection (2) I219-V             │ ...
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────┼...
│ 0x030000 │ 0x5912 │ 0000:00:02.0 │ 2          │ 0x8086 │ HD Graphics 630                            │ ...
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────┼...
│ 0x030000 │ 0x1d01 │ 0000:01:00.0 │ 1          │ 0x10de │ GP108 [GeForce GT 1030]                    │ ...
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────┼...
.
.
.

To have separate IOMMU groups, your processor needs to have support for a feature called ACS (Access Control Services). Make sure you enable the corresponding setting in your BIOS for this.

If you don't have dedicated IOMMU groups, you can try moving the card to another PCI slot.

Should that not work, you can try using Alex Williamson's ACS override patch. However, this should be seen as a last option and is not without risks.

As of writing, the ACS patch is part of the Proxmox VE kernel and can be invoked via Editing the kernel command line. Add

pcie_acs_override=downstream

to the kernel boot command line (GRUB or systemd-boot) options.

More information can be found at Alex Williamson's blog.

GPU passthrough

Note: See http://blog.quindorian.org/2018/03/building-a-2u-amd-ryzen-server-proxmox-gpu-passthrough.html/ if you would like an article with a how-to approach. (NOTE: you usually do not need the ROM-file dumping mentioned at the end!)

AMD RADEON 5xxx, 6xxx, 7xxx, NVIDIA GeForce 7, 8, GTX 4xx, 5xx, 6xx, 7xx, 9xx, 10xx, 15xx, 16xx, and RTX 20xx have been reported working. Anything newer should work as well.

AMD Navi (5xxx(XT)/6xxx(XT)) cards suffer from the reset bug (see https://github.com/gnif/vendor-reset), and while dedicated users have managed to get them to run, they require a lot more effort and will probably not work entirely stably (see the AMD specific issues for workarounds).

You might need to load some specific options in grub.cfg or other tuning values to get your configuration working/stable. Here's a good forum thread on Arch Linux: https://bbs.archlinux.org/viewtopic.php?id=162768

For starters, it's often helpful if the host doesn't try to use the GPU, which avoids issues with the host driver unbinding and re-binding to the device. Sometimes making sure the host BIOS POST messages are displayed on a different GPU is helpful too. This can sometimes be accomplished via BIOS settings, moving the card to a different slot, or enabling/disabling legacy boot support.

Blacklisting drivers

The following is a list of common drivers and how to blacklist them:

AMD GPUs:

echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf

NVIDIA GPUs:

echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia*" >> /etc/modprobe.d/blacklist.conf

Intel GPUs:

echo "blacklist i915" >> /etc/modprobe.d/blacklist.conf

Note: If you are using an Intel iGPU and an Intel discrete GPU, blacklisting the Intel 'i915' driver that the discrete GPU uses means the iGPU won't be able to use that driver either.

After blacklisting, you will need to reboot.

How to know if a graphics card is UEFI (OVMF) compatible

Have a look at the requirements section. Chances are you are using the BIOS listed for your device on the Techpowerup GPU ROM list, which will say whether it is UEFI compatible or not.

Alternatively, you can dump your ROM and use Alex Williamson's rom-parser tool:

Note: You will want to run the following commands logged in as the root user (by running su -) or by wrapping them with sudo sh -c "...", otherwise the bash redirects in the code snippets below won't work.

Get and compile the software "rom-parser":

git clone https://github.com/awilliam/rom-parser
cd rom-parser
make

Then dump the ROM of your VGA card:

cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /tmp/image.rom
echo 0 > rom

and test it with:

./rom-parser /tmp/image.rom

The output should look like this:

Valid ROM signature found @0h, PCIR offset 190h
 PCIR: type 0, vendor: 10de, device: 1280, class: 030000
 PCIR: revision 0, vendor revision: 1
Valid ROM signature found @f400h, PCIR offset 1ch
 PCIR: type 3, vendor: 10de, device: 1280, class: 030000
 PCIR: revision 3, vendor revision: 0
 EFI: Signature Valid
Last image

To be UEFI compatible, you need a "type 3" in the result.

The 'romfile' option

Some motherboards can't pass through GPUs in the first PCI(e) slot by default, because the GPU's vBIOS is shadowed during boot up. You need to capture its vBIOS while it is working "normally" (i.e. installed in a different slot), then you can move the card to slot 1 and start the VM using the dumped vBIOS.

To dump the bios:

cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /usr/share/kvm/vbios.bin
echo 0 > rom

Then you can pass the vBIOS file (must be located in /usr/share/kvm/) with:

hostpci0: 01:00,x-vga=on,romfile=vbios.bin

Tips

Some Windows applications like GeForce Experience, Passmark Performance Test and SiSoftware Sandra can crash the VM. You need to add:

echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf If you see a lot of warning messages in your 'dmesg' system log, add the following instead:

echo "options kvm ignore_msrs=1 report_ignored_msrs=0" > /etc/modprobe.d/kvm.conf Nvidia Tips User have reported that NVIDIA Kepler K80 GPUs need this in vmid.conf:

args: -machine pc,max-ram-below-4g=1G

Troubleshooting

"BAR 3: can't reserve [mem]" error

If you have this error when you try to use the card for a VM:

vfio-pci 0000:04:00.0: BAR 3: can't reserve [mem 0xca000000-0xcbffffff 64bit]

you can try to add the following kernel command line option:

video=efifb:off

Check out the documentation about editing the kernel command line.

WSLg (Windows Subsystem for Linux GUI) If GUI apps don't open in WSLg, see Windows 2022 guest best practices.

Black display in NoVNC/Spice

If you are passing through a GPU and getting a black screen, you might need to change the display settings in the guest OS. On Windows, this can be done by pressing the "Super/Windows" and "P" keys. Alternatively, if you are using the GPU for hardware-accelerated computing and need no graphical output from it, you can deselect the "Primary GPU" option and physically disconnect your GPU.

Spice

Spice may give trouble when passing through a GPU, as it presents a "virtual" PCI graphics card to the guest and some drivers have problems with that, even when both cards show up. If something fails, it's always worth trying to disable SPICE and checking again.

HDMI audio crackling/broken

Some digital audio devices (usually added via GPU functions) may require MSI (Message Signaled Interrupts) to be enabled to function correctly. If you experience any issues, try changing MSI settings in the guest and rebooting the guest.

Linux guests usually enable MSI by themselves. To force use of MSI for GPU audio devices, use the following command and reboot:

echo "options snd-hda-intel enable_msi=1" >> /etc/modprobe.d/snd-hda-intel.conf Use 'lspci -vv' and check for the following line on your device to see if MSI is enabled:

Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+

If it says 'Enable+', MSI is working; 'Enable-' means it is supported but disabled; and if the line is missing, MSI is not supported by the PCIe hardware.

This can potentially also improve performance for other passthrough devices, including GPUs, but that depends on the hardware being used.

BIOS options

Make sure you are using the most recent BIOS version for your motherboard. Often IOMMU groupings or passthrough support in general is improved in later versions.

Some general BIOS options that might need changing to allow passthrough to work:

IOMMU or VT-d: Set to 'Enabled' or equivalent; often 'Auto' is not the same.

'Legacy boot' or CSM: For GPU passthrough it can help to disable this, but keep in mind that PVE has to be installed in UEFI mode, as it will not boot in BIOS mode without this enabled. The reason for disabling this is that it avoids legacy VGA initialization of installed GPUs, making them able to be re-initialized later, as required for passthrough. Most useful when trying to use passthrough in single-GPU systems.

'Resizable BAR'/'Smart Access Memory': Some AMD GPUs (Vega and up) experience 'Code 43' in Windows guests if this is enabled on the host. It's not supported in VMs either way (yet), so the recommended setting is 'off'.

Error 43

Error code 43 is a generic Windows driver error and can occur for a wide number of reasons. Things you can try when troubleshooting include:

  • Finding out if the PCI device has a hardware fault:
    • Try passing the PCI device to a Linux VM
    • Try plugging the PCI device into a different PCI slot or into a different machine
  • Finding software issues:
    • Check the security event logs of your Windows VM
    • Check the dmesg logs of your host machine
    • Dump your vBIOS and check if it is working correctly; try a different vBIOS (see the GPU requirements section)
    • If your GPU supports resizable BAR/SAM and you have this option set in your BIOS, you might need to deactivate it or manually tweak your BAR using a udev rule (see "Code 43 while Resizable Bar is turned on in the bios" in the Arch wiki)
  • Sometimes the issue is very hardware-dependent. You might find someone else with the same hardware who found a solution. Try searching the internet with keywords containing your hardware, together with keywords like "Proxmox", "KVM", or "Qemu".

Nvidia specific issues

When passing through mobile GPUs or vGPUs, it might be necessary to spoof the Vendor ID and Hardware ID as if the passed-through GPU were the desktop variant. Changing the IDs might also be needed to remove manufacturer-specific vendor ID variants that are not recognized otherwise.

The Vendor and Device ID can be added in the web interface under "Hardware" -> "PCI Device (hostpciX)" and then clicking on the "Advanced" checkbox.

Some software will also refuse to run when it detects that it is running in a VM. This should no longer be an issue with Nvidia drivers 465 and newer.

To find the Vendor ID and Device ID of the card installed on your host, run:

lspci -nn

which will give you something similar to

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP108 [GeForce GT 1030] [10de:1d01] (rev a1)

Here, 0x10de is the Vendor ID and 0x1d01 the Device ID.

AMD specific issues

Some AMD cards suffer from the "AMD reset bug", where the GPU does not correctly reset after power cycling. This can be remedied with the vendor-reset patch. See also Nick Sherlock's writeup on the issue.

USB passthrough

If you need to pass through USB devices (keyboard, mouse), please follow the USB Physical Port Mapping wiki article.

vGPU

If you want to split up one GPU into multiple vGPUs, see:

  • MxGPU with AMD S7150
  • NVIDIA vGPU