Add tech_docs/virtualization/vm_build_guide.md

2025-06-30 09:01:18 +00:00
parent 782ef1258b
commit b3d3dd69e0

To get your Linux and Windows VMs on Proxmox VE as close to native performance as possible, we'll focus on leveraging paravirtualized drivers (VirtIO), CPU passthrough features, and optimal disk/network configurations.
Here are templates for both Linux and Windows VMs, emphasizing these performance considerations.
**Key Principles for Near-Native Performance:**
* **VirtIO Drivers:** Always use VirtIO for network, block storage, and SCSI controllers. These are paravirtualized and offer significantly better performance than emulated hardware.
* **CPU Type "Host":** Pass through the host CPU's exact features to the VM. This might limit live migration to hosts with identical CPUs but provides the best performance on the current host.
* **Direct I/O (IO Thread):** When possible, use `IO Thread` for disk devices to dedicate a thread for I/O operations, improving throughput.
* **Multiqueue Networking:** Utilize multiple queues for VirtIO network devices to distribute network processing across multiple vCPUs.
* **QEMU Guest Agent:** Install for better integration with Proxmox VE (graceful shutdowns, accurate resource reporting).
* **UEFI (OVMF) and Q35:** Modern firmware and chipset for better hardware emulation and PCIe support.
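All of these settings can be applied from the Proxmox host shell with `qm` as well as through the web UI. As an illustrative sketch only (hypothetical VM ID `100`, bridge `vmbr0`; best run before the guest OS is installed, since switching firmware or disk bus on an already-installed guest requires extra steps):

```bash
# Paravirtualized SCSI controller (one IO thread per disk), host CPU type,
# modern Q35 chipset with OVMF firmware, and QEMU guest agent support
qm set 100 --scsihw virtio-scsi-single --cpu host --machine q35 --bios ovmf --agent enabled=1

# VirtIO NIC with multiqueue sized to the number of vCPUs (4 assumed here);
# note this re-creates net0, so pass the existing MAC if the VM already has one
qm set 100 --net0 virtio,bridge=vmbr0,queues=4
```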
---
### **Linux VM Template for Near-Native Performance**
This template assumes a modern Linux distribution (e.g., Ubuntu Server 22.04+, Debian 11+, CentOS Stream 9+) that includes VirtIO drivers in its installer.
**VM ID:** (Choose a unique ID, e.g., 101)
**Name:** Linux-VM-Native
**General:**
* **Node:** (Your Proxmox VE Node Name)
* **VM ID:** `101`
* **Name:** `Linux-VM-Native`
* **Resource Pool:** (Optional, assign if desired)
**OS:**
* **Type:** `Linux`
* **Version:** `6.x - 2.6 Kernel` (the generic `l26` type; select the newest Linux entry offered in the dropdown)
**System:**
* **Graphic card:** `std` (default; sufficient for server-only workloads) OR `virtio-gl` (VirGL paravirtualized 3D acceleration for graphical workloads; requires OpenGL libraries on the Proxmox host and VirGL support in the guest)
* **SCSI Controller:** `VirtIO SCSI single` (Enables IO Thread per disk for max performance)
* **BIOS:** `OVMF (UEFI)`
* **Machine:** `q35`
* **Qemu Agent:** `Checked`
**Disks:**
* **Bus/Device:** `SCSI` (attaches the disk to the `VirtIO SCSI single` controller selected above; choosing `VirtIO Block` would bypass that controller and its per-disk IO thread)
* **Storage:** (Your fastest storage, e.g., NVMe ZFS pool, local SSD)
* **Disk size (GB):** (Allocate as needed)
* **Cache:** `No cache` (Default, safe balance of speed and data integrity)
* **Discard:** `Checked` (Requires storage and guest OS support for TRIM/UNMAP)
* **SSD Emulation:** `Checked` (Presents as SSD to guest, enabling TRIM where `Discard` is enabled)
* **IO Thread:** `Checked` (Crucial for performance with `VirtIO SCSI single`)
* **No backup:** (Unchecked, unless you want to exclude this disk from backups)
* **Skip replication:** (Unchecked, unless you want to exclude this disk from replication)
**CPU:**
* **Sockets:** `1` (or 2 if your application benefits from more virtual sockets and license permits)
* **Cores:** (Number of cores based on your host's physical cores. Do not exceed total physical cores of your host. Start with 2-4 and scale up if needed.)
* **Type:** `host` (Passes through all host CPU features for maximum performance)
* **Enable NUMA:** `Checked` (If your host is NUMA-enabled, check with `numactl --hardware`)
* **CPU Units:** `1024` (Default, or adjust for relative priority against other VMs)
* **CPU Limit:** (Leave blank initially, only set if you need to cap overall CPU usage for this VM)
**Memory:**
* **Memory (MB):** (Allocate generously, but leave at least 1-2 GB free for the Proxmox VE host itself, and more if the host runs ZFS, whose ARC cache consumes additional RAM)
* **Minimum memory (MB):** (Same as Memory for fixed allocation, recommended for dedicated performance)
* **Ballooning Device:** `Unchecked` (For fixed memory allocation, to prevent host from reclaiming memory)
* **Shares:** (Default 1000, only adjust if using dynamic memory allocation/ballooning)
**Network:**
* **Bridge:** `vmbr0` (or your desired network bridge)
* **Model:** `VirtIO (paravirtualized)`
* **Firewall:** `Unchecked` (Generally handled by guest OS firewall for performance)
* **Multiqueue:** (Set to the number of `Cores` you allocated above, e.g., 4 if you have 4 cores)
* **MTU:** (Leave blank to inherit from bridge, or set specifically if needed for jumbo frames)
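The same VM can be created in one step from the host shell. The following is an illustrative `qm create` sketch, not a required recipe: the storage names (`local-zfs`, `local`), ISO filename, disk size, and core/memory counts are placeholders to adapt to your environment.

```bash
qm create 101 \
  --name Linux-VM-Native \
  --ostype l26 \
  --machine q35 --bios ovmf \
  --efidisk0 local-zfs:1,efitype=4m \
  --scsihw virtio-scsi-single \
  --scsi0 local-zfs:64,discard=on,iothread=1,ssd=1 \
  --cpu host --sockets 1 --cores 4 --numa 1 \
  --memory 8192 --balloon 0 \
  --net0 virtio,bridge=vmbr0,queues=4 \
  --agent enabled=1,fstrim_cloned_disks=1 \
  --ide2 local:iso/ubuntu-22.04-live-server-amd64.iso,media=cdrom \
  --boot 'order=scsi0;ide2'
```

After creation, `qm config 101` prints the resulting settings so they can be compared against the template above.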
---
### **Windows VM Template for Near-Native Performance**
This template assumes a modern Windows Server or Windows 10/11 installation. You will need the **VirtIO drivers ISO** during the Windows installation process.
**VM ID:** (Choose a unique ID, e.g., 102)
**Name:** Windows-VM-Native
**General:**
* **Node:** (Your Proxmox VE Node Name)
* **VM ID:** `102`
* **Name:** `Windows-VM-Native`
* **Resource Pool:** (Optional, assign if desired)
**OS:**
* **Type:** `Microsoft Windows`
* **Version:** `10/2016/2019` (or `11/2022` if supported by your Proxmox VE version)
**System:**
* **Graphic card:** `std` (default) OR `qxl` (if remote desktop/SPICE is the primary display method); VirGL (`virtio-gl`) is not recommended here, as Windows guests lack 3D-capable VirtIO GPU drivers
* **SCSI Controller:** `VirtIO SCSI single` (Enables IO Thread per disk for max performance)
* **BIOS:** `OVMF (UEFI)`
* **Machine:** `q35`
* **Qemu Agent:** `Checked`
**Disks:**
* **Bus/Device:** `SCSI` (attaches the disk to the `VirtIO SCSI single` controller selected above; choosing `VirtIO Block` would bypass that controller and its per-disk IO thread)
* **Storage:** (Your fastest storage, e.g., NVMe ZFS pool, local SSD)
* **Disk size (GB):** (Allocate as needed)
* **Cache:** `No cache` (Default, safe balance of speed and data integrity)
* **Discard:** `Checked` (Requires storage and guest OS support for TRIM/UNMAP)
* **SSD Emulation:** `Checked` (Presents as SSD to guest, enabling TRIM where `Discard` is enabled)
* **IO Thread:** `Checked` (Crucial for performance with `VirtIO SCSI single`)
* **No backup:** (Unchecked, unless you want to exclude this disk from backups)
* **Skip replication:** (Unchecked, unless you want to exclude this disk from replication)
**CPU:**
* **Sockets:** `1` (or 2 if your application benefits from more virtual sockets and license permits)
* **Cores:** (Number of cores based on your host's physical cores. Do not exceed total physical cores of your host. Start with 2-4 and scale up if needed.)
* **Type:** `host` (Passes through all host CPU features for maximum performance)
* **Enable NUMA:** `Checked` (If your host is NUMA-enabled, check with `numactl --hardware`)
* **CPU Units:** `1024` (Default, or adjust for relative priority against other VMs)
* **CPU Limit:** (Leave blank initially, only set if you need to cap overall CPU usage for this VM)
**Memory:**
* **Memory (MB):** (Allocate generously, but leave at least 1-2 GB free for the Proxmox VE host itself, and more if the host runs ZFS, whose ARC cache consumes additional RAM)
* **Minimum memory (MB):** (Same as Memory for fixed allocation, recommended for dedicated performance)
* **Ballooning Device:** `Unchecked` (For fixed memory allocation, as Windows ballooning can sometimes incur slowdown)
* **Shares:** (Default 1000, only adjust if using dynamic memory allocation/ballooning)
**Network:**
* **Bridge:** `vmbr0` (or your desired network bridge)
* **Model:** `VirtIO (paravirtualized)`
* **Firewall:** `Unchecked` (Generally handled by guest OS firewall for performance)
* **Multiqueue:** (Set to the number of `Cores` you allocated above, e.g., 4 if you have 4 cores)
* **MTU:** (Leave blank to inherit from bridge, or set specifically if needed for jumbo frames)
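As with the Linux template, here is an illustrative `qm create` sketch for the host shell; the ISO names, storage names (`local-zfs`, `local`), and resource sizes are placeholders, and the `--tpmstate0` and `pre-enrolled-keys` options are only needed for Windows 11 / Server 2022 Secure Boot setups.

```bash
qm create 102 \
  --name Windows-VM-Native \
  --ostype win11 \
  --machine q35 --bios ovmf \
  --efidisk0 local-zfs:1,efitype=4m,pre-enrolled-keys=1 \
  --tpmstate0 local-zfs:1,version=v2.0 \
  --scsihw virtio-scsi-single \
  --scsi0 local-zfs:100,discard=on,iothread=1,ssd=1 \
  --cpu host --sockets 1 --cores 4 --numa 1 \
  --memory 8192 --balloon 0 \
  --net0 virtio,bridge=vmbr0,queues=4 \
  --agent enabled=1 \
  --ide2 local:iso/Win11_EN.iso,media=cdrom \
  --ide0 local:iso/virtio-win.iso,media=cdrom \
  --boot 'order=ide2;scsi0'
```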
**Important Installation Step for Windows VMs:**
* When installing Windows, ensure you attach the **VirtIO drivers ISO** as a CD/DVD drive to the VM.
* During the "Where do you want to install Windows?" step, you will need to click "Load driver" and browse the VirtIO CD-ROM for the storage driver (`vioscsi` for the `VirtIO SCSI single` controller used in this template, or `viostor` if a disk uses VirtIO Block).
* After Windows is installed, run the guest tools installer (`virtio-win-guest-tools.exe`) from the same ISO inside the guest to install the remaining VirtIO drivers (network, balloon, etc.) and the QEMU Guest Agent.
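Once the drivers and the guest agent are installed, the agent can be checked end-to-end from the Proxmox host (replace `102` with your VM ID):

```bash
# Succeeds only if the QEMU guest agent inside the VM answers
qm guest cmd 102 ping

# Query guest OS details through the agent as a functional check
qm guest cmd 102 get-osinfo
```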
---
**General Considerations for Both Templates:**
* **Host Hardware:** The performance of your VMs is ultimately limited by your host hardware (CPU, RAM, storage, network).
* **PCIe Passthrough:** If you're passing through an entire GPU or other PCIe devices, ensure all "PCIe Passthrough" requirements mentioned in the previous concise guide are met (BIOS settings, IOMMU, etc.).
* **CPU Overcommitment:** While Proxmox VE allows you to assign more virtual cores than physical cores (overcommitment), excessive overcommitment will degrade performance. Only overcommit if you are certain your VMs will not simultaneously demand full CPU resources.
* **Monitoring:** Regularly monitor your VM's performance (CPU, I/O, network) and the host's resource utilization to identify bottlenecks and fine-tune settings.
* **Guest OS Optimization:** After OS installation, ensure the guest OS itself is optimized (e.g., proper power plans, updated drivers, disable unnecessary services).
* **QEMU Guest Agent:** Crucial for guest integration. Install it in both Linux (`qemu-guest-agent`) and Windows (via VirtIO driver ISO).
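For a Linux guest the agent is a single package (Debian/Ubuntu shown below; other distributions package it under the same name), while Windows guests get it from the virtio-win ISO; in both cases the agent option must also be enabled on the VM from the host:

```bash
# Inside a Debian/Ubuntu guest
sudo apt install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent

# On the Proxmox host, if it was not enabled when the VM was created
qm set 101 --agent enabled=1
```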
By following these templates and adhering to these best practices, you can achieve a very high level of performance for your virtual machines on Proxmox VE.