# Single GPU Passthrough on Linux: Running a VM Like It’s Bare Metal
I used to believe that GPU passthrough required two graphics cards: one for the host, one for the VM. Turns out, that’s not the only way. A few months ago, I stumbled into the world of single GPU passthrough, and it completely changed how I think about virtual machines. Let me walk you through what it is, why I tried it, and what it taught me along the way.

## Why Bother?

I’ve been using QEMU on Linux for a while, mostly for lightweight VMs. But even with decent specs, Windows in a VM always felt sluggish, until I started experimenting with hardware acceleration. One day, after tweaking some settings, I noticed the VM felt unusually smooth. It got me thinking: if the VM can already feel this responsive, could I give it full, exclusive access to my only GPU?

I went down the rabbit hole. I found resources like the NlTESHADE YouTube channel and the RisingPrism GitLab repo, which gave me the conceptual push I needed. My goals were simple:

- Keep Linux as my daily driver.
- Launch a Windows VM when needed, without rebooting.
- Get as close to bare‑metal performance as possible.

Dual booting was out. Constantly restarting kills flow, and I wanted the flexibility of both operating systems at my fingertips.

## What Exactly Is Single GPU Passthrough?

Instead of sharing your GPU between the host and the VM, you detach it from the host and hand it over completely to the VM. The VM gets direct hardware access, bypassing software emulation. The result? Near-native performance for gaming, rendering, or any heavy Windows‑only workload. When you shut down the VM, the GPU is handed back to your Linux host, and everything resumes as if nothing happened.

## My Setup & How It Works

I built this on Arch Linux (though the principles apply to any distro). The stack includes:

- QEMU/KVM: the virtualization backbone
- VFIO: kernel‑level GPU assignment
- libvirt: the management layer
- OVMF: UEFI firmware for modern Windows VMs

## Behind the Scenes

- Boot Linux normally.
- Start the VM; a script unloads the GPU drivers from the host.
- The GPU is passed through to the VM.
- Windows boots and uses the GPU directly.
- You work/play with native performance.
- Shut down the VM; the GPU returns to Linux, and the host drivers reload.

No reboots. No dual boot. Just seamless switching.

## Command Snippets

This is just a glimpse; don’t try to run these without proper configuration.

```shell
# Check if virtualization is supported
lscpu | grep Virtualization

# Load the VFIO modules
sudo modprobe vfio vfio-pci

# Find your GPU's PCI ID
lspci -nn | grep -i nvidia   # or amd

# Bind the GPU to vfio-pci (substitute your own vendor/device IDs)
echo "10de 1f82" > /sys/bus/pci/drivers/vfio-pci/new_id
```

The real setup involves hooks, kernel parameters, and careful timing, but that’s what the full guide is for.

## The Hard Parts

This wasn’t a one‑hour project. I hit plenty of roadblocks:

- The VM would start, but the screen stayed black.
- Sometimes the GPU wouldn’t release back to Linux after shutdown.
- Driver conflicts (especially with NVIDIA).
- Windows failing to detect the GPU properly.

Every issue forced me to understand my hardware and software at a deeper level.

## What I Gained (Besides Performance)

Single GPU passthrough taught me more than I expected:

- How PCI devices are managed by the kernel
- How driver binding and unbinding work
- The inner workings of QEMU hooks and libvirt

It’s one thing to use Linux; it’s another to control exactly how your hardware behaves.

## Is This for You?

*Benchmark video of Windows 11 with single GPU passthrough*

My hardware is modest, nothing high‑end. Even so, the performance was impressive enough to convince me that this setup can replace dual booting on capable machines. If you’re tired of rebooting just to run a Windows‑only app or game, and you want to keep your Linux workflow intact, single GPU passthrough is worth the effort.

## Final Words

This journey changed how I use my system. Instead of choosing between operating systems, I just switch environments, instantly, without sacrificing performance.

This article is a high‑level overview. If you want the full step‑by‑step guide, including the exact configuration, scripts, and troubleshooting steps I used, head over to my website:

🔗 Single GPU Passthrough Guide

Have you tried something similar? Got questions or war stories? Drop them in the comments; I’d love to hear about your experience.
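As a footnote to the command snippets above: the “find your GPU’s PCI ID” and “bind to vfio-pci” steps can be glued together with a tiny parsing helper. This is my own illustrative sketch (the function name and the sample `lspci` line are hypothetical, not from the guide):

```shell
# extract_vfio_id: turn one `lspci -nn` line into the "vendor device" pair
# that /sys/bus/pci/drivers/vfio-pci/new_id expects.
# (Illustrative helper; the name is mine, not part of any tool.)
extract_vfio_id() {
  # An lspci -nn line carries the IDs in the last [xxxx:yyyy] bracket pair;
  # grab it and replace the colon with a space.
  printf '%s\n' "$1" | sed -n 's/.*\[\([0-9a-f]\{4\}\):\([0-9a-f]\{4\}\)\].*/\1 \2/p'
}

# Example with a hypothetical GTX 1650 line:
extract_vfio_id '01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU117 [GeForce GTX 1650] [10de:1f82] (rev a1)'
# prints: 10de 1f82
```

Note that only the bracket pair containing a colon matches, so the PCI class code `[0300]` and the marketing name are skipped automatically.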
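And the start/stop lifecycle described above, where a script unloads the GPU drivers when the VM starts and reloads them on shutdown, looks roughly like this pair of functions. Treat it as a hedged sketch: the PCI addresses, NVIDIA module names, and the `display-manager` unit are assumptions for a hypothetical machine, and the real hook wiring lives under `/etc/libvirt/hooks`.

```shell
# Sketch of the two halves of a libvirt QEMU hook (illustrative only;
# the PCI addresses, module names, and service name are assumptions).

GPU=0000:01:00.0        # GPU video function; find yours with `lspci -D`
GPU_AUDIO=0000:01:00.1  # the GPU's HDMI audio function

detach_gpu() {
  systemctl stop display-manager             # free the GPU from the graphical session
  echo 0 > /sys/class/vtconsole/vtcon0/bind  # unbind the virtual console
  modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia  # unload host drivers
  virsh nodedev-detach "pci_${GPU//[:.]/_}"        # hand both functions to vfio
  virsh nodedev-detach "pci_${GPU_AUDIO//[:.]/_}"
  modprobe vfio-pci
}

reattach_gpu() {
  virsh nodedev-reattach "pci_${GPU//[:.]/_}"      # give the GPU back to the host
  virsh nodedev-reattach "pci_${GPU_AUDIO//[:.]/_}"
  modprobe nvidia nvidia_modeset nvidia_uvm nvidia_drm     # reload host drivers
  echo 1 > /sys/class/vtconsole/vtcon0/bind
  systemctl start display-manager
}

# In a real setup, libvirt's hook mechanism (/etc/libvirt/hooks/qemu) runs
# code like detach_gpu at the "prepare/begin" phase and reattach_gpu at
# "release/end"; nothing here executes until the VM actually starts or stops.
```

The order matters: the display manager and console must let go of the card before the drivers can unload, which is exactly the “careful timing” mentioned earlier.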