A big part of my requirement is to establish two or more virtual HTPCs. For this to work, it's necessary to 'pass through' physical graphics cards to the virtual machines.
Here's how to do it.
Prerequisites
I've got this working with a Radeon HD5450 in my system. YMMV.
STEP 1: Configure network bridge
Boot up unRAID from a USB stick and access the Settings -> Network Settings page. Choose 'yes' from the 'setup bridge' drop-down and enter 'xenbr0' (without quotes) in the 'Bridge Name' field.
While in here, you might like to set up a static IP address for your unRAID server as well.
fig 1 - setting up xen bridge
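Once the new settings have taken effect (a reboot may be needed), you can optionally confirm the bridge from the command line. Either of these should show xenbr0, assuming the usual bridge-utils / iproute2 tools are present on your build:
brctl show
ip link show xenbr0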
STEP 2: Identify Passthrough Devices
To pass a device through, it's first necessary to identify it by its PCI identifier. You do this by issuing the 'lspci' command at the Linux command prompt on the unRAID server.
You can access the command prompt either directly on the server itself (logging in as 'root' after it boots) or from another computer via SSH.
Entering 'lspci' will present you with a (long) list of all the PCI devices attached to your server. The format is an initial identification number in the form nn:nn.n followed by a text string describing the device.
Linux recognizes devices using a BDF code, which stands for Bus, Device, Function. You will commonly see it in the format BB:DD.f (so 05:00.1 means bus 05, device 00, function 1), and it is how we point to devices when we want Linux to do something with them.
You need to identify the device or devices you wish to pass through and take note of the relevant BDF code numbers.
You will see that in my case, below, I have identified and highlighted 05:00.0 and 05:00.1 as my GPU and HDMI audio device respectively. (I happen to have a couple of 5450 cards installed but this first one is the one I need).
fig 2 - lspci output
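If the full list is hard to scan, you can filter it. This one-liner (just a convenience, not required) shows only the VGA and audio functions, which is usually enough to spot a graphics card and its companion HDMI audio device:
lspci | grep -iE 'vga|audio'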
STEP 3: Make device assignable
Before we can assign a device to a VM, we must first make it assignable by essentially removing it from access by unRAID (the host or dom0, in Xen terms).
We do this by editing the file on the unRAID flash drive that configures the unRAID boot menu. This file is in the 'syslinux' folder on the unRAID USB drive, and its name is 'syslinux.cfg'.
You can edit this either by mounting the /flash directory while connected to your unRAID server or by popping the drive into any computer. Here's my edited file:
default /syslinux/menu.c32
menu title Lime Technology
prompt 0
timeout 50
label unRAID OS
  kernel /bzimage
  append initrd=/bzroot
label unRAID OS Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Memtest86+
  kernel /memtest
label Xen/unRAID OS
  menu default
  kernel /syslinux/mboot.c32
  append /xen dom0_mem=2097152 --- /bzimage xen-pciback.hide=(05:00.0)(05:00.1) --- /bzroot
label Xen/unRAID OS Safe Mode (no plugins)
  kernel /syslinux/mboot.c32
  append /xen dom0_mem=2097152 --- /bzimage --- /bzroot unraidsafemode
The main change here is in the block headed 'label Xen/unRAID OS'. This is the block that displays the Xen option on the unRAID boot menu and determines what happens when that menu option is selected.
If you compare this to the standard file that comes with unRAID 6.0b3, you'll see that I've added the following:
xen-pciback.hide=(05:00.0)(05:00.1)
This essentially tells the xen-pciback driver in the unRAID (dom0) kernel to claim the specified devices at boot, hiding them from unRAID and making them assignable. The IDs in brackets are the same ones we picked up from the 'lspci' command in step 2. Your IDs will most likely be different, and if you have more devices to pass through (such as SATA controllers, USB controllers etc.), this is where they would be added.
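For example, if you also wanted to hide a USB controller at 00:1d.0 (an ID I've made up purely for illustration), the append line would become:
append /xen dom0_mem=2097152 --- /bzimage xen-pciback.hide=(05:00.0)(05:00.1)(00:1d.0) --- /bzroot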
While editing this file, you might like to make Xen the default boot option. You do this by moving 'menu default' from the 'label unRAID OS' block to the Xen block. I've done this in the example above.
STEP 4: Confirm
Now, reboot unRAID and issue the following at the command prompt:
xl pci-assignable-list
This command, which shows the PCI devices available for assignment to Xen VMs, should now list the PCI IDs we set in syslinux.cfg.
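On my system the output looks something like this (note the leading '0000:' domain prefix; your IDs will of course be your own):
0000:05:00.0
0000:05:00.1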
If it doesn't, retrace your steps. Otherwise, onwards and upwards.
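If you do need to retrace, one quick check is whether the pciback driver actually claimed the devices at boot. Something like this should turn up the related kernel messages (the exact wording varies between kernel versions):
dmesg | grep -i pciback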
STEP 5: Set up the VM
In my case, I've used a Windows 7 image I had previously made when playing with KVM. I created a 'Xen' share in unRAID and copied the image there. If you don't have such an image, you'll need to source or create one. There are lots of threads on the unRAID forums about this at the moment.
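If all you need is an empty image to install Windows into, a sparse file will do. Here's a rough sketch (the 40G size and the filename are placeholders of my own; you'd still need to attach your installer ISO as a cdrom device in the VM config and boot from it, which I don't cover here):
truncate -s 40G /mnt/user/Xen/win7_new.img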
Note: I'm testing using the free version of unRAID so don't have a cache disk. It would be better to set up the VM image files on a cache disk and if you do, your paths will be different to mine.
With the VM image file in place, it's then necessary to create a Xen config file for the VM. Here's mine:
kernel = 'hvmloader'
builder = 'hvm'
vcpus = '2'
memory = '4096'
device_model_version="qemu-xen-traditional"
disk = [
'file:/mnt/user/Xen/Win7_TVServer.img,hda,w'
]
name = 'windows'
vif = [ 'mac=00:16:3E:51:20:4C,bridge=xenbr0,model=e1000' ]
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
boot = 'c'
acpi = '1'
apic = '1'
viridian = '1'
xen_platform_pci='1'
sdl = '0'
vnc = '1'
vnclisten = '0.0.0.0'
vncpasswd = ''
stdvga = '0'
usb = '1'
usbdevice = 'tablet'
pci = ['05:00.0','05:00.1']
I've saved this as 'win7.cfg' beside my VM image in /mnt/user/Xen.
You'll need to configure things like vcpus, memory etc. to taste, but note in particular the entry in the disk settings that points to the VM image and the pci listing at the end. Recognise those numbers? This is where we pass the specific devices we have made assignable into the VM itself.
STEP 6: Start the VM
In my case, I start up the VM by navigating to the Xen folder and issuing the create command:
cd /mnt/user/Xen
xl create win7.cfg
You should see some output relating to the startup process. Here's mine for reference:
root@Tower:~# cd /mnt/user/Xen
root@Tower:/mnt/user/Xen# xl create win7.cfg
Parsing config from win7.cfg
WARNING: ignoring "kernel" directive for HVM guest. Use "firmware_override" instead if you really want a non-default firmware
xc: info: VIRTUAL MEMORY ARRANGEMENT:
Loader: 0000000000100000->000000000019ec84
Modules: 0000000000000000->0000000000000000
TOTAL: 0000000000000000->00000000ff800000
ENTRY ADDRESS: 0000000000100000
xc: info: PHYSICAL MEMORY ALLOCATION:
4KB PAGES: 0x0000000000000200
2MB PAGES: 0x00000000000003fb
1GB PAGES: 0x0000000000000002
Daemon running with PID 2036
root@Tower:/mnt/user/Xen#
Provided that you see no errors, you should then confirm that the VM is running by issuing the 'xl list' command:
root@Tower:~# xl list
Name                              ID   Mem VCPUs      State   Time(s)
Domain-0                           0  2048     8     r-----    2911.7
windows                            1  4091     2     -b----     328.1
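For reference, stopping the VM again is also done with xl; the domain name here is 'windows' because that's what the name line in my config says. 'xl shutdown' asks the guest to power off (for an HVM Windows guest without PV drivers you may need the -F flag, which falls back to an ACPI power-button event), while 'xl destroy' simply pulls the virtual plug:
xl shutdown windows
xl destroy windows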
Right now, you likely can't see anything, but your VM should be booting. I have a previous blog entry that describes how you can use a VNC client on a remote machine to connect to the VM via an SSH tunnel. For this to work, you'll need the 'vnc' and 'vnclisten' lines in the VM config file, as I have at step 5 above. IronicBadger's blog post is also useful (but again, Mac focused). Here's my Windows machine booting as seen via VNC on my Mac:
fig 3 - booting windows as seen in VNC
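For reference, the tunnel approach looks roughly like this, assuming this is the first VNC-enabled VM on the host (so its VNC server is on display :0, i.e. port 5900) and that your server answers to 'tower':
ssh -L 5900:localhost:5900 root@tower
You then point your VNC client at localhost:5900 on the machine you ran ssh from. With vnclisten set to '0.0.0.0' as in my config, you may also be able to skip the tunnel entirely and point the client straight at your server's IP on port 5900.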
Once booted, you can confirm that the passthrough has worked by checking out Windows device manager.
fig 4 - windows device manager on VM
The display adapter with the warning icon is the passed-through adapter with no drivers. You can now download Windows drivers for your device and install them as you usually would. This should give you video from the VM on the card you passed through.
Note: there's an additional issue when rebooting a VM with GPU passthrough, which is described well here. I reproduce the relevant section below:
However, most cards do not support FLR (Function Level Reset), which matters because, without taking the right steps, the card will not operate properly.
Essentially, unlike a physical computer, when the virtual machine is shut down or restarted, it does not change the power supplied to the graphics card. This means the graphics card is never re-initialized, so FLR was invented. However, because most graphics cards do not support FLR you have to mimic this effect by ejecting the card after every reboot, otherwise you will see a severe performance degradation.
In addition, if you do not reset the card, and it is not fresh when you attempt to install or uninstall drivers, the process may fail, leaving your system crippled, and a BSOD is likely to appear.
So, my recommendation when dealing with GPU drivers and passthrough is to always reboot the entire machine, or take extra care to reset the card before making any changes.
After the installation, if you reboot the HVM you can use the Eject USB Media icon in the taskbar at the lower right-hand corner of the screen to eject the card, which will attempt a manual reset. You will lose video for a few seconds as the card re-initializes. This should fix performance on reboot.
To get around this and have a well automated system, you'll need to follow the steps in this blog post to ensure you get a seamless experience when rebooting your VM.
2 comments:
Did you install the gplpv drivers as well?
Thanks!
Yes, but they are not required for GPU passthrough - they boost storage and network speeds.