My macOS Big Sur / Proxmox setup

I thought it might be helpful for people following my guide for installing macOS Big Sur on Proxmox if I described my setup and how I’m using macOS.

Proxmox hardware specs

  • Motherboard: Asrock EP2C602
  • RAM: 64GB
  • CPU: 2 x Intel E5-2687W v2 for a total of 16 cores / 32 threads
  • Storage
    • ASUS Hyper M.2 X16 PCIe 4.0 X4 Expansion Card
      • Samsung 970 Evo 1TB NVMe SSD for macOS
      • Samsung 950 Pro 512GB NVMe SSD
    • 38TB of spinning disks in various ZFS configurations
    • 1TB SATA SSD for Proxmox’s root device
  • Graphics
    • EVGA GeForce GTX 1060 6GB
    • AMD Radeon R9 280X (HD 7970 / Tahiti XTL) (not currently installed)
    • AMD Sapphire Radeon RX 580 Pulse 8GB (11265-05-20G)
  • IO
    • 2x onboard Intel C600 USB 2 controllers
    • Inateck USB 3 PCIe card (Fresco Logic FL1100 chipset)
    • 2x onboard Intel 82574L gigabit network ports
  • Case
    • Lian Li PC-X2000F full tower (sadly, long discontinued!)
    • Lian Li EX-H35 HDD Hot Swap Module (to add 3 x 3.5″ drive bays into 3 of the 4 x 5.25″ drive mounts), with Lian Li BZ-503B filter door, and Lian Li BP3SATA hot swap backplane. Note that because of the sideways-mounted 5.25″ design on this case, the door will fit flush with the left side of the case, while the unfiltered exhaust fan sits some 5-10mm proud of the right side of the case.
  • CPU cooler
    • 2 x Noctua NH-U14S coolers
  • Power
    • EVGA SuperNOVA 750 G2 750W

My Proxmox machine is my desktop computer, so I pass most of this hardware straight through to the macOS Big Sur VM that I use as my daily-driver machine. I pass through both USB 2 controllers, the USB 3 controller, an NVMe SSD, and one of the gigabit network ports, plus the RX 580 graphics card.

Attached to the USB controllers I pass through to macOS are a Bluetooth adapter, keyboard, Logitech trackball dongle, and sometimes a DragonFly Black USB DAC for audio (my host motherboard has no audio onboard).

Once macOS boots, this leaves no USB ports dedicated to the host, so no keyboard for the host! I normally manage my Proxmox host from a guest, or if all of my guests are down I use SSH from a laptop or smartphone instead (JuiceSSH on Android works nicely for running qm start 100 to boot up my macOS VM if I accidentally shut it down).

On High Sierra, I used the GTX 750 Ti, then later the GTX 1060, to drive two displays (one of them 4k@60Hz over DisplayPort), and both worked flawlessly. However, NVIDIA drivers are not available for Big Sur, so now I’m back with AMD.

My old AMD R9 280X had some support in Catalina, but I got intermittent flashing video corruption on parts of the screen, and it wasn’t stable on the host either, triggering DMAR warnings that suggested it was reading memory it didn’t own. This all came to a head after upgrading to 10.15.4: the VM went to a black screen 75% of the way through boot, and the system log showed that the GPU driver had crashed and never returned.

Now I’m using the Sapphire Radeon Pulse RX 580 8GB as suggested by Passthrough Post. This one is well supported by macOS; Apple even recommends it on their website. Newer GPUs than this, and siblings of this GPU, suffer from the AMD reset bug that makes them a pain in the ass to pass through, though the vendor-reset module can now fix some of them.

Despite good experiences reported by other users, I’m still getting reset-bug-like behaviour on some guest boots: the VM hangs at the 75% mark on the progress bar, just as the graphics would be initialised (the screen does not go black). At the same time this is printed to dmesg:

pcieport 0000:00:02.0: AER: Uncorrected (Non-Fatal) error received: 0000:00:02.0
pcieport 0000:00:02.0: AER: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)
pcieport 0000:00:02.0: AER: device [8086:3c04] error status/mask=00004000/00000000
pcieport 0000:00:02.0: AER: [14] CmpltTO (First)
pcieport 0000:00:02.0: AER: Device recovery successful

This eventually hangs the host. It looks like the host needs to be power-cycled between guest boots for the RX 580 to be fully reset.

This motherboard also has a third SATA controller, a Marvell SE9230, but enabling it in the ASRock UEFI setup causes it to throw a ton of DMAR errors and kill the host, so avoid using it.

My ASUS Hyper M.2 X16 PCIe 4.0 X4 Expansion Card allows motherboards that support PCIe-slot bifurcation to add up to 4 NVMe SSDs to a single x16 slot, which is wonderful for expanding storage. Note that bifurcation support is an absolute requirement with this card; if your system doesn’t support it, you need a card that has a PCIe switch chip onboard instead.

Motherboards which support PCIe bifurcation will allow you to split a slot into multiple channels, like mine does here for PCIE slot 6, creating four x4 channels.

What I use it for

I’m using my Big Sur VM for developing software (IntelliJ / XCode), watching videos (YouTube / mpv), playing music, editing photos with Photoshop and Lightroom, editing video with DaVinci Resolve, buying apps on the App Store, syncing data with iCloud, and more. That all works trouble-free. I don’t use any of the Apple apps that are known to be troublesome on Hackintosh (iMessage etc), so I’m not sure if those are working or not.

VM configuration

Here’s my VM’s Proxmox configuration, with discussion to follow:

agent: 1
args: -device isa-applesmc,osk="..." -smbios type=2 -cpu host,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc,+pdpe1gb,check -smp 32,sockets=2,cores=8,threads=2 -device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' -device 'vfio-pci,host=0000:0a:00.0,id=hostpci5,bus=ich9-pcie-port-6,addr=0x0,x-pci-device-id=0x10f6,x-pci-sub-vendor-id=0x0000,x-pci-sub-device-id=0x0000'
balloon: 0
bios: ovmf
boot: order=hostpci4
cores: 16
cpu: Penryn
efidisk0: vms:vm-100-disk-1,size=128K
hostpci0: 03:00,pcie=1,x-vga=on
hostpci1: 00:1a.0,pcie=1
hostpci2: 00:1d.0,pcie=1
hostpci3: 82:00.0,pcie=1
hostpci4: 81:00.0,pcie=1
# hostpci5: 0a:00.0,pcie=1
hugepages: 1024
machine: q35
memory: 40960
name: Big Sur
net0: vmxnet3=2B:F9:52:54:FE:8A,bridge=vmbr0
numa: 1
onboot: 1
ostype: other
scsihw: virtio-scsi-pci
smbios1: uuid=42c28f01-4b4e-4ef8-97ac-80dea43c0bcb
sockets: 2
tablet: 0
vga: none
hookscript: local:snippets/
Enabling the agent enables Proxmox’s “shutdown” button to ask macOS to perform an orderly shutdown.
OpenCore now allows “cpu” to be set to “host” to pass through all supported CPU features automatically. The OC config causes the CPU to masquerade as Penryn to macOS to keep macOS happy.
I’m passing through all 32 of my host threads to macOS. Proxmox’s configuration format doesn’t natively support setting a thread count, so I had to add my topology manually here by adding “-smp 32,sockets=2,cores=8,threads=2”.
For an explanation of all that “-device” stuff on the end, read the “net0” section below.
I’m passing through 6 PCIe devices, which is now natively supported by the latest version of Proxmox 6. From first to last I have my graphics card, two USB 2 controllers, my NVMe storage, a USB 3 controller, and one gigabit network card.
I’ve enabled 1GB hugepages on Proxmox, so I’m asking for 1024MB (i.e. 1GB) hugepages here. More details on that further down.
40 gigabytes, baby!
I usually have this emulated network card disabled in Big Sur’s network settings, and use my passthrough Intel 82574L instead.
Although Big Sur has a driver for the Intel 82574L, the driver doesn’t match the exact PCI ID of the card I have, so the driver doesn’t load and the card remains inactive. Luckily we can edit the network card’s PCI ID to match what macOS is expecting. Here’s the card I’m using:
# lspci -nn | grep 82574L

0b:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]

My PCI ID here is 8086:10d3. If you check the file /System/Library/Extensions/IONetworkingFamily.kext/Contents/PlugIns/Intel82574L.kext/Contents/Info.plist, you can see the device IDs that macOS supports with this driver:

<string>0x104b8086 0x10f68086</string>
<string>0x00008086 0x00000000</string>
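Those plist strings pack the device ID first and the vendor ID second, which is why PCI ID 8086:10f6 (vendor:device) shows up as 0x10f68086. A quick sketch of the packing:

```shell
# PCI ID 8086:10f6 (vendor:device) -> macOS plist form 0x<device><vendor>
vendor=8086
device=10f6
printf '0x%s%s\n' "$device" "$vendor"   # prints 0x10f68086
```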

Let’s make the card pretend to be that 8086:10f6 (primary ID) 0000:0000 (sub ID) card. To do this we need to edit some hostpci device properties that Proxmox doesn’t support, so we need to move the hostpci device’s definition into the “args” where we can edit it.

First make sure the hostpci entry for the network card in the VM’s config is the one with the highest index, then run qm showcmd <your VM ID> --pretty. Find the two lines that define that card:

-device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' \
-device 'vfio-pci,host=0000:0a:00.0,id=hostpci5,bus=ich9-pcie-port-6,addr=0x0' \

Copy those two lines, remove the trailing backslashes, combine them into one line, and add the result to the end of your “args” line. Now we can edit the second -device to add the fake PCI IDs (the new text is the three x-pci-* properties at the end):

-device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' -device 'vfio-pci,host=0000:0a:00.0,id=hostpci5,bus=ich9-pcie-port-6,addr=0x0,x-pci-device-id=0x10f6,x-pci-sub-vendor-id=0x0000,x-pci-sub-device-id=0x0000'

Now comment out the “hostpci5” line for the network card, since we’re defining it manually through the args instead. macOS should now see this card’s ID as if it were one of the supported cards, and everything works!

I need to set this to “none”, since otherwise the crappy emulated video card would become the primary video adapter, and I only want my passthrough card to be active.
This is a new feature in Proxmox 5.4 that allows a script to be run at various points in the VM lifecycle.
In recent kernel versions, some devices like my USB controllers are grabbed by the host kernel very early during boot, before vfio can claim them. This means that I need to manually release those devices in order to start the VM. I created /var/lib/vz/snippets/ with this content (and marked it executable with chmod +x): 
#!/usr/bin/env bash

if [ "$2" == "pre-start" ]; then
    # First release devices from their current driver (by their PCI bus IDs)
    echo 0000:00:1d.0 > /sys/bus/pci/devices/0000:00:1d.0/driver/unbind
    echo 0000:00:1a.0 > /sys/bus/pci/devices/0000:00:1a.0/driver/unbind
    echo 0000:81:00.0 > /sys/bus/pci/devices/0000:81:00.0/driver/unbind
    echo 0000:82:00.0 > /sys/bus/pci/devices/0000:82:00.0/driver/unbind
    echo 0000:0a:00.0 > /sys/bus/pci/devices/0000:0a:00.0/driver/unbind

    # Then attach them by ID to VFIO
    echo 8086 1d2d > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 8086 1d26 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 1b73 1100 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 144d a802 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 8086 10d3 > /sys/bus/pci/drivers/vfio-pci/new_id
fi
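For context on that "$2" test: Proxmox invokes a hookscript as `script <vmid> <phase>` at each lifecycle step (pre-start, post-start, pre-stop, post-stop). A minimal sketch of that dispatch, with an echo standing in for the unbind commands:

```shell
#!/usr/bin/env bash
# Proxmox passes the VM ID as $1 and the lifecycle phase as $2
hook() {
    if [ "$2" == "pre-start" ]; then
        echo "releasing passthrough devices for VM $1"
    fi
}
hook 100 pre-start   # prints the message
hook 100 post-stop   # not pre-start, so does nothing
```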

Guest file storage

The macOS VM’s primary storage is the passthrough Samsung 970 Evo 1TB NVMe SSD, which can be installed onto and used in Big Sur with no issues. TRIM is supported and enabled automatically.

For secondary storage, my Proxmox host exports a number of directories over the AFP network protocol using netatalk. 

Proxmox 5

Debian Stretch’s version of the netatalk package is seriously out of date (and I’ve had file corruption issues with old versions), so I installed netatalk onto Proxmox from source instead following these directions:

My configure command ended up being:

./configure --with-init-style=debian-systemd --without-libevent --without-tdb --with-cracklib --enable-krbV-uam --with-pam-confdir=/etc/pam.d --with-dbus-daemon=/usr/bin/dbus-daemon --with-dbus-sysconf-dir=/etc/dbus-1/system.d --with-tracker-pkgconfig-version=1.0

Netatalk is configured in /usr/local/etc/afp.conf like so:

; Netatalk 3.x configuration file

[Downloads]
path = /tank/downloads
rwlist = nick ; List of usernames with rw permissions on this share

[ISOs]
path = /tank/isos
rwlist = nick

When connecting to the fileshare from macOS, you connect with a URL like “afp://proxmox”, then specify the name and password of the unix user you’re authenticating as (here, “nick”), and that user’s account will be used for all file permissions checks.

Proxmox 6

Proxmox 6’s prebuilt version of Netatalk is good now, so I backed up my afp.conf, removed my old version that was installed from source (with “make uninstall”, note that this erases afp.conf!), and apt-installed the netatalk package instead. The configuration is now found at /etc/netatalk/afp.conf.

Proxmox configuration

Passthrough of PCIe devices requires a bit of configuration on Proxmox’s side, much of which is described in their manual. Here’s what I ended up with:

/etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on rootdelay=10"

Note that Proxmox 6 will be booted using systemd-boot rather than GRUB if you are using a ZFS root volume and booting using UEFI. If you’re using systemd-boot you need to create this file instead:

/etc/kernel/cmdline (Proxmox 6 when using systemd-boot)

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on rootdelay=10




/etc/modprobe.d/blacklist.conf (any .conf file in this directory works)

blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist snd_hda_codec_hdmi
blacklist snd_hda_intel
blacklist snd_hda_codec
blacklist snd_hda_core
blacklist radeon
blacklist amdgpu


/etc/modprobe.d/kvm.conf

options kvm ignore_msrs=Y


/etc/modprobe.d/kvm-intel.conf

# Nested VM support (not used by macOS)
options kvm-intel nested=Y


/etc/modprobe.d/vfio.conf

options vfio-pci ids=144d:a802,8086:1d2d,8086:1d26,10de:1c03,10de:10f1,10de:1380,1b73:1100,1002:6798,1002:aaa0 disable_vga=1
# Note that adding disable_vga here will probably prevent guests from booting in SeaBIOS mode

After editing those files you typically need to run update-grub (or pve-efiboot-tool refresh if you are using systemd-boot on Proxmox 6), update-initramfs -k all -u, then reboot Proxmox.
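After rebooting, it’s worth confirming that each device actually ended up bound to vfio-pci; `lspci -nnk` shows the driver in use. Here’s a small sketch of that check (the lspci output is faked with a here-doc so the logic is self-contained; on the real host you’d pipe `lspci -nnk -s 0a:00.0` in instead):

```shell
# Report whether lspci -nnk output (on stdin) shows a vfio-pci binding
check_vfio() {
    if grep -q 'Kernel driver in use: vfio-pci'; then
        echo "bound to vfio-pci"
    else
        echo "NOT bound - check your ids= list"
    fi
}

# Faked lspci output for illustration; matches the 82574L from earlier
check_vfio <<'EOF'
0a:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
	Kernel driver in use: vfio-pci
EOF
```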

Host configuration

In the UEFI settings of my host system I had to set my onboard video card as my primary video adapter. Otherwise, the VBIOS of my discrete video cards would get molested by the host during boot, rendering them unusable for guests (this is especially a problem if your host boots in BIOS compatibility mode instead of UEFI mode).

One way to avoid needing to change this setting (e.g. if you only have one video card in your system!) is to dump the unmolested VBIOS of the card while it is attached to the host as a secondary card, then store a copy of the VBIOS as a file in Proxmox’s /usr/share/kvm directory, and provide it to the VM by using a “romfile” option like so:

hostpci0: 01:00,x-vga=on,romfile=my-vbios.bin

Or if you don’t have a spare discrete GPU to achieve this, you may be able to find a dump that somebody else has shared online. However, I have never tried this approach myself.
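For reference, the dump itself can be done through sysfs while the card is a secondary device. A hedged sketch (the 01:00.0 address and output path are examples, and it needs root):

```shell
#!/usr/bin/env bash
# Sketch: dump a GPU's VBIOS via sysfs while it is NOT the boot/primary card
dump_vbios() {
    local dev="/sys/bus/pci/devices/$1"
    if [ -e "$dev/rom" ]; then
        echo 1 > "$dev/rom"      # enable reads of the ROM file
        cat "$dev/rom" > "$2"
        echo 0 > "$dev/rom"      # disable reads again
    else
        echo "no ROM exposed for $1"
    fi
}
# dump_vbios 0000:01:00.0 /usr/share/kvm/my-vbios.bin
```

The resulting file can then be referenced with the “romfile” option shown above.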

Guest configuration

In Catalina, I have system sleep turned off in the power saving options (because I had too many instances where it went to sleep and never woke up again).

Launching the VM

I found that when I assign obscene amounts of RAM to the VM, it takes a long time for Proxmox to allocate the memory for it, causing a timeout during VM launch:

start failed: command '/usr/bin/kvm -id 100 ...' failed: got timeout

You can instead avoid Proxmox’s timeout system entirely by running the VM like:

qm showcmd 100 | bash

Another RAM problem arises when my host has done a ton of disk IO. This causes ZFS’s ARC (disk cache) to grow, and it seems the ARC is not automatically released when that memory is needed to start a VM (maybe this is an interaction with the hugepages feature). So the VM complains that it’s out of memory on launch, even though plenty of memory is marked as “cache” and available.

You can clear this read-cache and make the RAM available again by running:

echo 3 > /proc/sys/vm/drop_caches
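A possible complement (my suggestion, not part of this setup; the value is only an example) is to cap the ARC with a ZFS module option so the cache can never grow into memory the VM needs:

# /etc/modprobe.d/zfs.conf
# Limit the ZFS ARC to 8 GiB (value is in bytes); pick a size that
# leaves room for your VMs and hugepages
options zfs zfs_arc_max=8589934592

As with the other modprobe.d changes above, this needs an update-initramfs -k all -u and a reboot to take effect.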

1GB hugepages

My host supports 1GB hugepages:

# grep pdpe1gb /proc/cpuinfo

... nx pdpe1gb rdtsc ...

So I added this to the end of my /etc/kernel/cmdline to statically allocate 40GB of 1GB hugepages on boot:

default_hugepagesz=1G hugepagesz=1G hugepages=40

After running update-initramfs -u and rebooting, I can see that those pages have been successfully allocated:

# grep Huge /proc/meminfo

HugePages_Total: 40
HugePages_Free: 40
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
Hugetlb: 41943040 kB
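Those meminfo numbers are self-consistent: 40 pages of 1 GiB each, reported in kB. A quick arithmetic check:

```shell
# 40 hugepages x 1 GiB per page, expressed in kB as /proc/meminfo reports it
pages=40
page_kb=$((1024 * 1024))                    # one 1 GiB page = 1048576 kB
echo "Hugetlb: $((pages * page_kb)) kB"     # matches the meminfo line above
```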

It turns out that those hugepages are evenly split between my two NUMA nodes (20GB per CPU socket), so I have to set “numa: 1” in the VM config.

The final step is to add “hugepages: 1024” to the VM’s config file so it will use those 1GB (1024MB) hugepages. Otherwise it’ll ignore them completely and continue to allocate from the general RAM pool, which will immediately run me out of RAM and cause the VM launch to fail. Other VMs won’t be able to use that memory, even if macOS isn’t running, unless they’re also configured to use 1GB hugepages.

You can also add “+pdpe1gb” to the list of the VM’s CPU features so that the guest kernel can use 1GB pages internally for its own allocations, but I doubt that macOS takes advantage of this feature.



Upgraded to 1GB hugepages. Proxmox 6.1 and macOS 10.15.2.


My VM was configured with a passthrough video card, and the config file also had “vga: std” in it. Normally if there is a passthrough card enabled, Proxmox disables the emulated VGA adapter, so this was equivalent to “vga: none”. However after upgrading pve-manager to 5.3-12, I found that the emulated vga adapter was re-enabled, so OpenCore ended up displaying on the emulated console, and both of my hardware monitors became “secondary” monitors in macOS. To fix this I needed to explicitly set “vga: none” in the VM configuration.


Added “hookscript” to take advantage of new Proxmox 5.4 features


Did an in-place upgrade to Proxmox 6 today!

Proxmox 6 now includes an up-to-date version of the netatalk package, so I use the prebuilt version instead of building it from source. Don’t forget to install my new patched pve-edk2-firmware package.


I upgraded to macOS 10.15.4, but even after updating Clover, Lilu and WhateverGreen, my R9 280X only gives me a black screen, although the system does boot and I can access it using SSH. Looking at the boot log I can see a GPU restart is attempted, but fails. Will update with a solution if I find one. 

EDIT: For the moment I built an offline installer ISO using a Catalina 10.15.3 installer I had lying around, and used that to reinstall macOS. Now I’m back on 10.15.3 and all my data is exactly where I left it, very painless. After the COVID lockdown I will probably try to replace my R9 280X with a better-supported RX 570 and try again.


Updated to OpenCore instead of Clover. Still running 10.15.3. pve-edk2-firmware no longer needs to be patched when using OpenCore!


Replaced my graphics card with an RX 580 to get 10.15.5 compatibility, and I’m back up and running now. Using a new method for fixing my passthrough network adapter’s driver which avoids patching Catalina’s drivers.


Added second NVMe SSD and 4x SSD carrier card


Upgraded from 2x E5-2670 to 2x E5-2687W v2. Fusion 360 is now 30% faster! Added photos of my rig


Updated to Proxmox 7, and Hackintosh is running fine without any tweaks

231 thoughts on “My macOS Big Sur / Proxmox setup”

  1. Nick, for us non-Linux users / less experienced: could you answer in a bit more detail about scripts and snippets? There’s no beginners’ info out there, with everyone just assuming you know things. So…
    How do you actually create or copy over an existing hookscript? I have one to stop the shutdown bug for my gfx, so a bit confused. Where is the snippets folder – typically on my lvm drive? How do I create it? mkdir /var/lib/vz/snippets? Or is that wrong?
    There’s no UI so I was guessing command line? And do I use nano /var/lib/vz/snippets/name of .pl (perl script) and paste the script into it, then save & make it executable – in this example ‘chmod +x’? Could you give me a quick point-by-point walkthrough?
    Many thanks

    1. You can put your snippets in any “directory” storage, my VM config puts it in the storage “local” which is found at /var/lib/vz, so I created a snippets directory in there with “mkdir /var/lib/vz/snippets”. Then yes, you can just “nano /var/lib/vz/snippets/” to create the file and “chmod +x /var/lib/vz/snippets/” to make it executable.

  2. Hi Nick,

    Thanks for your efforts. I am running macOS Catalina on my Proxmox now, with GPU passthrough enabled. But as you wrote, the monitor becomes a secondary monitor when the Display uses the vmware-compatible adapter.

    When I change it to none, I don’t think my VM boots correctly. My monitor is blank. I already changed the timeout to 5 seconds, so it should boot automatically from the Catalina hard disc. But it’s blank.

    Do you have any idea which settings I should check?

    Thanks again

    1. I’m having this same issue when passing through my Intel 530. VM conf below:

      args: -device isa-applesmc,osk=”…” -smbios type=2 -device usb-kbd,bus=ehci.0,port=2 -cpu host,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc
      balloon: 0
      bios: ovmf
      boot: c
      bootdisk: sata0
      cores: 4
      cpu: Penryn
      efidisk0: hdd-img:104/vm-104-disk-1.raw,size=128K
      hostpci0: 00:02,pcie=1,x-vga=1
      machine: q35
      memory: 8192
      name: catalina
      net0: vmxnet3=[removed],bridge=vmbr0,firewall=1
      numa: 0
      ostype: other
      sata0: hdd-img:104/vm-104-disk-0.raw,cache=unsafe,discard=on,size=64G,ssd=1
      scsihw: virtio-scsi-pci
      smbios1: uuid=[removed]
      sockets: 1
      vga: none
      vmgenid: [removed]

  3. Hello Nick,

    Could you please elaborate about the specifics of OpenCore config with R9 280 passthrough?
    I’m not running Proxmox, just NixOS with virt-manager.
    I’ve been able to boot macOS just fine with shared VNC, but passing through the GPU drops the boot if VNC is on, or if VNC is off the VM just won’t boot at all. No dmesg or qemu *.log info allows me to locate the error; I suspect it is tied to OpenCore.


    1. Hi there,

      No config is required, it just works. What exactly do you see on your screen during the boot sequence? (at what point does it go wrong?)

      EDIT: You might be running into the same problem as me, which is that my R9 280X doesn’t seem to be supported any more as of 10.15.4. I’m staying on 10.15.3 for the moment until I can either figure out how to make it work by OpenCore configuration, or buy a Sapphire Radeon Pulse RX 580 11265-05-20G…

  4. Thanks Nick ! great material.

    I recently followed the Proxmox/macOS/OpenCore post and have it working.

    One issue that I want to solve is to use audio, when I looked to my proxmox host I see the following hardware:

    00:1f.3 Audio device: Intel Corporation 200 Series PCH HD Audio
    Subsystem: ASUSTeK Computer Inc. 200 Series PCH HD Audio
    Flags: bus master, fast devsel, latency 32, IRQ 179
    Kernel driver in use: snd_hda_intel
    Kernel modules: snd_hda_intel

    In the proxmox web-interface I see the following options : intel-hda, ich9-intel-hda and AC97.

    Most of the info that I found points to USB passthrough and GPU/audio, but how can I use my onboard sound device? Can someone point me to a guide?


    1. I don’t think macOS supports any of the QEMU emulated audio devices, or your onboard audio, though I’d be happy to be corrected.

    1. Sorry, my own machine doesn’t have integrated video so this is something I’ve never attempted. This line looks suspicious:

      > GPU = S08 unknownPlatform (no matching vendor_id+device_id or GPU name)

      Is this GPU supported by macOS? If it is, the chances are that iMacPro 1,1 is not an SMBIOS model that macOS is expecting that GPU to appear on, try an SMBIOS for a Mac model that had a similar iGPU.

  5. Thank you so much for putting this together. This overview and the Installing Catalina with OpenCore guide are a complete game changer. Looking to improve my graphics performance, I bought and installed an AMD Sapphire Radeon RX 580 Pulse 8GB (11265-05-20G). I followed the PCI passthrough documentation and I see the card in my System Report, but it does not appear to be recognized at all. Anyone have any ideas on what I could be missing?

    1. You probably need to set vga: none in your VM config – it sounds like currently the emulated video is primary.

      1. Thank you. I figured that was the problem as well. I have tried setting vga: none, but then I have no access to the VM. I cannot console to it, no Remote Desktop. It is booted as best as I can tell, but it never appears on the network and does not pull an IP address. I will keep banging on it.

        Thanks again for the guide.

        1. You need a monitor or HDMI dummy plug attached for the video card to be happy; otherwise the driver will fail to start, and this kills the VM.

          Proxmox’s console relies on the emulated video, so that’s definitely gone as soon as you set “vga: none”, you need to set up an alternate remote desktop solution like macOS’s Screen Sharing feature.

  6. Hey Nick.

    First and foremost, I gotta say thanks for the efforts so far into getting this to work on Proxmox.

    I’ve a slightly different setup than yours but most of it is the same. The two main differences is in the GPU and the USB card. I am using a RX 5700 and a Renesas USB 3 card.

    I noticed that the upstream kholia/OSX-KVM added a mXHCD.kext to the repository and you did not add that to your configuration, I’m assuming that’s because the FL1100 card has native support.

    I on the other hand am using a card that I bought from AliExpress that has 4 dedicated USB controllers that I can now split between my Windows VM and the MacOS VM. You can find more information at the card in the following point:

    I had to change the IOPCIMatch in the card from 0x01941033 to 0x00151912. I think the first device id is for the Renesas uPD720200 and mine is for the Renesas uPD720202. I am glad to report that this works though.

    However the card doesn’t show up in system report but using ioreg with:
    > ioreg -p IOUSB -l -w 0
    does show that the card is running at USB Super Speed.

    I am assuming that the card doesn’t show up in system report due to the ACPI tables?

    I have noticed some weird behavior with how it handles storage and with certain cables on how they work in one port but not the other, could be the way I’m connecting the devices but I’ll check again. Unplugging and plugging in a device seems to work fine. Based on some initial testing, it seems like the device is running at a quarter-past mark, kind of like 1.25Gbps speeds but that’s still better than USB2. Could be other factors but I’ll look into this more.

    1. Hi , I See your work guys on unraid forums , its amazing ..

      but as i know “correct me if i’m wrong” uPD720200 is not supported in Mac , only Fl1100 is supported ..

      Did you make it work on MacOSX ?

      1. I added the mXHCD.kext to get it to work. But like I mentioned in the Unraid forum, streaming data doesn’t seem to work. E.g. webcam/audio.

        I am now using the quad channel FL1100 and it works without the need of any additional kext. It shows up properly in the system report unlike the Renesas chipset.

        It does however have the reset bug but I guess if you’re passing through AMD GPUs, it’s not that big of a problem since you would have to reboot the host anyway.

        1. Could you share the brand of the FL1100?
          Also, I notice you bought two different kits but ended up with the same kit twice?

  7. Hi again,

    I wanted to report back some observations from my setup:

    – For me, there is no need to reserve any PCI devices, other than GPUs, via vfio for later passthrough to a VM. Like you, I pass through network and USB PCI devices. Proxmox (I am on the most current version) seems to be able to assign them to a VM anyway when the VM is launched. The Proxmox online documentation also seems to only recommend the vfio approach for GPUs (and not for other PCI devices).

    – For the GPU, in my case, it is sufficient to reserve them via vfio; I don’t blacklist the respective drivers

    – I reserve the GPUs via vfio in the kernel command line (not in modprobe.d). One is probably as good as the other, but my thought was (during initial setup) that if anything doesn’t work, I can edit the kernel command line easily during grub boot. This helped me recover the system more than once when I tried to reserve the host’s primary GPU via vfio (didn’t work and I ended up “sacrificing” a third GPU just for the host to claim so that the other GPUs remain free for passthrough)

    – While my High Sierra VM with the NVIDIA web drivers seems to work relatively flawlessly, it is not exactly future proof. So I will follow your lead and try a Sapphire Radeon RX 580 Pulse (I don’t need super gaming power). The issues you are describing with it resemble the problems I had when I tried a FirePro W7000. I hope they don’t come back with the RX 580. Once I have the RX 580, I will set up one VM with Catalina and another one with your Big Sur guide.

    Thanks for your great tutorials!

  8. Hi again! Thanks for the detailed guide! I’ve already set up a guest with the vmware vga enabled! However, I got stuck when trying GPU passthrough. I have changed the plist settings so that it will auto-boot after a 3s timeout, and this works fine when using vmware vga. However, my monitors receive no signal when I switch to the passed-through GPU (following the guide, I configured GPU passthrough settings on the host, set display to none, and assigned the PCI devices to the guest). Since there is no vga, I can’t figure out whether the guest boots or not. For more information, I tried enabling the vmware vga and the passed-through GPU together, and it ends up showing the desktop but no menu bar or dock. Following is my configuration; would you please help me figure out what’s wrong with my configuration or setup? Thanks a lot in advance!

    Host Spec:
    cpu: 8700k
    mb: asus prime b360-plus
    mem: 32GB
    hdd: 3T
    PVE version: 6.1-3 (OVMF patched for deprecated clover guides)

    Guest Spec:
    cpu: 4cpus cores
    mem: 16GB
    ssd: 250GB 860EVO (passthroughed)
    GPU: RX580 (passthroughed)
    USB: mouse / keyboard (device passthroughed)
    OC Version: V5

    monitor socket used for guest: hdmi + dp

    Guest configs:
    args: -device isa-applesmc,osk=”***(c)***” -smbios type=2 -cpu Penryn,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check -device usb-kbd,bus=ehci.0,port=2
    balloon: 0
    bios: ovmf
    boot: cdn
    bootdisk: virtio2
    cores: 4
    cpu: Penryn
    hostpci0: 01:00.0,pcie=1,x-vga=1
    hostpci1: 01:00.1,pcie=1
    hostpci2: 00:14,pcie=1
    machine: q35
    numa: 0
    ostype: other
    scsihw: virtio-scsi-pci
    sockets: 1
    vga: none

    1. When the emulated vga is turned on it becomes the primary display, and the menu bar and dock ends up going there. You can drag the white bar in the Displays settings between screens to change this.

      This is wrong;

      hostpci0: 01:00.0,pcie=1,x-vga=1
      hostpci1: 01:00.1,pcie=1

      It needs to be a single line like so:

      hostpci0: 01:00,pcie=1,x-vga=1

      Sometimes the passthrough GPU will not fire into life until you unplug and replug the monitor, give that a go.

      1. Thanks for the reply. I modified the config as you told me, but the GPU still doesn’t work. After trying many times (including unplugging and replugging the monitor), I found that the GPU only works when the emulated vga is turned on. Do you have any idea why this is happening?

        1. You can try editing OpenCore’s config.plist to add agdpmod=pikera to your kernel arguments (next to the debugsyms argument). I think you can try generic Hackintosh advice for this problem because I suspect it happens on bare metal too.

          It doesn’t have this issue on my RX 580
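          In case it helps, here’s roughly where that argument lives in OpenCore’s config.plist. This is a sketch of the standard OpenCore layout, and keepsyms=1 just stands in for whatever arguments your image already carries:

          ```xml
          <key>NVRAM</key>
          <dict>
              <key>Add</key>
              <dict>
                  <key>7C436110-AB2A-4BBB-A880-FE41995C9F82</key>
                  <dict>
                      <!-- append agdpmod=pikera to the arguments already present -->
                      <key>boot-args</key>
                      <string>keepsyms=1 agdpmod=pikera</string>
                  </dict>
              </dict>
          </dict>
          ```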

  9. Hey there! I was following your guide and seem to have encountered a blocker.

    I’ve got Proxmox 6.2 installed on an old Dell R710 with dual Xeon X5675 CPUs, which support SSE 4.2 according to Intel’s website. I’m only using 4 cores from a single CPU in my setup, as seen below.

    I’m at the point where I’m booting up using the Main drive for the first time to finish up the install, and the loader gets near the end and then the VM shuts down. Looking at dmesg, I see kvm[24160]: segfault at 4 ip 00005595157fa3b7 sp 00007f31c65f2e98 error 6 in qemu-system-x86_64[5595155d9000+4a2000].

    My 101.conf is below (with osk removed):

    args: -device isa-applesmc,osk="…" -smbios type=2 -device usb-kbd,bus=ehci.0,port=2 -cpu host,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc
    balloon: 0
    bios: ovmf
    boot: cdn
    bootdisk: ide2
    cores: 4
    cpu: Penryn
    efidisk0: local-lvm:vm-101-disk-1,size=4M
    ide0: freenas-proxmox:iso/Catalina-installer.iso,cache=unsafe,size=2095868K
    ide2: freenas-proxmox:iso/OpenCore.iso,cache=unsafe
    machine: q35
    memory: 32768
    name: mac
    net0: vmxnet3=[removed],bridge=vmbr0,firewall=1
    numa: 0
    ostype: other
    sata0: local-lvm:vm-101-disk-0,cache=unsafe,discard=on,size=64G,ssd=1
    scsihw: virtio-scsi-pci
    smbios1: uuid=[removed]
    sockets: 1
    vga: vmware
    vmgenid: [removed]

    1. Try the -cpu penryn alternative mentioned in the post and let me know what warnings it prints when you start the VM from the CLI (like qm start 100) and if it fixes it or not.
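      For reference, the full Penryn-based line from the guide looks like this (all on one line, osk elided as before; trim any +flags your Westmere CPU doesn’t support, such as the AVX/XSAVE ones, and the trailing “check” will make QEMU warn about anything it can’t provide):

      ```
      args: -device isa-applesmc,osk="..." -smbios type=2 -device usb-kbd,bus=ehci.0,port=2 -cpu Penryn,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check
      ```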

  10. Hi Nick, thanks for the detailed guide.
    Can we enable multiple-resolution support in High Sierra? I have tried all the options available in the Proxmox display section, but that did not work. I can see only one resolution in the High Sierra VM’s scaled list, i.e. 1920×1080. Please suggest how I can add more resolutions to this list (Settings > Displays > Scaled).
    The proxmox server version is 6.2(Dell Server R810).

    1. Using the emulated video adapter? I don’t think it supports multiple resolutions. If you’re using OpenCore you need to mount your EFI partition to edit your config.plist; that’s where the resolution is set.
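      For OpenCore, the setting is under UEFI → Output → Resolution in config.plist, something like this sketch (the value is just an example; per OpenCore’s documentation “Max” is also accepted):

      ```xml
      <key>UEFI</key>
      <dict>
          <key>Output</key>
          <dict>
              <key>Resolution</key>
              <string>1920x1080@32</string>
          </dict>
      </dict>
      ```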

  11. Nick,

    I also prefer using the macOS for my regular activities. And I now have a Catalina VM thanks to your awesome installation how-to.

    A simple noob question: Since macOS is your daily-driver, how do you boot and view it?

    I’ve been trying noVNC, Microsoft Remote Desktop, Apple Remote Desktop, and NoMachine connections from my Mac. They each seem a little awkward with a few difficulties.

    Your thoughts appreciated.

  12. Hey Nick, my build is refusing to boot the VM correctly if the HDMI is plugged into the passthrough GPU during PVE bootup. The VM macOS 10.15.6 will show the Apple logo without loading bar and then reboot and then show the error screen “Your computer restarted because of a problem” repeatedly. However, once PVE is at login, I can then plug in the HDMI to the passthrough GPU and then start my VM normally. What gives?

    I have my vfio.conf file listing my device IDs (VGA and audio) and the drivers are in the blacklist.conf file. I only passthrough VGA to the VM (01:00.0) as I’ve read there can be issues passing through audio along with it. Could that be the crux?

    Posted on Reddit but no replies yet.

    1. This is not a super unusual problem to have with a passthrough GPU, I’ve seen it on a couple of systems. Which GPU model are you using?

      If you open up the Console app in macOS you might possibly find some Crash Reports on the left that describe the problem, or if not it may be being logged elsewhere.

      1. I had the same issue. RX570, the apple logo would show up but there were 2 blinking gray artifacts at the top of the screen, and after ~15 seconds a quarter of the screen had artifacts. Then my display went off, flashed green, and proxmox restarted, but with green and pink colors instead of the normal orange and white ones.

        Rebooting the pve and waiting to plug in the hdmi cord until it had booted solved all these artifacts. I also had to set the primary graphics of my motherboard to IGFX, to default to the onboard display.

  13. Hey Nick,

    Thanks for this great info! Question – I’m hoping to use a smaller host OS disk for MacOS and add a secondary drive on a different (HDD) storage – but iCloud Photo Library won’t allow cloud sync to a disk that is network attached. Is there any way to ‘add’ an NFS or Samba share into the VM but make it appear as a local external disk?

    Currently I’m stuck adding an extra SSD to Proxmox just to have a huge space for the Photo library, while still running the core OS off SSD for speed reasons. My Proxmox host install SSD is too small to accommodate the photos, and HDD host performance isn’t great either.

    Thanks for any thoughts.

  14. Hello Nick,

    Care to explain a bit about how are you doing the passthrough for the ssd?
    I’ve followed this guide, and while macOS boots from the SSD, it is still detected as QEMU HARDDISK; I don’t know if this affects performance. I’m not sure if I should enable the “SSD emulation” checkbox on the disk configuration page.


    1. My SSD is NVMe/PCIe, so I’m passing it through directly using PCIe passthrough (hostpciX).

      Your setup will work fine, it’s the best option if you have a SATA SSD. You can edit your config file to replace your sata1 (or whatever it’s called) with virtio1 to get increased performance (because virtio block is faster than the emulated SATA).

      Checking SSD emulation lets the guest know that the drive is an SSD, which some operating systems use to enable TRIM support or to turn off their automatic defrag. virtio doesn’t offer the “ssd” option, so make sure that’s not included if you switch from sata to virtio. While you’re there, also tick Discard because it’s required for TRIM to operate.
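      As a sketch, a disk line like this (storage and disk names illustrative):

      ```
      sata1: local-lvm:vm-100-disk-2,cache=unsafe,discard=on,size=256G,ssd=1
      ```

      would become:

      ```
      virtio1: local-lvm:vm-100-disk-2,cache=unsafe,discard=on,size=256G
      ```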

  15. Hello Nick,

    Your post covers many important topics for creating a VM.
    But have you managed to add a webcam and microphone to the VM?

    I am running Catalina with QEMU and Clover, but these devices are not shown.

    Can you help me add these devices to the VM?


    1. A webcam would be connected to the guest by USB, so you’d want to add it as a USB device. Most webcams should just work without an additional macOS driver.

      1. Hi Nick,

        Thanks for the reply. I am using a laptop with a webcam, but the webcam is not shown in the Catalina VM. I tried adding my phone’s webcam (Motorola Moto G4 Play) and that didn’t work either.

        Maybe I am doing something wrong. Do you have another suggestion for me?

  16. Hi there. Thanks for the great tutorials, Nick!

    I’m a bit stuck, and wonder if anyone has any ideas.

    The Mac VM will boot half way (Apple logo and progress bar 50%) before the screen goes black, the fans on the GPU spin up and don’t turn off until I reboot the PVE, and the VM freezes. Powering off and back on via the web GUI doesn’t seem to bring it back. I need to shut down the host and start over again to quiet those GPU fans.

    Proxmox 6.2-15
    GPU AMD Sapphire Radeon RX 580 Pulse 8GB (11265-05-20G)

    I’m having the experience that others have noted where I have to wait to plug in the monitor attached to the GPU until after the PVE has booted, because it looks like otherwise my host claims it at boot despite the blacklist.

    The following are all set exactly per the guide:

    I have edited this file as follows:
    options vfio-pci ids=1002:67df,1002:aaf0
    **Note: I did not set disable_vga=1, as I have other SeaBIOS guests

    Here’s my [redacted] VM configuration file

    args: -device isa-applesmc,osk="[redacted]" -smbios type=2 -device usb-kbd,bus=ehci.0,port=2 -cpu host,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc
    balloon: 0
    bios: ovmf
    boot: order=sata0;net0
    cores: 16
    cpu: Penryn
    efidisk0: local-lvm:vm-102-disk-1,size=4M
    hostpci0: 01:00,pcie=1,x-vga=1,romfile=Sapphire.RX580.8192.180719.rom
    machine: q35
    memory: 24576
    name: proxmoxi7-catalina
    net0: vmxnet3=[redacted],bridge=vmbr0,firewall=1
    numa: 0
    ostype: other
    sata0: local-lvm:vm-102-disk-0,cache=unsafe,discard=on,size=300G,ssd=1
    scsihw: virtio-scsi-pci
    smbios1: uuid=[redacted]
    sockets: 1
    vga: none
    vmgenid: [redacted]

    Appreciate anyone’s help!

    1. Do you have another GPU in the system which is set as primary? If not, even Proxmox using it for its text console may be enough to raise the ire of the AMD Reset Bug. You can avoid that by adding video=efifb:off to your kernel commandline.

      You can check /var/log/kern.log to see what went wrong at the hangup. If it’s like my RX 580 there will be messages there about waiting for it to reset, then complaints about deadlocked kernel threads.

      If you have been doing warm reboots, try full hard host poweroffs to guarantee a complete card state reset. I once wasted an entire afternoon debugging a passthrough problem that was solved by this.

      1. Thanks for the response!

        I have one Radeon GPU and the onboard Intel GPU.

        I set my BIOS to allow dual monitors so that I could pass through the Intel GPU successfully to a Ubuntu server that’s hosting dockers (otherwise my motherboard disables it when the GPU is present). The Radeon is the only other GPU in the system. My BIOS has the PEG set as primary.

        I’ll try adding to the kernel command line – thank you.
        To confirm: does ‘video=efifb:off’ go at the end here –> ‘/etc/default/grub’ , so that the final full text is:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on rootdelay=10 video=efifb:off"

        There is so much action in my log file from all my prior mistakes that I can’t even begin to decipher it, as it overprints my screen!

        1. Yeah that’s the correct final text. Note that Proxmox won’t print anything to the screen at all after boot begins so make sure you have a way to administrate it remotely.

          You may have to disable that multi display option in your host settings to prevent the host initialising the card, but I reckon you’ll get away without it.

          1. Thanks! I’ve got kids using the server so need them to finish up a movie and will take it for a spin!

            My server usually runs headless, so no problem on ssh’ing in to manage. Appreciate the heads up.

            Hopefully I won’t have to disable the multi display as I’ll lose the quick sync hardware transcoding on my Plex.

            Thanks again. I’ll report back my findings.

            1. No joy. Same crash half way through the boot. I’ll try to disable the dual monitor in the bios and see what happens.

              Is this what I’m looking for in the logs?

              Nov 7 18:01:23 proxmoxi7 kernel: [ 162.034363] vfio-pci 0000:01:00.0: enabling device (0002 -> 0003)
              Nov 7 18:01:23 proxmoxi7 kernel: [ 162.034584] vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
              Nov 7 18:01:23 proxmoxi7 kernel: [ 162.034589] vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
              Nov 7 18:01:23 proxmoxi7 kernel: [ 162.034592] vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x1e@0x370

              1. I’m really trying to avoid changing the BIOS. Just an update so I can track what I’ve tried.

                Here are the kernel commands I’ve tried, each with the same result (crash half way into booting, with GPU fans spinning up):

                GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

                GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on rootdelay=10"

                GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off"

                GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on rootdelay=10 video=efifb:off"

                GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream,multifunction video=efifb:eek:ff"

                I definitely seem to need the romfile in my VM configuration, otherwise I don’t even get to the half-booted screen on the Radeon card; the monitor never comes online.

                hostpci0: 01:00,pcie=1,x-vga=1,romfile=Sapphire.RX580.8192.180719.rom

                I downloaded the rom from the link in the post, it’s not one I’ve dumped myself.

                I think the BIOS is the next thing to try, and I’m planning on two options:
                1. change the primary from the GPU to the onboard
                2. if that doesn’t work, disable the dual monitor and see what happens

              2. >video=efifb:eek:ff

                This one is actually “efifb:off” that has been garbled by a forum that turns “:o” into an “:eek:” emoji when it’s copied and pasted as text (thanks to MikeNizzle82 for solving this mystery). It’s funny how far this nonsense text has spread on the internet.

                Double check that your cmdline edits are really being seen by running “cat /proc/cmdline” to see the booted commandline.

                You absolutely want to set the primary to the onboard, that’ll be why Proxmox was always grabbing the GPU despite blacklisting the driver.

              3. Those ones are okay, I was expecting to see errors like:

                kernel: [259028.473404] vfio-pci 0000:03:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
                kernel: [261887.671038] pcieport 0000:00:02.0: AER: Uncorrected (Fatal) error received: 0000:00:02.0
                kernel: [261887.671808] pcieport 0000:00:02.0: AER: PCIe Bus Error: severity=Uncorrected (Fatal), type=Transaction Layer, (Receiver ID)
                kernel: [261887.672445] pcieport 0000:00:02.0: AER: device [8086:3c04] error status/mask=00000020/00000000
                kernel: [261887.673063] pcieport 0000:00:02.0: AER: [ 5] SDES (First)
                kernel: [261981.618083] vfio-pci 0000:03:00.0: not ready 65535ms after FLR; giving up

                I get this halfway through guest boot, at the point where macOS tries to init the graphics card. (00:02.0 is a PCIe root port and 03:00 is the RX 580)

              4. That’s pretty funny about the emoji – no wonder it didn’t work!

                Unfortunately, neither did changing the bios (first to make the onboard the primary, and then going back to Radeon but with dual monitor disabled).

                I crash at the same place every time.

                The edits do appear to be loading.

                Wonder if it’s my ROM file. I may need to look into dumping my own. It’s interesting though, as others don’t seem to need to do that with this card.

              5. In theory if the GPU is never initialised, the vBIOS is unmolested and so you don’t need to supply a ROM file. This is the situation I have on my machine, and so I’ve never manually supplied a ROM file. But this relies on the host BIOS never touching the card, which sounds like it’s impossible with your BIOS due to the iGPU disabling thing you mentioned.

                (My motherboard has an onboard VGA controller, which is fantastic for a text Proxmox console whilst keeping grubby hands off of my discrete GPUs)

              6. ok, not sure what combination mattered, but it’s working! Here are my final settings:

                .conf file (**removed the romfile):
                hostpci0: 01:00,pcie=1,x-vga=on

                grub (**back to the original):
                GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on rootdelay=10"

                vfio-pci (**added the disable_vga=1):
                options vfio-pci ids=1002:67df,1002:aaf0 disable_vga=1

                BIOS set primary to integrated graphics (didn’t work before, but does now!).

                And HW transcode working from the onboard Intel GPU in the Ubuntu server (had to remove and re-add the PCI in the PVE web GUI).

                Thanks so much!!

  17. Hi Nick,

    Thanks for the guides here. I was able to passthrough a GT 710 card on my Dell R710 (dual L5640s). I also have keyboard and mouse working, simply by adding the hardware in Proxmox webgui. Woohoo!

    I am having a bit of a struggle getting my dual NIC PCIe card to show up in the VM. I’ve added the hardware, again — through the gui. I’ve also found a relevant kext for the card (I mounted the EFI partition and dropped it into the kext folder), but nothing seems to show up once booted. What can I do to check 1) if Proxmox is properly passing the PCI device to the VM and 2) if the VM is seeing the device. Maybe I’ve injected the kext incorrectly?

    Proxmox shows the PCIe NIC at 06:00, you’ll see that in the vm config:
    06:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    06:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

    Here’s my config:
    args: -device isa-applesmc,osk="…" -smbios type=2 -device usb-kbd,bus=ehci.0$
    balloon: 0
    bios: ovmf
    boot: cdn
    bootdisk: virtio0
    cores: 8
    cpu: Penryn
    efidisk0: local-zfs:vm-143-disk-3,size=1M
    hostpci0: 07:00,pcie=1,x-vga=1
    hostpci1: 06:00,pcie=1
    machine: q35
    memory: 12288
    name: MAC1
    net0: vmxnet3=…,bridge=vmbr0,firewall=1
    numa: 1
    ostype: other
    scsihw: virtio-scsi-pci
    smbios1: uuid=…
    sockets: 1
    usb0: host=1-1.1
    usb1: host=1-1.2
    vga: vmware
    virtio0: local-zfs:vm-143-disk-2,cache=unsafe,discard=on,size=64G
    vmgenid: …

    …now that I paste this here, I see a bunch of args that you suggest are missing. I might tweak those too while I’m at it, but the machine does run with this config (minus the passthrough NIC)!

    1. It’s unlikely the passthrough config itself is the issue, because most errors here cause the VM to fail to launch. But you can launch the VM on the commandline like “qm start 143” to see any warnings (check dmesg output too after the launch).

      Which kext did you load? In theory you shouldn’t have to do this since OpenCore bypasses SIP, but you might need to disable SIP to allow an unsigned kext to be loaded: From the boot menu hit the Recovery option, then from the recovery environment open up Terminal from the top menu and enter “csrutil disable”, then reboot.
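      A quick sanity check on both sides might look like this (the VM ID and device address are the ones from your config):

      ```
      # On the Proxmox host:
      qm start 143          # watch for passthrough warnings on the console
      dmesg | tail          # look for vfio-pci claiming 06:00.0 / 06:00.1

      # Inside the macOS guest:
      system_profiler SPPCIDataType   # should list the 82576 ports
      kextstat | grep -v com.apple    # shows which third-party kexts loaded
      ```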

  18. Oh dear. I managed to break Proxmox by trying to add hugepages and then removing them when it didn’t work.

    I made a backup of /etc/kernel/cmdline and then added the “default_hugepagesz=1G hugepagesz=1G hugepages=64”
    ran update-initramfs -u and rebooted

    after my VM didn’t start up I tried to undo by reverting to the original “cmdline” and ran update-initramfs -u

    Then I got this:

    And rebooting proxmox shows an error and returns back to BIOS screen. I shouldn’t have done this, obviously.

    How can I fix this mess?

    1. You can boot from the Proxmox installer using the “rescue boot” option to fix things up.

      The error looks like the boot partition is not mounted, you may need to mount that manually.

      1. It actually turned out that the culprit was one of my RAM modules. After removing the bad module pair things went back to normal.

        I tried out hugepages and didn’t notice any improvement when working with RAM-intensive applications like After Effects, which can easily use 50+ GB when working with 4K footage. There is some lag and stutter after the video has loaded into RAM before it plays back smoothly. I was hoping hugepages would improve on that.

        Could you explain what the benefit of using hugepages is?

  19. Hi Nick,

    great work.
    I’m trying to achieve much the same goal, but I’m running into the following problem.
    I’m using a MacPro 5,1, running Big Sur on a Fusion Drive through Proxmox 6.3-2. I got it all running with GPU passthrough (GeForce GTX 770 with a Mac BIOS). But when I set vga: to none the VM refuses to boot; it just pegs one vCore at 100% forever.

    To get it booting again I have to add my stock MacPro GPU (GeForce 9500 GT). Then I get a boot screen on the GT; later it switches to the GTX 770 and runs well.

    Any ideas what the problem is?

    Greetings hackpapst

  20. Thanks for the awesome guide. It empowered me to build a Unified Workstation (Win, macOS, Linux) during the 2020 year-end holiday.

    The Unified Workstation has 3 “Hybrid” VMs. Each VM has a GPU passthrough and a USB controller passthrough, and all 3 hybrid VMs can be controlled by the same keyboard and mouse.

    You can check out what you inspired at my newly set up website.

    Be safe.

  21. I’m trying to apply the method you used in this post to fake the id of your network card in my VM’s .conf file, but for faking the id of the graphics card instead.
    It’s being a pain in the ass.

  22. Hi Nick, great content and info! I’ve been keeping an eye on this for a while but haven’t taken the plunge since my edge case requirements haven’t been met with current technologies and caveats, but it is getting fairly close now!

    I’m looking to do a similar setup, however I’m hoping that recent paravirtualisation developments in vGPU partitioning (see Craft Computing’s coverage) would remove the need for dual discrete GPUs for either Mac or Win VMs (for thermal and server PC size reasons).

    Would it be possible to ‘live-switch’ between, say, 2 or more (Mac + Win) VMs if both are running simultaneously? Perhaps instead of passing the mouse and keyboard directly through to a guest, they could be passed via a software KVM like Barrier or ShareMouse? I’m guessing there are still heavy caveats when trying for an all-in-one solution on Proxmox (which is my dream goal). I’m unsure whether a headless Proxmox server with ‘passthrough hooks’ or a dedicated Linux host with QEMU/KVM guest VMs is the way to go.

    My use case would be to isolate windows for pure gaming only (w/ vulkan, open gl support), Mac for multimedia work & web browsing, music (I know nVidia 10 series cards work great if using High Sierra only – enough for me – combined with a storage service (i.e ZFS) that delivers a management scenario for backing up and cloning VM’s.

    What was your reason for going the headless route if you’re not clustering this build? Was it mainly to manage your macOS VM, cloning and testing etc.? Do you game casually or use a Windows VM with GPU passthrough? I’m guessing you might, since you have 2 GPUs, but you never mention anything Windows related 😉

    Thanks again and keep up the great work.

    1. Yes, Barrier works nicely for this usecase, I’ve passed through my keyboard and mouse to macOS, and then shared those over to Windows using Barrier before. Potentially the input devices could be passed through with evdev passthrough (which allows easy bind/unbind), but I’m not sure if that requires a desktop environment to be running on the host to provide those events.
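      (A sketch of what the evdev approach might look like appended to the VM’s .conf args, with placeholder device paths; as I understand it, QEMU’s input-linux object reads the /dev/input event devices directly, so no desktop environment should be needed, and pressing both Ctrl keys together toggles the grab between host and guest:)

      ```
      args: ... -object input-linux,id=kbd1,evdev=/dev/input/by-id/usb-Example_Keyboard-event-kbd,grab_all=on,repeat=on -object input-linux,id=mouse1,evdev=/dev/input/by-id/usb-Example_Mouse-event-mouse
      ```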

      High Sierra is out of support and no longer receives security updates, and macOS software vendors are pretty quick to drop support for these old versions. I don’t think this is a sustainable option.

      Going virtual for macOS meant that I didn’t need to set up Clover/OpenCore for my host hardware, since the VM only needs to see the emulated hardware (except the CPU and GPU). This guaranteed that I could get macOS working smoothly. It also meant I could easily roll back the VM when operating system updates broke the Hackintosh (although now since I PCIe passthrough an NVMe disk to macOS, I don’t have snapshot capability for that one any more). And I could run ZFS on my host to manage all my files for me and use them from any of my VMs (macOS didn’t have any ZFS support at all at that time).

      I game on Windows with GPU passthrough, I have two GPUs.

  23. Howdy! Found your guide via the ole’ Google and used it to set up a Proxmox VM. I want to pass through an RX 580 now, but I’m dying at “[PCI configuration begin]”. Any chance you could post your OpenCore config.plist? Thanks so much!

    1. I didn’t edit it, it’s the same as my published OpenCore image plus a line for marking my network as built-in.

          1. I have been beating my face against this for several weeks now. I went back and used Arch and got everything working, then I decided to try Proxmox 6.4 instead of 7. It worked the first time. So anyway, if you figure out 7 I’ll be super curious. Thanks so much again!

            1. I’m using the RX 580 on Proxmox 7 and it works fine, so there’s no fundamental problem with 7 and the RX 580. More likely a passthrough configuration problem like the host is initialising the RX 580 during its own boot (the RX 580 has the AMD Reset Bug)

              1. Well, all of the info I needed was on this page, if I had just spent the time to read it instead of using the ole’ Google. The vendor-reset module made the situation worse, but disabling the framebuffer with video=efifb:off got me right as rain. Grabbed a headless HDMI 1080p dongle off Amazon. Thanks so much for your help and responsiveness, Nick! Where is your tip-jar?
