My macOS Catalina / Proxmox setup

I thought it might be helpful for people following my guide for installing macOS Catalina on Proxmox if I described my setup and how I’m using macOS.

Proxmox hardware specs

  • Motherboard: ASRock EP2C602
  • RAM: 64GB
  • CPU: 2 x Intel E5-2670 for a total of 16 cores / 32 threads
  • Storage
    • Samsung 950 Pro 512GB NVMe SSD for macOS
    • 30TB of spinning disks in various ZFS configurations
    • 1TB SATA SSD for Proxmox’s root device
  • Graphics
    • EVGA GeForce GTX 1060 6GB
    • AMD Radeon R9 280X (HD 7970 / Tahiti XTL) (not currently installed)
    • AMD Sapphire Radeon RX 580 Pulse 8GB (11265-05-20G)
  • IO
    • 2x onboard Intel C600 USB 2 controllers
    • Inateck USB 3 PCIe card (Fresco Logic FL1100 chipset)
    • 2x onboard Intel 82574L gigabit network ports
  • Case
    • Lian Li PC-X2000F full tower (sadly, long discontinued!)
    • Lian Li EX-H35 HDD Hot Swap Module (to add 3 x 3.5″ drive bays into 3 of the 4 x 5.25″ drive mounts), with Lian Li BZ-503B filter door, and Lian Li BP3SATA hot swap backplane. Note that because of the sideways-mounted 5.25″ design on this case, the door will fit flush with the left side of the case, while the unfiltered exhaust fan sits some 5-10mm proud of the right side of the case.
  • CPU cooler
    • 2 x Noctua NH-U14S coolers
  • Power
    • EVGA SuperNOVA 750 G2 750W

My Proxmox machine is my desktop computer, so I pass most of this hardware straight through to the macOS Catalina VM that I use as my daily-driver machine. I pass through both USB 2 controllers, the USB 3 controller, the NVMe SSD, and one of the gigabit network ports, plus the RX 580 graphics card.

Attached to the USB controllers I pass through to macOS are a Bluetooth adapter, keyboard, Logitech trackball dongle, and a DragonFly Black USB DAC for audio (the motherboard has no audio onboard).

Once macOS boots, this leaves no USB ports dedicated to the host, so no keyboard for the host! I normally manage my Proxmox host from a guest, or if all of my guests are down I use SSH from a laptop or smartphone instead (JuiceSSH on Android works nicely for running qm start 100 to boot up my macOS VM if I accidentally shut it down).

On High Sierra, I used to use the GTX 750 Ti, then later the GTX 1060, to drive two displays (one of them 4K@60Hz over DisplayPort), and both worked flawlessly. However, NVIDIA drivers are not available for Catalina, so now I’m back with AMD.

My old AMD R9 280X had some support in Catalina, but I got flashing video corruption on parts of the screen intermittently, and it wasn’t stable on the host either, triggering DMAR warnings that suggest that it tries to read memory it doesn’t own. This all came to a head after upgrading to 10.15.4, because it looks like Catalina no longer supports this GPU (it just goes to a black screen 75% of the way through boot, and the system log shows that the GPU driver crashed and didn’t return).

Now I’m using the Sapphire Radeon RX 580 Pulse 8GB as suggested by Passthrough Post. This card is well supported by macOS; Apple even recommends it on their website. GPUs newer than this, and some siblings of this one, suffer from the AMD reset bug that makes them a pain in the ass to pass through.

Despite good experiences reported by other users, I’m still getting reset-bug-like behaviour on some guest boots, which causes a hang at the 75% mark on the progress bar just as the graphics would be initialised (screen does not go to black). At the same time this is printed to dmesg:

pcieport 0000:00:02.0: AER: Uncorrected (Non-Fatal) error received: 0000:00:02.0
pcieport 0000:00:02.0: AER: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)
pcieport 0000:00:02.0: AER: device [8086:3c04] error status/mask=00004000/00000000
pcieport 0000:00:02.0: AER: [14] CmpltTO (First)
pcieport 0000:00:02.0: AER: Device recovery successful

At this point I stop the VM and start it again and it works on the second try. It looks like the host needs to be power-cycled between guest boots for the RX 580 to be fully reset.

This motherboard also has a third SATA controller, a Marvell SE9230, but enabling it in the ASRock UEFI setup causes it to throw a flood of DMAR errors and kill the host, so avoid using it.

What I use it for

I’m using my Catalina VM for developing software (IntelliJ / XCode), watching videos (YouTube / mpv), playing music, editing photos with Photoshop and Lightroom, editing video with DaVinci Resolve, buying apps on the App Store, syncing data with iCloud, and more. That all works trouble-free. I don’t use any of the Apple apps that are known to be troublesome on Hackintosh (iMessage etc), so I’m not sure if those are working or not.

VM configuration

Here’s my VM’s Proxmox configuration, with discussion to follow:

args: -device isa-applesmc,osk="..." -smbios type=2 -cpu host,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc,+pdpe1gb,check -smp 32,sockets=2,cores=8,threads=2 -device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' -device 'vfio-pci,host=0000:0a:00.0,id=hostpci5,bus=ich9-pcie-port-6,addr=0x0,x-pci-device-id=0x10f6,x-pci-sub-vendor-id=0x0000,x-pci-sub-device-id=0x0000'
balloon: 0
bios: ovmf
boot: d
cores: 16
cpu: Penryn
efidisk0: vms:vm-100-disk-1,size=128K
hostpci0: 03:00,pcie=1,x-vga=on
hostpci1: 00:1a.0,pcie=1
hostpci2: 00:1d.0,pcie=1
hostpci3: 82:00.0,pcie=1
hostpci4: 81:00.0,pcie=1
# hostpci5: 0a:00.0,pcie=1
hugepages: 1024
machine: q35
memory: 40960
name: Catalina-Desktop
net0: vmxnet3=2B:F9:52:54:FE:8A,bridge=vmbr0
numa: 1
onboot: 1
ostype: other
scsihw: virtio-scsi-pci
smbios1: uuid=42c28f01-4b4e-4ef8-97ac-80dea43c0bcb
sockets: 2
tablet: 0
vga: none
hookscript: local:snippets/hackintosh.sh
args
OpenCore now allows “cpu” to be set to “host” to pass through all supported CPU features automatically. The OC config causes the CPU to masquerade as Penryn to macOS to keep macOS happy.
I’m passing through all 32 of my host threads to macOS. Proxmox’s configuration format doesn’t natively support setting a thread count, so I had to add my topology manually here by adding “-smp 32,sockets=2,cores=8,threads=2”.
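
If you’re unsure what topology to mirror, lscpu on the host reports it. A quick check (the output shown is what this machine’s 2-socket, 8-core, 2-thread layout looks like):

# lscpu | grep -E '^(Socket|Core|Thread)'

Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
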
For an explanation of all that “-device” stuff on the end, read the “net0” section below.
hostpci0-5
I’m passing through 6 PCIe devices, which is now natively supported by the latest version of Proxmox 6. From first to last I have my graphics card, two USB 2 controllers, my NVMe storage, a USB 3 controller, and one gigabit network card.
hugepages
I’ve enabled 1GB hugepages on Proxmox, so I’m asking for 1024MB (i.e. 1GB) hugepages here; the value is the page size in megabytes. More details on that further down.
memory
40 gigabytes, baby!
net0
I usually have this emulated network card disabled in Catalina’s network settings, and use my passthrough Intel 82574L instead.
Although Catalina has a driver for the Intel 82574L, the driver doesn’t match the exact PCI ID of the card I have, so the driver doesn’t load and the card remains inactive. Luckily, we can edit the network card’s PCI ID to match what macOS is expecting. Here’s the card I’m using:
# lspci -nn | grep 82574L

0b:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]

My PCI ID here is 8086:10d3. If you check /System/Library/Extensions/IONetworkingFamily.kext/Contents/PlugIns/Intel82574L.kext/Contents/Info.plist, you can see the device IDs that macOS supports with this driver:

<key>IOPCIPrimaryMatch</key>
<string>0x104b8086 0x10f68086</string>
<key>IOPCISecondaryMatch</key>
<string>0x00008086 0x00000000</string>

Let’s make the card pretend to be that 8086:10f6 (primary ID) 0000:0000 (sub ID) card. (Note the byte order in the plist: each match entry is the device ID followed by the vendor ID, so 0x10f68086 means device 10f6, vendor 8086.) To do this we need to edit some hostpci device properties that Proxmox doesn’t support, so we need to move the hostpci device’s definition into the “args” where we can edit it.

First make sure the hostpci entry for the network card in the VM’s config is the one with the highest index, then run qm showcmd <your VM ID> --pretty. Find the two lines that define that card:

...
-device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' \
-device 'vfio-pci,host=0000:0a:00.0,id=hostpci5,bus=ich9-pcie-port-6,addr=0x0' \
...

Copy those two lines, remove the trailing backslashes, combine them into one line, and add them to the end of your “args” line. Now we can edit the second -device to add the fake PCI IDs (the new text is the three x-pci-* properties at the end):

-device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' -device 'vfio-pci,host=0000:0a:00.0,id=hostpci5,bus=ich9-pcie-port-6,addr=0x0,x-pci-device-id=0x10f6,x-pci-sub-vendor-id=0x0000,x-pci-sub-device-id=0x0000'

Now comment out the “hostpci5” line for the network card, since we’re manually defining it through the args instead. Now macOS should see this card’s ID as if it was one of the supported cards, and everything works!
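
To confirm the spoof worked from inside the guest, kextstat should show the driver loaded once macOS is up. I’m grepping for the device number rather than guessing the exact bundle name, which may differ:

# Run in the macOS guest; a match means the Intel82574L kext loaded
kextstat | grep -i 82574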

vga
I need to set this to “none”, since otherwise the crappy emulated video card would become the primary video adapter, and I only want my passthrough card to be active.
hookscript
This is a new feature in Proxmox 5.4 that allows a script to be run at various points in the VM lifecycle.
In recent kernel versions, some devices like my USB controllers are grabbed by the host kernel very early during boot, before vfio can claim them. This means that I need to manually release those devices in order to start the VM. I created /var/lib/vz/snippets/hackintosh.sh with this content (and marked it executable with chmod +x): 
#!/usr/bin/env bash

if [ "$2" == "pre-start" ]
then
    # First release the devices from their current drivers (by their PCI bus IDs)
    echo 0000:00:1d.0 > /sys/bus/pci/devices/0000:00:1d.0/driver/unbind
    echo 0000:00:1a.0 > /sys/bus/pci/devices/0000:00:1a.0/driver/unbind
    echo 0000:81:00.0 > /sys/bus/pci/devices/0000:81:00.0/driver/unbind
    echo 0000:82:00.0 > /sys/bus/pci/devices/0000:82:00.0/driver/unbind
    echo 0000:0a:00.0 > /sys/bus/pci/devices/0000:0a:00.0/driver/unbind

    # Then attach them to vfio-pci by vendor/device ID
    echo 8086 1d2d > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 8086 1d26 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 1b73 1100 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 144d a802 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 8086 10d3 > /sys/bus/pci/drivers/vfio-pci/new_id
fi
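
For reference, you don’t have to edit the VM config file by hand to attach the script; qm can do it for you (100 being my VM ID):

qm set 100 --hookscript local:snippets/hackintosh.sh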

Guest file storage

The macOS VM’s primary storage is the passthrough Samsung 950 Pro 512GB NVMe SSD, which can be installed onto and used in Catalina with no issues. TRIM is supported and enabled automatically.
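
You can verify this from inside the guest; something like this should report “TRIM Support: Yes” against the drive:

# Run in the macOS guest's terminal
system_profiler SPNVMeDataType | grep -i trim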

For secondary storage, my Proxmox host exports a number of directories over the AFP network protocol using netatalk. 

Proxmox 5

Debian Stretch’s version of the netatalk package is seriously out of date (and I’ve had file corruption issues with old versions), so I installed netatalk onto Proxmox from source instead, following these directions:

http://netatalk.sourceforge.net/wiki/index.php/Install_Netatalk_3.1.11_on_Debian_9_Stretch

My configure command ended up being:

./configure --with-init-style=debian-systemd --without-libevent --without-tdb --with-cracklib --enable-krbV-uam --with-pam-confdir=/etc/pam.d --with-dbus-daemon=/usr/bin/dbus-daemon --with-dbus-sysconf-dir=/etc/dbus-1/system.d --with-tracker-pkgconfig-version=1.0
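
After configure, the build and install were the usual steps, sketched here from memory (the service name assumes the debian-systemd init style selected above):

make -j"$(nproc)"
make install
systemctl enable --now netatalk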

Netatalk is configured in /usr/local/etc/afp.conf like so:

; Netatalk 3.x configuration file

[Global]

[Downloads]
path = /tank/downloads
rwlist = nick ; List of usernames with rw permissions on this share

[LinuxISOs]
path = /tank/isos
rwlist = nick

When connecting to the fileshare from macOS, you connect with a URL like “afp://proxmox”, then specify the name and password of the unix user you’re authenticating as (here, “nick”), and that user’s account will be used for all file permissions checks.
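
In Finder that’s Go > Connect to Server (Cmd+K). The same mount can also be kicked off from the guest’s terminal with open, which prompts for credentials as needed (the share and user names here are from my config above):

open 'afp://nick@proxmox/Downloads'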

Proxmox 6

Proxmox 6’s prebuilt version of netatalk is now recent enough, so I backed up my afp.conf, removed my old version that was installed from source (with “make uninstall”; note that this erases afp.conf!), and apt-installed the netatalk package instead. The configuration is now found at /etc/netatalk/afp.conf.
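
For reference, the switchover was roughly this sequence (a sketch; the source directory is wherever you originally built netatalk):

cp /usr/local/etc/afp.conf /root/afp.conf.bak  # "make uninstall" erases this!
cd /root/netatalk-3.1.11 && make uninstall
apt update && apt install netatalk
cp /root/afp.conf.bak /etc/netatalk/afp.conf
systemctl restart netatalk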

Proxmox configuration

Passthrough of PCIe devices requires a bit of configuration on Proxmox’s side, much of which is described in their manual. Here’s what I ended up with:

/etc/default/grub

...
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on rootdelay=10"
...

Note that Proxmox 6 will be booted using systemd-boot rather than GRUB if you are using a ZFS root volume and booting using UEFI. If you’re using systemd-boot you need to create this file instead:

/etc/kernel/cmdline (Proxmox 6 when using systemd-boot)

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on rootdelay=10

/etc/modules

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

/etc/modprobe.d/blacklist.conf

blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist snd_hda_codec_hdmi
blacklist snd_hda_intel
blacklist snd_hda_codec
blacklist snd_hda_core
blacklist radeon
blacklist amdgpu

/etc/modprobe.d/kvm.conf

options kvm ignore_msrs=Y

/etc/modprobe.d/kvm-intel.conf

# Nested VM support (not used by macOS)
options kvm-intel nested=Y

/etc/modprobe.d/vfio-pci.conf

options vfio-pci ids=144d:a802,8086:1d2d,8086:1d26,10de:1c03,10de:10f1,10de:1380,1b73:1100,1002:6798,1002:aaa0 disable_vga=1
# Note that adding disable_vga here will probably prevent guests from booting in SeaBIOS mode

After editing those files you typically need to run update-grub (or pve-efiboot-tool refresh if you are using systemd-boot on Proxmox 6), update-initramfs -k all -u, then reboot Proxmox.
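
After rebooting, it’s worth checking that the IOMMU actually came up and that your devices landed in usable groups (standard sysfs paths, nothing Proxmox-specific):

# Confirm the IOMMU was enabled at boot
dmesg | grep -e DMAR -e IOMMU

# List every device by IOMMU group
find /sys/kernel/iommu_groups/ -type l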

Host configuration

In the UEFI settings of my host system I had to set my onboard video card as my primary video adapter. Otherwise, the VBIOS of my discrete video cards would get molested by the host during boot, rendering them unusable for guests (this is especially a problem if your host boots in BIOS compatibility mode instead of UEFI mode).

One way to avoid needing to change this setting (e.g. if you only have one video card in your system!) is to dump the unmolested VBIOS of the card while it is attached to the host as a secondary card, then store a copy of the VBIOS as a file in Proxmox’s /usr/share/kvm directory, and provide it to the VM by using a “romfile” option like so:

hostpci0: 01:00,x-vga=on,romfile=my-vbios.bin

Or if you don’t have a spare discrete GPU to do this with, you may be able to find a VBIOS dump that somebody else has shared online. However, I have never tried this approach myself.
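
For completeness, the usual sysfs method for dumping the ROM yourself looks like this (a sketch: 0000:01:00.0 is a placeholder for your card’s PCI address, and the card should be idle as a secondary device when you read it):

cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom                           # unlock reads of the ROM
cat rom > /usr/share/kvm/my-vbios.bin  # save a copy for the romfile option
echo 0 > rom                           # lock it again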

Guest configuration

In Catalina, I have system sleep turned off in the power saving options (because I had too many instances where it went to sleep and never woke up again).
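
If you’d rather script this than click through System Preferences, pmset can do the same from the guest’s terminal (a sketch; disablesleep is the belt-and-braces option):

# Run in the macOS guest
sudo pmset -a sleep 0         # never idle-sleep the system
sudo pmset -a disablesleep 1  # refuse sleep requests entirely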

Launching the VM

I found that when I assign obscene amounts of RAM to the VM, it takes a long time for Proxmox to allocate the memory for it, causing a timeout during VM launch:

start failed: command '/usr/bin/kvm -id 100 ...'' failed: got timeout

You can instead avoid Proxmox’s timeout system entirely by running the VM like:

qm showcmd 100 | bash

Another RAM problem arises after the host has done a ton of disk IO. This causes ZFS’s ARC (disk cache) to grow, and it seems the ARC is not automatically released when that memory is needed to start a VM (maybe this is an interaction with the hugepages feature). So the VM will complain that it’s out of memory on launch even though there is plenty of memory marked as “cache” available.

You can clear this read-cache and make the RAM available again by running:

echo 3 > /proc/sys/vm/drop_caches
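
A more permanent approach, which I haven’t adopted myself, is to cap the ARC so it can never grow into the RAM the VM needs. For example, to limit it to 8GB (pick a size to suit your pool):

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592

As with the other modprobe changes, this needs update-initramfs -k all -u and a reboot to take effect.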

1GB hugepages

My host supports 1GB hugepages:

# grep pdpe1gb /proc/cpuinfo

... nx pdpe1gb rdtsc ...

So I added this to the end of my /etc/kernel/cmdline (GRUB users would add it to GRUB_CMDLINE_LINUX_DEFAULT instead) to statically allocate 40GB of 1GB hugepages on boot:

default_hugepagesz=1G hugepagesz=1G hugepages=40

After running update-initramfs -u and rebooting, I can see that those pages have been successfully allocated:

# grep Huge /proc/meminfo

HugePages_Total: 40
HugePages_Free: 40
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
Hugetlb: 41943040 kB

It turns out that those hugepages are evenly split between my two NUMA nodes (20GB per CPU socket), so I have to set “numa: 1” in the VM config.
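
The per-node split is easy to confirm in sysfs (the path assumes 1GB pages):

# cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
20
20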

The final step is to add “hugepages: 1024” to the VM’s config file so it will use those 1024MB hugepages. Otherwise it’ll ignore them completely and continue to allocate from the general RAM pool, which will immediately run me out of RAM and cause the VM launch to fail. Other VMs won’t be able to use that memory, even if macOS isn’t running, unless they’re also configured to use 1GB hugepages.

You can also add “+pdpe1gb” to the list of the VM’s CPU features so that the guest kernel can use 1GB pages internally for its own allocations, but I doubt that macOS takes advantage of this feature.

Notes

2019-12-19

Upgraded to 1GB hugepages. Proxmox 6.1 and macOS 10.15.2.

2019-03-29

My VM was configured with a passthrough video card, and the config file also had “vga: std” in it. Normally if there is a passthrough card enabled, Proxmox disables the emulated VGA adapter, so this was equivalent to “vga: none”. However after upgrading pve-manager to 5.3-12, I found that the emulated vga adapter was re-enabled, so OpenCore ended up displaying on the emulated console, and both of my hardware monitors became “secondary” monitors in macOS. To fix this I needed to explicitly set “vga: none” in the VM configuration.

2019-04-12

Added “hookscript” to take advantage of new Proxmox 5.4 features

2019-07-18

Did an in-place upgrade to Proxmox 6 today!

Proxmox 6 now includes an up-to-date version of the netatalk package, so I use the prebuilt version instead of building it from source. Don’t forget to install my new patched pve-edk2-firmware package.

2020-03-27

I upgraded to macOS 10.15.4, but even after updating Clover, Lilu and WhateverGreen, my R9 280X only gives me a black screen, although the system does boot and I can access it using SSH. Looking at the boot log I can see a GPU restart is attempted, but fails. Will update with a solution if I find one. 

EDIT: For the moment I built an offline installer ISO using a Catalina 10.15.3 installer I had lying around, and used that to reinstall macOS. Now I’m back on 10.15.3 and all my data is exactly where I left it, very painless. After the COVID lockdown I will probably try to replace my R9 280X with a better-supported RX 570 and try again.

2020-04-27

Updated to OpenCore instead of Clover. Still running 10.15.3. pve-edk2-firmware no longer needs to be patched when using OpenCore!

2020-06-05

Replaced my graphics card with an RX 580 to get 10.15.5 compatibility, and I’m back up and running now. Using a new method for fixing my passthrough network adapter’s driver which avoids patching Catalina’s drivers.

182 thoughts on “My macOS Catalina / Proxmox setup”

  1. Nick, for us non-Linux users / less experienced: could you go into a bit more detail about scripts and snippets? There’s no beginner info out there; everyone just assumes you know these things. So…
    How do you actually create or copy over an existing hookscript? I have one to stop the shutdown bug for my gfx, so I’m a bit confused. Where is the snippets folder, typically on my LVM drive? How do I create it? mkdir /var/lib/vz/snippets, or is that wrong?
    There’s no UI, so I was guessing command line? And do I use nano /var/lib/vz/snippets/name of .pl (perl script), paste the script into it, then save and make it executable, in this example ‘chmod +x reset-gpu.pl’? Could you give me a quick point-by-point walkthrough?
    Many thanks

    1. You can put your snippets in any “directory” storage; my VM config puts it in the storage “local”, which is found at /var/lib/vz, so I created a snippets directory in there with “mkdir /var/lib/vz/snippets”. Then yes, you can just “nano /var/lib/vz/snippets/hackintosh.sh” to create the file and “chmod +x /var/lib/vz/snippets/hackintosh.sh” to make it executable.

  2. Hi Nick,

    Thanks for your efforts. I am running macOS Catalina on my Proxmox now, with GPU passthrough enabled. But as you wrote, the monitor becomes a secondary monitor when the display is set to VMware-compatible.

    When I change it to none, I don’t think my VM boots correctly. My monitor is blank. I already changed the timeout to 5 seconds, so it should boot automatically from the Catalina hard disk. But it’s blank.

    Do you have any idea which settings I should check?

    Thanks again

    1. I’m having this same issue when passing through my Intel 530. VM conf below:

      args: -device isa-applesmc,osk="…" -smbios type=2 -device usb-kbd,bus=ehci.0,port=2 -cpu host,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc
      balloon: 0
      bios: ovmf
      boot: c
      bootdisk: sata0
      cores: 4
      cpu: Penryn
      efidisk0: hdd-img:104/vm-104-disk-1.raw,size=128K
      hostpci0: 00:02,pcie=1,x-vga=1
      machine: q35
      memory: 8192
      name: catalina
      net0: vmxnet3=[removed],bridge=vmbr0,firewall=1
      numa: 0
      ostype: other
      sata0: hdd-img:104/vm-104-disk-0.raw,cache=unsafe,discard=on,size=64G,ssd=1
      scsihw: virtio-scsi-pci
      smbios1: uuid=[removed]
      sockets: 1
      vga: none
      vmgenid: [removed]

  3. Hello Nick,

    Could you please elaborate about the specifics of OpenCore config with R9 280 passthrough?
    I’m not running Proxmox, just NixOS with virt-manager.
    I’ve been able to boot macOS just fine with shared VNC, but with the GPU passed through, boot drops if VNC is on, and if VNC is off the VM just won’t boot at all. Nothing in dmesg or the qemu *.log lets me locate the error; I suspect it is tied to OpenCore.

    Thanks!

    1. Hi there,

      No config is required, it just works. What exactly do you see on your screen during the boot sequence? (at what point does it go wrong?)

      EDIT: You might be running into the same problem as me, which is that my R9 280X doesn’t seem to be supported any more as of 10.15.4. I’m staying on 10.15.3 for the moment until I can either figure out how to make it work by OpenCore configuration, or buy a Sapphire Radeon Pulse RX 580 11265-05-20G…

  4. Thanks Nick ! great material.

    I recently followed the Proxmox/macOS/OpenCore post: https://www.nicksherlock.com/2020/04/installing-macos-catalina-on-proxmox-with-opencore/ and have it working.

    One issue that I want to solve is audio. When I look at my Proxmox host I see the following hardware:

    00:1f.3 Audio device: Intel Corporation 200 Series PCH HD Audio
    Subsystem: ASUSTeK Computer Inc. 200 Series PCH HD Audio
    Flags: bus master, fast devsel, latency 32, IRQ 179
    ….
    Kernel driver in use: snd_hda_intel
    Kernel modules: snd_hda_intel

    In the Proxmox web interface I see the following options: intel-hda, ich9-intel-hda and AC97.

    Most of the info that I found points to USB passthrough and GPU/audio, but how can I use my onboard sound device? Can someone point me to a guide?

    Thanks.

    1. I don’t think macOS supports any of the QEMU emulated audio devices, or your onboard audio, though I’d be happy to be corrected.

    1. Sorry, my own machine doesn’t have integrated video so this is something I’ve never attempted. This line looks suspicious:

      > GPU = S08 unknownPlatform (no matching vendor_id+device_id or GPU name)

      Is this GPU supported by macOS? If it is, the chances are that iMacPro 1,1 is not an SMBIOS model that macOS is expecting that GPU to appear on, try an SMBIOS for a Mac model that had a similar iGPU.

  5. Thank you so much for putting this together. This overview and the Installing Catalina with OpenCore guide are a complete game changer. Looking to improve my graphics performance, I bought and installed an AMD Sapphire Radeon RX 580 Pulse 8GB (11265-05-20G). I followed the PCI passthrough documentation and I see the card in my System Report, but it does not appear to be recognized at all. Anyone have any ideas on what I could be missing?

    1. You probably need to set vga: none in your VM config – it sounds like currently the emulated video is primary.

      1. Thank you. I figured that was the problem as well. I have tried setting vga: none, but then I have no access to the VM. I cannot console to it, no Remote Desktop. It is booted as best as I can tell, but it never appears on the network and does not pull an IP address. I will keep banging on it.

        Thanks again for the guide.

        1. You need a monitor or HDMI dummy plug attached for the video card to be happy, the driver will fail to start otherwise and this kills the VM.

          Proxmox’s console relies on the emulated video, so that’s definitely gone as soon as you set “vga: none”, you need to set up an alternate remote desktop solution like macOS’s Screen Sharing feature.

  6. Hey Nick.

    First and foremost, I gotta say thanks for the efforts so far into getting this to work on Proxmox.

    I’ve a slightly different setup than yours, but most of it is the same. The two main differences are the GPU and the USB card. I am using an RX 5700 and a Renesas USB 3 card.

    I noticed that the upstream kholia/OSX-KVM added a mXHCD.kext to the repository and you did not add that to your configuration; I’m assuming that’s because the FL1100 card has native support.

    I, on the other hand, am using a card that I bought from AliExpress that has 4 dedicated USB controllers, which I can now split between my Windows VM and the macOS VM. You can find more information about the card in the following post: https://forums.unraid.net/topic/58843-functional-multi-controller-usb-pci-e-adapter/?do=findComment&comment=868053

    I had to change the IOPCIMatch for the card from 0x01941033 to 0x00151912. I think the first device ID is for the Renesas uPD720200 and mine is for the Renesas uPD720202. I am glad to report that this works, though.

    However, the card doesn’t show up in System Report, but using ioreg with:
    > ioreg -p IOUSB -l -w 0
    does show that the card is running at USB Super Speed.

    I am assuming that the card doesn’t show up in system report due to the ACPI tables?

    I have noticed some weird behavior with how it handles storage, and certain cables work in one port but not the other; it could be the way I’m connecting the devices, but I’ll check again. Unplugging and replugging a device seems to work fine. Based on some initial testing, it seems like the device is running at about a quarter of the rated speed, something like 1.25Gbps, but that’s still better than USB 2. Could be other factors, but I’ll look into this more.

  7. Hi again,

    I wanted to report back some observations from my setup:

    – For me, there is no need to reserve any PCI devices, other than GPUs, via vfio for later passthrough to a VM. Like you, I pass through network and USB PCI devices. Proxmox (I am on the most current version) seems to be able to assign them to a VM anyway when the VM is launched. The Proxmox online documentation also seems to only recommend the vfio approach for GPUs (and not for other PCI devices).

    – For the GPU, in my case, it is sufficient to reserve them via vfio; I don’t blacklist the respective drivers

    – I reserve the GPUs via vfio in the kernel command line (not in modprobe.d). One is probably as good as the other, but my thought was (during initial setup) that if anything doesn’t work, I can edit the kernel command line easily during grub boot. This helped me recover the system more than once when I tried to reserve the host’s primary GPU via vfio (didn’t work and I ended up “sacrificing” a third GPU just for the host to claim so that the other GPUs remain free for passthrough)

    – While my High Sierra VM with the NVIDIA web drivers seems to work relatively flawlessly, it is not exactly future proof. So I will follow your lead and try a Sapphire Radeon RX 580 Pulse (I don’t need super gaming power). The issues you are describing with it resemble the problems I had when I tried a FirePro W7000. I hope they don’t come back with the RX 580. Once I have the RX 580, I will set up one VM with Catalina and another one with your Big Sur guide.

    Thanks for your great tutorials!

  8. Hi again! Thanks for the detailed guide! I’ve already set up a guest with the VMware VGA enabled! However, I got stuck when trying GPU passthrough. I changed the plist settings so that it auto-boots after a 3s timeout, and this works fine when using the VMware VGA. However, my monitors receive no signal when I switch to the passed-through GPU (following the guide, I configured GPU passthrough on the host, set the display to none, and assigned the PCI devices to the guest). Since there is no VGA, I can’t tell whether the guest boots or not. For more information: when I enable the VMware VGA and the passed-through GPU together, the display shows the desktop but no menu bar or dock. My configuration follows; would you please help me figure out what’s wrong with my configuration or setup? Thanks a lot in advance!

    Host Spec:
    cpu: 8700k
    mb: asus prime b360-plus
    mem: 32GB
    hdd: 3T
    PVE version: 6.1-3 (OVMF patched for deprecated clover guides)

    Guest Spec:
    cpu: 4 cores
    mem: 16GB
    ssd: 250GB 860EVO (passed through)
    GPU: RX580 (passed through)
    USB: mouse / keyboard (devices passed through)
    OC Version: V5

    monitor socket used for guest: hdmi + dp

    Guest configs:
    args: -device isa-applesmc,osk="***(c)***" -smbios type=2 -cpu Penryn,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check -device usb-kbd,bus=ehci.0,port=2
    balloon: 0
    bios: ovmf
    boot: cdn
    bootdisk: virtio2
    cores: 4
    cpu: Penryn
    hostpci0: 01:00.0,pcie=1,x-vga=1
    hostpci1: 01:00.1,pcie=1
    hostpci2: 00:14,pcie=1
    machine: q35
    numa: 0
    ostype: other
    scsihw: virtio-scsi-pci
    sockets: 1
    vga: none

    1. When the emulated vga is turned on it becomes the primary display, and the menu bar and dock ends up going there. You can drag the white bar in the Displays settings between screens to change this.

      This is wrong;

      hostpci0: 01:00.0,pcie=1,x-vga=1
      hostpci1: 01:00.1,pcie=1

      It needs to be a single line like so:

      hostpci0: 01:00,pcie=1,x-vga=1

      Sometimes the passthrough GPU will not fire into life until you unplug and replug the monitor, give that a go.

      1. Thanks for the reply. I modified the config as you suggested, but the GPU still doesn’t work. After trying many times (including unplugging and replugging the monitor), I found that the GPU only works when the emulated VGA is turned on. Do you have any idea why this is happening?

        1. You can try editing OpenCore’s config.plist to add agdpmod=pikera to your kernel arguments (next to the debugsyms argument). I think you can try generic Hackintosh advice for this problem because I suspect it happens on bare metal too.

          It doesn’t have this issue on my RX 580

  9. Hey there! I was following your guide and seem to have encountered a blocker.

    I’ve got Proxmox 6.2 installed on an old Dell R710 with dual Xeon X5675 CPUs, which support SSE 4.2 according to Intel’s website. I’m only using 4 cores from a single CPU in my setup, as seen below.

    I’m at the point where I’m booting up using the Main drive for the first time to finish up the install, and the loader gets near the end and then the VM shuts down. Looking at dmesg, I see kvm[24160]: segfault at 4 ip 00005595157fa3b7 sp 00007f31c65f2e98 error 6 in qemu-system-x86_64[5595155d9000+4a2000].

    My 101.conf is below (with osk removed):

    args: -device isa-applesmc,osk="…" -smbios type=2 -device usb-kbd,bus=ehci.0,port=2 -cpu host,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc
    balloon: 0
    bios: ovmf
    boot: cdn
    bootdisk: ide2
    cores: 4
    cpu: Penryn
    efidisk0: local-lvm:vm-101-disk-1,size=4M
    ide0: freenas-proxmox:iso/Catalina-installer.iso,cache=unsafe,size=2095868K
    ide2: freenas-proxmox:iso/OpenCore.iso,cache=unsafe
    machine: q35
    memory: 32768
    name: mac
    net0: vmxnet3=[removed],bridge=vmbr0,firewall=1
    numa: 0
    ostype: other
    sata0: local-lvm:vm-101-disk-0,cache=unsafe,discard=on,size=64G,ssd=1
    scsihw: virtio-scsi-pci
    smbios1: uuid=[removed]
    sockets: 1
    vga: vmware
    vmgenid: [removed]

    1. Try the -cpu penryn alternative mentioned in the post and let me know what warnings it prints when you start the VM from the CLI (like qm start 100) and if it fixes it or not.

  10. Hi Nick, thanks for the detailed guide.
    Can we enable multiple-resolution support in High Sierra? I have tried all the options available in the Proxmox display section, but that did not work. I can see only one resolution in the High Sierra VM’s scaled list, i.e. 1920×1080. Please suggest how I can add more resolutions to this list (Settings > Displays > Scaled).
    The Proxmox server version is 6.2 (Dell R810 server).

    1. Using the emulated video adapter? I don’t think it supports multi resolution. If you’re using OpenCore you need to mount your EFI partition to edit your config.plist, that’s where the resolution is set.
