I thought it might be helpful for people following my guide for installing macOS Big Sur on Proxmox if I described my setup and how I’m using macOS.
Proxmox hardware specs
- Motherboard: ASRock EP2C602
- RAM: 64GB
- CPU: 2 x Intel E5-2670 for a total of 16 cores / 32 threads
- Samsung 950 Pro 512GB NVMe SSD for macOS
- 30TB of spinning disks in various ZFS configurations
- 1TB SATA SSD for Proxmox’s root device
- EVGA GeForce GTX 1060 6GB
- AMD Radeon R9 280X (HD 7970 / Tahiti XTL) (not currently installed)
- AMD Sapphire Radeon RX 580 Pulse 8GB (11265-05-20G)
- 2x onboard Intel C600 USB 2 controllers
- Inateck USB 3 PCIe card (Fresco Logic FL1100 chipset)
- 2x onboard Intel 82574L gigabit network ports
- Lian Li PC-X2000F full tower (sadly, long discontinued!)
- Lian Li EX-H35 HDD Hot Swap Module (to add 3 x 3.5″ drive bays into 3 of the 4 x 5.25″ drive mounts), with Lian Li BZ-503B filter door, and Lian Li BP3SATA hot swap backplane. Note that because of the sideways-mounted 5.25″ design on this case, the door will fit flush with the left side of the case, while the unfiltered exhaust fan sits some 5-10mm proud of the right side of the case.
- 2 x Noctua NH-U14S CPU coolers
- EVGA SuperNOVA 750 G2 750W
My Proxmox machine is my desktop computer, so I pass most of this hardware straight through to the macOS Big Sur VM that I use as my daily-driver machine. I pass through both USB 2 controllers, the USB 3 controller, the NVMe SSD, and one of the gigabit network ports, plus the RX 580 graphics card.
Once macOS boots, this leaves no USB ports dedicated to the host, so no keyboard for the host! I normally manage my Proxmox host from a guest, or if all of my guests are down I use SSH from a laptop or smartphone instead (JuiceSSH on Android works nicely for running "qm start 100" to boot up my macOS VM if I accidentally shut it down).
On High Sierra, I used to use the GTX 750 Ti, then later the GTX 1060, to drive two displays (one of them 4k@60Hz over DisplayPort) which both worked flawlessly. However NVIDIA drivers are not available for Big Sur, so now I’m back with AMD.
My old AMD R9 280X had some support in Big Sur, but I got flashing video corruption on parts of the screen intermittently, and it wasn’t stable on the host either, triggering DMAR warnings that suggest it tries to read memory it doesn’t own. This all came to a head after upgrading to 10.15.4, because it looks like macOS no longer supports this GPU (it just goes to a black screen 75% of the way through boot, and the system log shows that the GPU driver crashed and didn’t return).
Now I’m using the Sapphire Radeon Pulse RX 580 8GB as suggested by Passthrough Post. This one is well supported by macOS, Apple even recommends it on their website. Newer GPUs than this, and siblings of this GPU, suffer from the AMD reset bug that makes them a pain in the ass to pass through.
Despite good experiences reported by other users, I’m still getting reset-bug-like behaviour on some guest boots, which causes a hang at the 75% mark on the progress bar just as the graphics would be initialised (screen does not go to black). At the same time this is printed to dmesg:
pcieport 0000:00:02.0: AER: Uncorrected (Non-Fatal) error received: 0000:00:02.0
pcieport 0000:00:02.0: AER: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)
pcieport 0000:00:02.0: AER: device [8086:3c04] error status/mask=00004000/00000000
pcieport 0000:00:02.0: AER:  CmpltTO (First)
pcieport 0000:00:02.0: AER: Device recovery successful
At this point I stop the VM and start it again and it works on the second try. It looks like the host needs to be power-cycled between guest boots for the RX 580 to be fully reset.
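Since that CmpltTO completion-timeout line is the reliable signature of this failure, a quick check of dmesg after a stuck boot can confirm it. This is just a sketch of my own (not part of Proxmox), using the log excerpt from above as sample input:

```shell
# Hypothetical check: look for the AER completion-timeout signature that
# accompanies a failed RX 580 initialisation. On a real host you would pipe
# in `dmesg` instead of this sample string.
log='pcieport 0000:00:02.0: AER: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)
pcieport 0000:00:02.0: AER:  CmpltTO (First)'

if printf '%s\n' "$log" | grep -q 'AER:.*CmpltTO'; then
  echo "GPU init failed - stop and restart the VM"
fi
```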
This motherboard also has a third SATA controller, a Marvell SE9230, but enabling it in the ASRock UEFI setup causes a flood of DMAR errors that kill the host, so avoid using it.
What I use it for
I’m using my Big Sur VM for developing software (IntelliJ / XCode), watching videos (YouTube / mpv), playing music, editing photos with Photoshop and Lightroom, editing video with DaVinci Resolve, buying apps on the App Store, syncing data with iCloud, and more. That all works trouble-free. I don’t use any of the Apple apps that are known to be troublesome on Hackintosh (iMessage etc), so I’m not sure if those are working or not.
Here’s my VM’s Proxmox configuration, with discussion to follow:
agent: 1
args: -device isa-applesmc,osk="..." -smbios type=2 -cpu host,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc,+pdpe1gb,check -smp 32,sockets=2,cores=8,threads=2 -device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' -device 'vfio-pci,host=0000:0a:00.0,id=hostpci5,bus=ich9-pcie-port-6,addr=0x0,x-pci-device-id=0x10f6,x-pci-sub-vendor-id=0x0000,x-pci-sub-device-id=0x0000'
balloon: 0
bios: ovmf
boot: order=hostpci4
cores: 16
cpu: Penryn
efidisk0: vms:vm-100-disk-1,size=128K
hookscript: local:snippets/hackintosh.sh
hostpci0: 03:00,pcie=1,x-vga=on
hostpci1: 00:1a.0,pcie=1
hostpci2: 00:1d.0,pcie=1
hostpci3: 82:00.0,pcie=1
hostpci4: 81:00.0,pcie=1
# hostpci5: 0a:00.0,pcie=1
hugepages: 1024
machine: q35
memory: 40960
name: Big Sur
net0: vmxnet3=2B:F9:52:54:FE:8A,bridge=vmbr0
numa: 1
onboot: 1
ostype: other
scsihw: virtio-scsi-pci
smbios1: uuid=42c28f01-4b4e-4ef8-97ac-80dea43c0bcb
sockets: 2
tablet: 0
vga: none
- Enabling the agent enables Proxmox’s “shutdown” button to ask macOS to perform an orderly shutdown.
- OpenCore now allows “cpu” to be set to “host” to pass through all supported CPU features automatically. The OC config causes the CPU to masquerade as Penryn to macOS to keep macOS happy.
- I’m passing through all 32 of my host threads to macOS. Proxmox’s configuration format doesn’t natively support setting a thread count, so I had to add my topology manually here by adding “-smp 32,sockets=2,cores=8,threads=2”.
- For an explanation of all that “-device” stuff on the end, read the “net0” section below.
- I’m passing through 6 PCIe devices, which is now natively supported by the latest version of Proxmox 6. From first to last I have my graphics card, two USB 2 controllers, my NVMe storage, a USB 3 controller, and one gigabit network card.
- I’ve enabled 1GB hugepages on Proxmox, so I’m asking for 1024MB hugepages here. More details on that further down.
- 40 gigabytes, baby!
- I usually have this emulated network card disabled in Big Sur’s network settings, and use my passthrough Intel 82574L instead.
- Although Big Sur has a driver for the Intel 82574L, it doesn’t match the exact PCI ID of my card, so the driver never loads and the card stays inactive. Luckily we can edit the network card’s PCI ID to match what macOS expects. Here’s the card I’m using:
# lspci -nn | grep 82574L
0b:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
My PCI ID here is 8086:10d3. If you check the file /System/Library/Extensions/IONetworkingFamily.kext/Contents/PlugIns/Intel82574L.kext/Contents/Info.plist, you can see the device IDs that macOS supports with this driver:
<key>IOPCIPrimaryMatch</key>
<string>0x104b8086 0x10f68086</string>
<key>IOPCISecondaryMatch</key>
<string>0x00008086 0x00000000</string>
Let’s make the card pretend to be that 8086:10f6 (primary ID) 0000:0000 (sub ID) card. To do this we need to edit some hostpci device properties that Proxmox doesn’t support, so we need to move the hostpci device’s definition into the “args” where we can edit it.
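As an aside, macOS packs these IDs into a single 32-bit value with the device ID in the high 16 bits and the vendor ID in the low 16 bits, so 0x10f68086 means device 10f6, vendor 8086. A small sketch of the conversion:

```shell
# Convert a macOS IOPCIPrimaryMatch entry (0xDDDDVVVV) into lspci's
# vendor:device notation (VVVV:DDDD).
id=0x10f68086
dev=${id#0x}; dev=${dev%????}   # high 16 bits: device ID (10f6)
ven=${id#0x????}                # low 16 bits: vendor ID (8086)
echo "$ven:$dev"                # prints 8086:10f6
```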
First make sure the hostpci entry for the network card in the VM’s config is the one with the highest index, then run "qm showcmd <your VM ID> --pretty". Find the two lines that define that card:
-device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' \
-device 'vfio-pci,host=0000:0a:00.0,id=hostpci5,bus=ich9-pcie-port-6,addr=0x0' \
Copy those two lines, remove the trailing backslashes, combine them into one line, and add them to the end of your “args” line. Now we can edit the second -device to add the fake PCI IDs (the new text is the three x-pci-* properties at the end):
-device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' -device 'vfio-pci,host=0000:0a:00.0,id=hostpci5,bus=ich9-pcie-port-6,addr=0x0,x-pci-device-id=0x10f6,x-pci-sub-vendor-id=0x0000,x-pci-sub-device-id=0x0000'
Now comment out the “hostpci5” line for the network card, since we’re manually defining it through the args instead. Now macOS should see this card’s ID as if it was one of the supported cards, and everything works!
- I need to set this to “none”, since otherwise the crappy emulated video card would become the primary video adapter, and I only want my passthrough card to be active.
- This is a new feature in Proxmox 5.4 that allows a script to be run at various points in the VM lifecycle.
- In recent kernel versions, some devices like my USB controllers are grabbed by the host kernel very early during boot, before vfio can claim them. This means that I need to manually release those devices in order to start the VM. I created /var/lib/vz/snippets/hackintosh.sh with this content (and marked it executable with chmod +x):
#!/bin/bash

if [ "$2" == "pre-start" ]
then
    # First release devices from their current driver (by their PCI bus IDs)
    echo 0000:00:1d.0 > /sys/bus/pci/devices/0000:00:1d.0/driver/unbind
    echo 0000:00:1a.0 > /sys/bus/pci/devices/0000:00:1a.0/driver/unbind
    echo 0000:81:00.0 > /sys/bus/pci/devices/0000:81:00.0/driver/unbind
    echo 0000:82:00.0 > /sys/bus/pci/devices/0000:82:00.0/driver/unbind
    echo 0000:0a:00.0 > /sys/bus/pci/devices/0000:0a:00.0/driver/unbind

    # Then attach them by ID to VFIO
    echo 8086 1d2d > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 8086 1d26 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 1b73 1100 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 144d a802 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 8086 10d3 > /sys/bus/pci/drivers/vfio-pci/new_id
fi
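For reference, Proxmox invokes a hookscript with the VM ID as the first argument and the lifecycle phase as the second, which is why the script tests "$2". A minimal sketch of that dispatch, with echo stubs standing in for the real unbind commands:

```shell
#!/bin/sh
# Proxmox calls the hookscript as: <script> <vmid> <phase>, where phase is
# one of pre-start, post-start, pre-stop, post-stop.
hook() {
  vmid="$1"
  phase="$2"
  case "$phase" in
    pre-start) echo "releasing passthrough devices for VM $vmid" ;;
    post-stop) echo "VM $vmid has stopped" ;;
    *)
      # nothing to do in the other phases
      ;;
  esac
}

hook 100 pre-start
```

The script is attached to the VM with "qm set 100 --hookscript local:snippets/hackintosh.sh", where "local:snippets/..." is Proxmox storage notation for /var/lib/vz/snippets/.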
Guest file storage
The macOS VM’s primary storage is the passthrough Samsung 950 Pro 512GB NVMe SSD, which can be installed onto and used in Big Sur with no issues. TRIM is supported and enabled automatically.
For secondary storage, my Proxmox host exports a number of directories over the AFP network protocol using netatalk.
Debian Stretch’s version of the netatalk package is seriously out of date (and I’ve had file corruption issues with old versions), so I installed netatalk onto Proxmox from source instead following these directions:
My configure command ended up being:
./configure --with-init-style=debian-systemd --without-libevent --without-tdb --with-cracklib --enable-krbV-uam --with-pam-confdir=/etc/pam.d --with-dbus-daemon=/usr/bin/dbus-daemon --with-dbus-sysconf-dir=/etc/dbus-1/system.d --with-tracker-pkgconfig-version=1.0
Netatalk is configured in /usr/local/etc/afp.conf like so:
; Netatalk 3.x configuration file

[Global]

[Downloads]
path = /tank/downloads
rwlist = nick ; List of usernames with rw permissions on this share

[LinuxISOs]
path = /tank/isos
rwlist = nick
When connecting to the fileshare from macOS, you connect with a URL like “afp://proxmox”, then specify the name and password of the unix user you’re authenticating as (here, “nick”), and that user’s account will be used for all file permissions checks.
Proxmox 6’s prebuilt version of Netatalk is good now, so I backed up my afp.conf, removed my old version that was installed from source (with “make uninstall”, note that this erases afp.conf!), and apt-installed the netatalk package instead. The configuration is now found at /etc/netatalk/afp.conf.
Passthrough of PCIe devices requires a bit of configuration on Proxmox’s side, much of which is described in their manual. Here’s what I ended up with:
/etc/default/grub (when booting with GRUB)

...
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on rootdelay=10"
...
Note that Proxmox 6 will be booted using systemd-boot rather than GRUB if you are using a ZFS root volume and booting using UEFI. If you’re using systemd-boot you need to create this file instead:
/etc/kernel/cmdline (Proxmox 6 when using systemd-boot)
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on rootdelay=10
/etc/modules

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist snd_hda_codec_hdmi
blacklist snd_hda_intel
blacklist snd_hda_codec
blacklist snd_hda_core
blacklist radeon
blacklist amdgpu
options kvm ignore_msrs=Y
# Nested VM support (not used by macOS)
options kvm-intel nested=Y

# Note that adding disable_vga here will probably prevent guests from booting in SeaBIOS mode
options vfio-pci ids=144d:a802,8086:1d2d,8086:1d26,10de:1c03,10de:10f1,10de:1380,1b73:1100,1002:6798,1002:aaa0 disable_vga=1
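The ids= list is just the comma-joined vendor:device pairs that "lspci -nn" prints in square brackets for the devices you want to pass through. A sketch that assembles it (the sample lspci output here is made up for illustration):

```shell
# Build a vfio-pci ids= list from lspci -nn style output. On a real host you
# would pipe in `lspci -nn` filtered to the devices you want to pass through.
sample='03:00.0 VGA compatible controller [0300]: AMD Ellesmere [1002:67df]
03:00.1 Audio device [0403]: AMD Ellesmere HDMI Audio [1002:aaf0]'

ids=$(printf '%s\n' "$sample" \
  | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' \
  | tr -d '[]' \
  | paste -sd, -)
echo "options vfio-pci ids=$ids"
```

Note that class codes like [0300] don't match the pattern because they lack the vendor:device colon, so only the ID pairs survive.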
After editing those files you typically need to run update-grub (or pve-efiboot-tool refresh if you are using systemd-boot on Proxmox 6), then update-initramfs -k all -u, then reboot Proxmox.
In the UEFI settings of my host system I had to set my onboard video card as my primary video adapter. Otherwise, the VBIOS of my discrete video cards would get molested by the host during boot, rendering them unusable for guests (this is especially a problem if your host boots in BIOS compatibility mode instead of UEFI mode).
One way to avoid needing to change this setting (e.g. if you only have one video card in your system!) is to dump the unmolested VBIOS of the card while it is attached to the host as a secondary card, then store a copy of the VBIOS as a file in Proxmox’s /usr/share/kvm directory, and provide it to the VM by using a “romfile” option like so (the filename here is just an example):

hostpci0: 03:00,pcie=1,x-vga=on,romfile=rx580-dump.rom
Or if you don’t have a spare discrete GPU to do the dump with yourself, you may be able to find a VBIOS image that somebody else has dumped online. However, I have never tried this approach myself.
In Catalina, I have system sleep turned off in the power saving options (because I had too many instances where it went to sleep and never woke up again).
Launching the VM
I found that when I assign obscene amounts of RAM to the VM, it takes a long time for Proxmox to allocate the memory for it, causing a timeout during VM launch:
start failed: command '/usr/bin/kvm -id 100 ...' failed: got timeout
You can instead avoid Proxmox’s timeout system entirely by running the VM like:
qm showcmd 100 | bash
Another RAM problem comes if my host has done a ton of disk IO. This causes ZFS’s ARC (disk cache) to grow in size, and it seems like the ARC is not automatically released when that memory is needed to start a VM (maybe this is an interaction with the hugepages feature). So the VM will complain that it’s out of memory on launch even though there is plenty of memory marked as “cache” available.
You can clear this read-cache and make the RAM available again by running:
echo 3 > /proc/sys/vm/drop_caches
My host supports 1GB hugepages:
# grep pdpe1gb /proc/cpuinfo
... nx pdpe1gb rdtsc ...
So I added this to the end of my /etc/kernel/cmdline to statically allocate 40GB of 1GB hugepages on boot:
default_hugepagesz=1G hugepagesz=1G hugepages=40
After running update-initramfs -u and rebooting, I can see that those pages have been successfully allocated:
# grep Huge /proc/meminfo
Hugepagesize: 1048576 kB
Hugetlb: 41943040 kB
It turns out that those hugepages are evenly split between my two NUMA nodes (20GB per CPU socket), so I have to set “numa: 1” in the VM config.
The final step is to add “hugepages: 1024” to the VM’s config file so it will use those 1024MB hugepages. Otherwise it’ll ignore them completely and continue to allocate from the general RAM pool, which will immediately run me out of RAM and cause the VM launch to fail. Other VMs won’t be able to use that memory, even if macOS isn’t running, unless they’re also configured to use 1GB hugepages.
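The arithmetic behind those numbers, as a quick sketch: the VM's "memory: 40960" (in MB) divided by the 1024MB page size gives the 40 pages reserved on the kernel command line, split evenly across my two NUMA nodes:

```shell
# Check that the static hugepage reservation matches the VM's memory size.
vm_memory_mb=40960   # "memory:" value from the VM config
page_size_mb=1024    # 1GB hugepages
numa_nodes=2

pages=$((vm_memory_mb / page_size_mb))
per_node=$((pages / numa_nodes))
echo "hugepages=$pages ($per_node per NUMA node)"   # prints hugepages=40 (20 per NUMA node)
```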
You can also add “+pdpe1gb” to the list of the VM’s CPU features so that the guest kernel can use 1GB pages internally for its own allocations, but I doubt that macOS takes advantage of this feature.
Upgraded to 1GB hugepages. Proxmox 6.1 and macOS 10.15.2.
My VM was configured with a passthrough video card, and the config file also had “vga: std” in it. Normally if there is a passthrough card enabled, Proxmox disables the emulated VGA adapter, so this was equivalent to “vga: none”. However after upgrading pve-manager to 5.3-12, I found that the emulated vga adapter was re-enabled, so OpenCore ended up displaying on the emulated console, and both of my hardware monitors became “secondary” monitors in macOS. To fix this I needed to explicitly set “vga: none” in the VM configuration.
Added “hookscript” to take advantage of new Proxmox 5.4 features
Did an in-place upgrade to Proxmox 6 today!
Proxmox 6 now includes an up-to-date version of the netatalk package, so I use the prebuilt version instead of building it from source. Don’t forget to install my new patched pve-edk2-firmware package.
I upgraded to macOS 10.15.4, but even after updating Clover, Lilu and WhateverGreen, my R9 280X only gives me a black screen, although the system does boot and I can access it using SSH. Looking at the boot log I can see a GPU restart is attempted, but fails. Will update with a solution if I find one.
EDIT: For the moment I built an offline installer ISO using a Catalina 10.15.3 installer I had lying around, and used that to reinstall macOS. Now I’m back on 10.15.3 and all my data is exactly where I left it, very painless. After the COVID lockdown I will probably try to replace my R9 280X with a better-supported RX 570 and try again.
Updated to OpenCore instead of Clover. Still running 10.15.3. pve-edk2-firmware no longer needs to be patched when using OpenCore!
Replaced my graphics card with an RX 580 to get 10.15.5 compatibility, and I’m back up and running now. Using a new method for fixing my passthrough network adapter’s driver which avoids patching Catalina’s drivers.
Updated to my OpenCore v10 release, then upgraded to Big Sur, no issues!