My macOS Mojave / Proxmox setup

I thought it might be helpful for people following my guide for installing macOS Mojave on Proxmox if I described my setup and how I’m using macOS.

Proxmox hardware specs

  • Motherboard: Asrock EP2C602
  • RAM: 64GB
  • CPU: 2 x Intel E5-2670 for a total of 16 cores / 32 threads
  • Storage
    • Samsung 950 Pro 512GB NVMe SSD for macOS
    • 30TB of spinning disks in various ZFS configurations
    • 1TB SATA SSD for Proxmox’s root device
  • Graphics
    • EVGA GeForce GTX 1060 6GB
    • EVGA GeForce GTX 750 Ti
    • AMD Radeon R9 280X (HD 7970 / Tahiti XTL)
  • IO
    • 2x onboard Intel C600 USB 2 controllers
    • Inateck USB 3 PCIe card (Fresco Logic FL1100 chipset)
    • 2x onboard Intel 82574L gigabit network ports
  • Case
    • Lian Li PC-X2000F full tower (sadly, long discontinued!)
    • Lian Li EX-H35 HDD Hot Swap Module (to add 3 x 3.5″ drive bays into 3 of the 4 x 5.25″ drive mounts), with Lian Li BZ-503B filter door, and Lian Li BP3SATA hot swap backplane. Note that because of the sideways-mounted 5.25″ design on this case, the door will fit flush with the left side of the case, while the unfiltered exhaust fan sits some 5-10mm proud of the right side of the case.
  • CPU cooler
    • 2 x Noctua NH-U14S coolers
  • Power
    • EVGA SuperNOVA 750 G2 750W

My Proxmox machine is my desktop computer, so I pass most of this hardware straight through to the macOS Mojave VM that I use as my daily-driver machine. I pass through both USB 2 controllers, the USB 3 controller, the NVMe SSD, and one of the gigabit network ports, plus the R9 280X graphics card.

Attached to the USB controllers I pass through to macOS are a Bluetooth adapter, keyboard, Logitech trackball dongle, and a DragonFly Black USB DAC for audio (the motherboard has no audio onboard).

Once macOS boots, this leaves no USB ports dedicated to the host, so no keyboard for the host! I normally manage my Proxmox host from a guest, or if all of my guests are down I use SSH from a laptop or smartphone instead (JuiceSSH on Android works nicely for running qm start 100 to boot up my macOS VM if I accidentally shut it down).

On High Sierra, I used the GTX 750 Ti, and later the GTX 1060, to drive two displays (one of them 4K@60Hz over DisplayPort), and both worked flawlessly. However, NVIDIA has not yet released Mojave-compatible graphics drivers, so for now I’m back on my old R9 280X, which macOS supports out of the box.

R9 280X being unnecessarily nosy

This Radeon card is not stable for the guest: I get intermittent flashes of video corruption on parts of the screen. It’s not stable for the host either, triggering DMAR warnings that suggest it tries to read memory it doesn’t own, and causing random host lockups the second time a VM that uses it is booted. I’m looking forward to those new NVIDIA drivers so that I can stop using this card!

Take note that if your video card does not support booting using UEFI, you’ll be in a world of pain due to VGA bus arbitration conflicts with the host. Although it is possible to patch the VBIOS of some old video cards to support UEFI, you will save a lot of blood, sweat and tears by just buying a newer video card!

This motherboard also has a third SATA controller, a Marvell SE9230, but enabling it in the ASRock UEFI setup causes it to throw a ton of DMAR errors and kill the host, so avoid using it.

What I use it for

I’m using my Mojave VM for developing software (IntelliJ / Xcode), watching videos (YouTube / mpv), playing music, editing photos with Photoshop and Lightroom, editing video with DaVinci Resolve, buying apps on the App Store, syncing data with iCloud, and more. That all works trouble-free. I don’t use any of the Apple apps that are known to be troublesome on Hackintosh (iMessage etc.), so I’m not sure whether those are working or not.

VM configuration

Here’s my VM’s Proxmox configuration, with discussion to follow:

args: -device isa-applesmc,osk="..." -smbios type=2 -cpu Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check -smp 32,sockets=2,cores=8,threads=2
balloon: 0
bios: ovmf
boot: d
cores: 16
cpu: Penryn
efidisk0: vms:vm-100-disk-1,size=128K
hostpci0: 03:00,pcie=1,x-vga=on
hostpci1: 00:1a.0,pcie=1
hostpci2: 00:1d.0,pcie=1
hostpci3: 82:00.0,pcie=1
hostpci4: 81:00.0,pcie=1
hostpci5: 0b:00.0,pcie=1
machine: pc-q35-2.12
memory: 40960
name: Mojave-Desktop
net0: e1000-82545em=2B:F9:52:54:FE:8A,bridge=vmbr0
numa: 1
onboot: 1
ostype: other
scsihw: virtio-scsi-pci
smbios1: uuid=42c28f01-4b4e-4ef8-97ac-80dea43c0bcb
sockets: 2
tablet: 0
vga: none
hookscript: local:snippets/hackintosh.sh
args
My CPU masquerades as a Penryn (which appears to be a requirement for macOS to boot), but Mojave also requires CPU features that were first introduced in the subsequent Nehalem CPU generation, so those need to be added (ssse3, sse4.2, and popcnt). On top of that, I’m passing through some more features that my CPU supports and that can speed up macOS (AVX, AES-NI, etc.). You can use cat /proc/cpuinfo on Proxmox to check which features your CPU supports.
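For example, here’s a one-liner sketch that filters the flag list down to the features mentioned above (flag names are spelled as they appear in /proc/cpuinfo, so sse4.2 shows up as sse4_2):

# Print the host's CPU flags one per line, keeping only the interesting ones
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -Ex 'ssse3|sse4_2|popcnt|avx|aes|pcid|xsave|xsaveopt'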
macOS refuses to boot on my machine if I pass certain numbers of cores through to it, so I ended up having to pass all 32 threads through to the VM instead of the 24 I intended. Proxmox’s configuration format doesn’t natively support setting a thread count, so I had to add my topology manually here by adding “-smp 32,sockets=2,cores=8,threads=2”.
hostpci0-5
I’m passing through 6 PCIe devices, which Proxmox doesn’t support natively (it normally maxes out at 4), so I had to patch Proxmox to add support. From first to last I have my graphics card, two USB 2 controllers, my NVMe storage, a USB 3 controller, and one gigabit network card.
memory
40 gigabytes, baby!
net0
I usually have this emulated network card disabled in Mojave’s network settings, and use my passthrough Intel 82574L instead.
Although Mojave has a driver for the Intel 82574L, the driver doesn’t match the exact PCI ID of the card I have, so the driver doesn’t load and the card remains inactive. Luckily you can edit the driver to fix this. First check the PCI ID of the network card in Proxmox:
# lspci -nn | grep 82574L

0b:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]

The PCI ID here is 8086:10d3. Now edit /System/Library/Extensions/IONetworkingFamily.kext/Contents/PlugIns/Intel82574L.kext/Contents/Info.plist. Find the section which defines IOPCIPrimaryMatch and IOPCISecondaryMatch:

<key>IOPCIPrimaryMatch</key>
<string>0x104b8086 0x10f68086</string>
<key>IOPCISecondaryMatch</key>
<string>0x00008086 0x00000000</string>

Those define the hardware that the driver will be loaded for. Remove those lines and replace them with a new “IOPCIMatch” section that has your PCI ID in it:

<key>IOPCIMatch</key>
<string>0x10d38086</string>

Note that in this format the last part of the PCI ID (10d3) comes first, followed by the first part (8086). After rebooting macOS, the network driver will consider your card compatible and load for it. Be aware that changing your network card can break software that relies on your MAC address staying the same for license checking / DRM.
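If you’d rather script this edit than make it by hand, a sketch like the following works with macOS’s built-in PlistBuddy tool (the “Intel82574L” personality key name here is an assumption, so inspect your Info.plist with plutil -p first; SIP also needs to be disabled for /System to be writable):

PLIST=/System/Library/Extensions/IONetworkingFamily.kext/Contents/PlugIns/Intel82574L.kext/Contents/Info.plist

# Swap the old match keys for an IOPCIMatch entry containing our card's PCI ID
sudo /usr/libexec/PlistBuddy -c "Delete :IOKitPersonalities:Intel82574L:IOPCIPrimaryMatch" "$PLIST"
sudo /usr/libexec/PlistBuddy -c "Delete :IOKitPersonalities:Intel82574L:IOPCISecondaryMatch" "$PLIST"
sudo /usr/libexec/PlistBuddy -c "Add :IOKitPersonalities:Intel82574L:IOPCIMatch string 0x10d38086" "$PLIST"

# Rebuild the kext cache so the change takes effect on the next boot
sudo kextcache -i /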

vga
I need to set this to “none”, since otherwise the crappy emulated video card would become the primary video adapter, and I only want my passthrough card to be active.
hookscript
This is a new feature in Proxmox 5.4 that allows a script to be run at various points in the VM lifecycle.
In recent kernel versions, some devices like my USB controllers are grabbed by the host kernel very early during boot, before vfio can claim them. This means that I need to manually release those devices in order to start the VM. I created /var/lib/vz/snippets/hackintosh.sh with this content (and marked it executable with chmod +x): 
#!/usr/bin/env bash

if [ "$2" == "pre-start" ]
then
# First release devices from their current driver (by their PCI bus IDs)
echo 0000:00:1d.0 > /sys/bus/pci/devices/0000:00:1d.0/driver/unbind
echo 0000:00:1a.0 > /sys/bus/pci/devices/0000:00:1a.0/driver/unbind
echo 0000:81:00.0 > /sys/bus/pci/devices/0000:81:00.0/driver/unbind
echo 0000:82:00.0 > /sys/bus/pci/devices/0000:82:00.0/driver/unbind
echo 0000:0a:00.0 > /sys/bus/pci/devices/0000:0a:00.0/driver/unbind

# Then attach them by ID to VFIO
echo 8086 1d2d > /sys/bus/pci/drivers/vfio-pci/new_id
echo 8086 1d26 > /sys/bus/pci/drivers/vfio-pci/new_id
echo 1b73 1100 > /sys/bus/pci/drivers/vfio-pci/new_id
echo 144d a802 > /sys/bus/pci/drivers/vfio-pci/new_id
echo 8086 10d3 > /sys/bus/pci/drivers/vfio-pci/new_id
fi
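Proxmox also needs to be told to use the script; the “hookscript” line in my VM configuration above can be set like this (assuming the VM ID is 100):

qm set 100 --hookscript local:snippets/hackintosh.sh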

Guest file storage

The macOS VM’s primary storage is the passthrough Samsung 950 Pro 512GB NVMe SSD, which can be installed onto and used in Mojave with no issues. TRIM is supported and enabled automatically.

For secondary storage, my Proxmox host exports a number of directories over the AFP network protocol using netatalk. I installed netatalk onto Proxmox from source following these directions:

http://netatalk.sourceforge.net/wiki/index.php/Install_Netatalk_3.1.11_on_Debian_9_Stretch

My configure command ended up being:

./configure --with-init-style=debian-systemd --without-libevent --without-tdb --with-cracklib --enable-krbV-uam --with-pam-confdir=/etc/pam.d --with-dbus-daemon=/usr/bin/dbus-daemon --with-dbus-sysconf-dir=/etc/dbus-1/system.d --with-tracker-pkgconfig-version=1.0
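From there the install follows the usual routine from those directions; a sketch, run from the netatalk source tree (the “netatalk” unit name comes from the debian-systemd init style, but check what make install actually created):

make
make install
systemctl enable netatalk
systemctl start netatalk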

Netatalk is configured in /usr/local/etc/afp.conf like so:

; Netatalk 3.x configuration file

[Global]

[Downloads]
path = /tank/downloads
rwlist = nick ; List of usernames with rw permissions on this share

[LinuxISOs]
path = /tank/isos
rwlist = nick

To connect to the file share from macOS, use a URL like “afp://proxmox”, then specify the name and password of the unix user you’re authenticating as (here, “nick”); that user’s account will be used for all file permission checks.
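You can also mount a share from the macOS Terminal instead of Finder’s “Connect to Server” dialog; a sketch, with the hostname, share, and mount point as placeholders:

mkdir -p ~/downloads-share
mount_afp -i "afp://nick@proxmox/Downloads" ~/downloads-share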

Proxmox configuration

Passthrough of PCIe devices requires a bit of configuration on Proxmox’s side, much of which is described in their manual. Here’s what I ended up with:

/etc/default/grub

...
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on rootdelay=10"
...

/etc/modules

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

/etc/modprobe.d/blacklist.conf

blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist snd_hda_codec_hdmi
blacklist snd_hda_intel
blacklist snd_hda_codec
blacklist snd_hda_core
blacklist radeon
blacklist amdgpu

/etc/modprobe.d/kvm.conf

options kvm ignore_msrs=Y

/etc/modprobe.d/kvm-intel.conf

# Nested VM support (not used by macOS)
options kvm-intel nested=Y

/etc/modprobe.d/vfio-pci.conf

options vfio-pci ids=144d:a802,8086:1d2d,8086:1d26,10de:1c03,10de:10f1,10de:1380,1b73:1100,1002:6798,1002:aaa0 disable_vga=1
# Note that adding disable_vga here will probably prevent guests from booting in SeaBIOS mode
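The vendor:device pairs in that ids= list are the bracketed IDs printed by lspci -nn; for example, to collect several of them at once (the grep patterns are just examples matching my hardware):

lspci -nn | grep -Ei 'nvidia|radeon|samsung|fresco|82574L'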

After editing those files you typically need to run “update-grub” and “update-initramfs -k all -u”, then reboot Proxmox.

Host configuration

In the UEFI settings of my host system I had to set my onboard video card as my primary video adapter. Otherwise, the VBIOS of my discrete video cards would get molested by the host during boot, rendering them unusable for guests (this is especially a problem if your host boots in BIOS compatibility mode instead of UEFI mode).

I’ve heard that one way to avoid needing to change this setting (e.g. if you only have one video card in your system!) is to dump the unmolested VBIOS of the card while it is attached to the host as a secondary card, then provide that copy of the VBIOS as a file to Proxmox using a “romfile” option like so:

hostpci0: 01:00,x-vga=on,romfile=/root/my-vbios.bin

Or if you don’t have a spare discrete GPU of your own to achieve this, you can find somebody else who has done this online. However, I have never tried this approach myself.
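For reference, dumping a card’s ROM through sysfs looks like this (a sketch; substitute your card’s PCI address, and the card must be one that the host didn’t initialise during boot):

cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /root/my-vbios.bin
echo 0 > rom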

Guest configuration

In Mojave, I have system sleep turned off in the power saving options (because I had too many instances where it went to sleep and never woke up again).

Drive encryption did not work the last time I tried it, seemingly because UEFI keyboard drivers were missing that would allow password entry on the boot screen. Drive encryption support is mentioned in Clover changelogs frequently, so it may already be working by the time you read this.

I used Clover Configurator to set my SMBIOS to iMac 14,2 before I tried passing through any hardware.

Launching the VM

I found that when I assign obscene amounts of RAM to the VM, it takes a long time for Proxmox to allocate the memory for it, causing a timeout during VM launch:

start failed: command '/usr/bin/kvm -id 100 ...' failed: got timeout

You can instead avoid Proxmox’s timeout system entirely by running the VM like:

qm showcmd 100 | bash

Notes

2019-03-29

My VM was configured with a passthrough video card, and the config file also had “vga: std” in it. Normally if there is a passthrough card enabled, Proxmox disables the emulated VGA adapter, so this was equivalent to “vga: none”. However after upgrading pve-manager to 5.3-12, I found that the emulated vga adapter was re-enabled, so Clover ended up displaying on the emulated console, and both of my hardware monitors became “secondary” monitors in macOS. To fix this I needed to explicitly set “vga: none” in the VM configuration.

2019-04-12

Added “hookscript” to take advantage of new Proxmox 5.4 features

58 thoughts on “My macOS Mojave / Proxmox setup”

  1. Until Proxmox ships new .deb files with patched Perl code for exposing fake SSD virtual disks to the macOS VM (needed for restoring a Time Machine backup to a disk), here is how I did it.
    The important part is rotation_rate, and it has to be added to args: in vmid.conf

    -drive file=/dev/path-to-volumegroup/logicalvolume,if=none,id=drive-ide0,cache=unsafe,format=raw,aio=threads,detect-zeroes=on -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,rotation_rate=1'

    1. Not sure if it’s from a new update, but I’m seeing the option for “SSD emulation” (check Advanced for the drive). macOS then shows it as Solid State.

  2. Hi,
    I don’t understand why my GPU doesn’t show any signal…
    in: /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

    in: /etc/modules
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd

    run: lspci -n -s 01:00
    01:00.0 0300: 10de:0fc6 (rev a1)
    01:00.1 0403: 10de:0e1b (rev a1)

    in: /etc/modprobe.d/vfio.conf
    options vfio-pci ids=10de:0fc6,10de:0e1b disable_vga=1

    run: dmesg | grep ecap
    [ 0.004000] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 7e3ff0505e
    [ 0.004000] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
    [ 84.780920] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900

    run: find /sys/kernel/iommu_groups/ -type l
    […]
    /sys/kernel/iommu_groups/1/devices/0000:00:01.0
    /sys/kernel/iommu_groups/1/devices/0000:01:00.0
    /sys/kernel/iommu_groups/1/devices/0000:01:00.1
    […]

    in the VM config file:
    hostpci0: 01:00,pcie=1,x-vga=on

    echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf

    What did I miss?

    1. What’s that “00:01.0” device that’s also in the same group as your card?

      Does your card have a UEFI BIOS? I think that’s required since the guest boots in UEFI mode. Sometimes there are patched BIOS images available on the internet that you can supply with the “romfile” argument to “hostpci0” to use.

      You can try creating a new VM in SeaBIOS mode and see if the card turns on there.

      1. Hi,

        “00:01.0” is just one part of the output of that command; I don’t know what it is.

        To test EFI on the GPU:
        run: cat rom > /tmp/image.rom
        get: cat: rom: Input/output error

        I found this: https://www.techpowerup.com/vgabios/153654/gigabyte-gtx650-2048-131211
        and used: ./rom-fixer ../Gigabyte.GTX650.2048.131211.rom

        When starting Win10 with SeaBIOS:
        kvm: -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: vfio error: 0000:01:00.0: failed getting region info for VGA region index 8: Invalid argument
        device does not support requested feature x-vga

        Is this because Proxmox is still using it? If yes, why?
        Thanks

        1. Because you have “disable_vga=1” in your vfio-pci options, your passthrough card will no longer be available for use in SeaBIOS guests (these require VGA support).

          You can remove this setting to make the card compatible with both BIOS and UEFI guests, but you will have a greater chance that booting a guest locks up the host due to VGA arbitration conflicts. It’s better to install Windows using UEFI.

          Run “lspci -s 00:01.0” to see what that device is; I guess it’s a PCIe root port or something. You may need to explicitly pass it through to the guest by adding another hostpci line, since everything in the same IOMMU group has to be passed through at once. Edit: apparently it doesn’t need to be explicitly passed through: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF

          1. lspci -s 00:01.0
            00:01.0 PCI bridge: Intel Corporation Skylake PCIe Controller (x16) (rev 07)

            I removed “disable_vga=1”; it still doesn’t start.
            I will try installing Windows using UEFI (so put “disable_vga=1” back in the vfio-pci options?).

            I don’t know what I’m doing exactly, so it’s hard to follow you 🙁

            This is too hard for me; I don’t see the problem and don’t understand this. I will stay with the bad graphics on the macOS Mojave VM and control it through the Proxmox web GUI 🙁

            Thanks a lot for your time and help.

          2. What error do you get when booting the Windows installer with UEFI? Yeah, you want “disable_vga=1” if all your guests will be UEFI.

            You can have Proxmox use your patched VBIOS like so:

            hostpci0: 01:00,x-vga=on,romfile=/tmp/Gigabyte.GTX650.2048.131211.rom

          3. Hi,

            So I installed Win10 with UEFI, used “disable_vga=1”, and set “hostpci0: 01:00,x-vga=on,romfile=Gigabyte.GTX650.2048.131211.rom” in the VM conf (I copied the ROM into /usr/share/kvm/),
            and finally ran “update-grub” and “update-initramfs -k all -u”.

            It starts with no errors in the logs (TASK OK),
            but there is no output signal.
            Something is strange here: according to this (http://vfio.blogspot.com/2014/08/does-my-graphics-card-rom-support-efi.html) we can check whether our card is valid for EFI boot, right?

            When I run
            # cd /sys/bus/pci/devices/0000:01:00.0/
            # echo 1 > rom
            # cat rom > /tmp/image.rom
            I get >>> cat: rom: Input/output error
            # echo 0 > rom

            (I have enabled the internal graphics to see the Proxmox boot.)

            My card should be the second card and not be used by Proxmox, right?
            (I will try moving the card to another PCI slot this evening.)
            Thanks

          4. So, I moved the card to another PCI slot:
            lspci -n -s 02:00
            02:00.0 0300: 10de:0fc6 (rev a1)
            02:00.1 0403: 10de:0e1b (rev a1)

            and changed the VM config:
            hostpci0: 02:00,pcie=1,x-vga=on,romfile=Gigabyte.GTX650.2048.131211.rom
            > No signal output either

    1. Yes, I have tested with 2 HDMI cables and monitors.
      In your case, when you do GPU passthrough, can you see the console output of your VM in the Proxmox web GUI?

      1. Can you double check that vfio-pci is getting the card correctly? `lspci -nn -k | grep -i -A 2 nvidia` should say “Kernel driver in use: vfio-pci”.

        1. lspci -nn -k | grep -i -A 2 nvidia
          02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GTX 650] [10de:0fc6] (rev a1)
          Subsystem: Gigabyte Technology Co., Ltd GK107 [GeForce GTX 650] [1458:3555]
          Kernel driver in use: vfio-pci
          Kernel modules: nvidiafb, nouveau
          02:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b] (rev a1)
          Subsystem: Gigabyte Technology Co., Ltd GK107 HDMI Audio Controller [1458:3555]
          Kernel driver in use: vfio-pci

          1. That looks good, but it shows that there are Nvidia drivers available that you aren’t blacklisting, and these could be wrecking your GPU during host boot. Update your /etc/modprobe.d/blacklist.conf to be:

            blacklist nvidia
            blacklist nvidiafb
            blacklist nouveau

            Then `update-initramfs -k all -u` and reboot the host.

            You may also need to blacklist the driver for the video card’s audio controller (one of snd_hda_codec_hdmi, snd_hda_intel, snd_hda_codec, snd_hda_core probably, look at the full output of “lspci -nn -k” to check).

  3. Content of “/etc/modprobe.d/blacklist.conf” :
    blacklist nouveau
    blacklist nvidia
    blacklist nvidiafb
    blacklist snd_hda_codec_hdmi
    blacklist snd_hda_intel
    blacklist snd_hda_codec
    blacklist snd_hda_core
    blacklist radeon
    blacklist amdgpu

    What I can see from “lspci -nn -k” :
    02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GTX 650] [10de:0fc6] (rev a1)
    Subsystem: Gigabyte Technology Co., Ltd GK107 [GeForce GTX 650] [1458:3555]
    Kernel driver in use: vfio-pci
    Kernel modules: nvidiafb, nouveau
    02:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b] (rev a1)
    Subsystem: Gigabyte Technology Co., Ltd GK107 HDMI Audio Controller [1458:3555]
    Kernel driver in use: vfio-pci
    Kernel modules: snd_hda_intel

    I’m not home, so as you said, “adding passthrough video disables the console view entirely”…
    I will check it this evening, thanks.

      1. Oh, I forgot one more thing: make sure that your host UEFI/BIOS is set to make your onboard video primary instead of the card you want to pass through.

        If that still doesn’t work then I’d expect it to be due to the card’s BIOS not supporting UEFI properly.

        1. Hi, yes I used the onboard video for the Proxmox boot. I also tested booting from the GPU card (I saw Proxmox boot) and then starting the VM (immediate black screen). So I imagine Proxmox dedicates the card to the VM, but there is no signal from it.
          As you said, maybe my GPU cannot use UEFI.

  4. Great tutorial(s)!

    How much is your systems power consumption?

    My Dell T20 setup (Xeon with 16 GB RAM) consumes < 45W with 4 x SSDs, 4 x 2.5" 1TB HDDs, 1 USB 3.0 PCIe card, 1 Nvidia GT710 (works great with passthrough) and 1 PERC H310.
    But I need better performance and more processor cores, so I am looking for a good replacement with low overall power consumption.

    1. Good idea, I’ll do some current measurements with a clamp meter, let’s see…

      When my host is completely idle (no VMs), it sits at 180W. Adding an idle macOS VM brings that up to 220W.

      Running a Prime95 stress test on the host brings it to 500W peak.

      I’ve got 9×3.5″ hard drives, 2 SSDs, and 8 sticks of DDR3, so they probably account for a lot of the idle power consumption. It certainly keeps my desk toasty warm at all times!

      I suspect if you want something power efficient then you should stay away from multiple-socket systems like mine.

  5. Hi, it’s me again 😉
    You said you are using a “DragonFly Black USB DAC for audio”. In the future, I want a KVM switch to move my mouse, keyboard, and HDMI output between the Windows VM and the macOS VM.
    So I passed through the USB keyboard and mouse (and, with your help, a GPU, thanks a lot) and all is working great. But I need some sound, so I passed through my USB headset (Logitech G35), and the sound output is very strange: it’s like someone clicking pause and play very fast (crackles). Do you know how to solve this? (Google was not very helpful; I didn’t find anything useful.)
    I saw something about “args: -device AC97,addr=0x18” but it’s not helping (maybe that’s for the motherboard’s jack sound card, but that’s not my goal). If you think the driver is simply missing (because I cannot find a macOS driver for the G35), that will make me sad :(.

    Thanks again for all your help

    1. I’m passing through an entire USB controller using PCIe passthrough; I think you’ll end up with timing issues if you use the emulated soundcard like you’re showing instead.

      USB audio devices should work without a specialised driver, there’s a USB audio standard that they conform to (similar to USB Mass Storage for USB drives).

      Another option, if your video card driver is cooperative and your monitor supports it, is to send your audio out over HDMI and have your monitor decode it to a headphone jack on the monitor. I had some trouble doing that in the past, but it helped to add Lilu and WhateverGreen to my Clover kexts/Other directory (and edit config.plist to enable kext injection): https://github.com/acidanthera/WhateverGreen

        1. You might get enough performance by just passing through your physical drives as virtual disks instead:

          https://pve.proxmox.com/wiki/Physical_disk_to_kvm

          Instead of the “virtio2” suggested there, you’ll want to use SATA, like “sata1”.

          You can indeed use PCIe passthrough to pass through a SATA controller like the one you linked, but I haven’t tried that particular model out myself.
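          For example, a sketch based on that wiki page (the VM ID and disk ID are placeholders):

          qm set 100 -sata1 /dev/disk/by-id/ata-YOUR_DISK_ID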

          1. OK (I need a RAID 1), but I will pass through real drives like you do. Thanks for the advice and your time!

  6. After hours of tinkering to get GPU passthrough working on Mojave/Proxmox 5.3 (I’m new to the VE), I finally got vfio-pci to show as the kernel driver. Sadly, Mojave won’t load my R7-265 driver. This is what I get from “info pci” in the monitor with the guest running, so I’m guessing something is awry 🙂 Thanks for the guide though; I might try High Sierra since it has proper GTX 1xxx support.

    # info pci
    Bus 0, device 0, function 0:
    Host bridge: PCI device 8086:29c0
    id “”
    Bus 0, device 1, function 0:
    VGA controller: PCI device 1234:1111
    BAR0: 32 bit prefetchable memory at 0x90000000 [0x97ffffff].
    BAR2: 32 bit memory at 0x99207000 [0x99207fff].
    BAR6: 32 bit memory at 0xffffffffffffffff [0x0000fffe].
    id “vga”

    I know there’s no guarantee, as it’s a Hackintosh lol. Hopefully Nvidia/Apple will at some point release drivers for Mojave, though I doubt it’ll be any time soon.

  7. Hi, something is strange with my NVMe: there is no /dev/nvme* in /dev. When I do PCI passthrough with it, the VM doesn’t start: […] vfio error failed to add PCI capability […]. I changed GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on rootdelay=10" like you. Do you have any ideas? Thanks

        1. If you’re using an NVMe slot on your motherboard, check your motherboard manual to see if it needs to be enabled in UEFI settings, or if it shares PCIe lanes with one of your other cards (so that card needs to be moved to a different slot).

          1. Hi, I finally used the “Physical disk to kvm” method (sata0: /dev/disk/by-id/nvme-INTEL[blabla…] and sata1: /dev/disk/by-id/ata-KINGSTON_[blabla…]). But in the VM the disk shows as “type: rotational”, so I searched the Proxmox docs (https://pve.proxmox.com/wiki/Manual:_qm.conf) and two options seemed perfect: “ssd=1” and “discard=on”. The thing is, when I added these parameters, the disk disappeared from the hardware tab and, obviously, there was no boot. What is the best way to define an SSD? (It works by default, but I think it’s not future-proof for my SSD.)

          2. Maybe your syntax was wrong? Here’s what one of my disks that uses those flags looks like:

            scsi0: vms-ssd:vm-141-disk-0,cache=unsafe,discard=on,size=128G,ssd=1

            I think that the “discard” option is probably not meaningful when passing through a disk like that, so I’d leave it off.

            PCIe passthrough will give you much better performance, and TRIM support. When you tried your NVMe passthrough, did you remember to update your vfio config to add the PCIe ID of your NVMe drive?

          3. Yes,
            I searched for the ID with: lspci
            03:00.0 Non-Volatile memory controller: Intel Corporation Device f1a6 (rev 03)
            and with: lspci -n -s 03:00
            03:00.0 0108: 8086:f1a6 (rev 03)
            and added it to the config file here:
            /etc/modprobe.d/vfio.conf
            (your guide uses /etc/modprobe.d/vfio-pci.conf, unlike https://pve.proxmox.com/wiki/Pci_passthrough)
            options vfio-pci ids=10de:13c2,10de:0fbb,10de:128b,10de:0e0f,8086:f1a6,1b73:1100 disable_vga=1
            And finally ran:
            update-grub
            update-initramfs -k all -u
            and rebooted.
            In the VM config file:
            hostpci0: 03:00.0,pcie=1
            (Yes, this is an M.2 slot on the motherboard.)
            For a classic SATA SSD, I think maybe adding ssd=1 can only work with a virtual disk (like your vm-141-disk-0, no)?

          4. Thanks for your help once again 😉 I will keep the working configuration; the performance is good enough for me.
            But that ssd=1 behaviour is strange…
            Anyway, maybe some update will fix it (I’m on Proxmox 5.2-1, and maybe your patch won’t work with a newer version).

          5. Hi, I just (taking the risk) did a dist-upgrade to PVE 5.3-6, and ssd=1 works fine with TRIM enabled.
            So simple…

  8. Hi,
    Is your EVGA GeForce GTX 1060 6GB working now, or are you still on the AMD Radeon R9 280X?
    I tested with a KFA2 GTX 970 EXOC, but the screen is not used entirely, unlike the GT 710, which works great. So I wonder whether it’s a Hackintosh compatibility issue or the “no NVIDIA Mojave driver” problem.
    Anyway, I will probably buy an RX 560 later.

  9. Hi, a little thing you could add to your “Proxmox hardware specs” is the case, because 9 × 3.5″ HDDs and 2 SSDs are not easy to fit in most desktop cases 🙂

    1. Haha, very true! I’m lucky to have a desk that’s tall enough to fit this thing underneath it. I’ve updated the post with the details of the case now.

      1. Nice, thanks for this 😉
        What is the “global configuration” of the “30TB of spinning disks in various ZFS configurations”? With the “Lian Li EX-H35 HDD Hot Swap Module” I imagine you use part of it like a NAS (only 3 of the 9), with the others in some ZFS pools (like a RAID 1)?
        I am thinking of building a custom NAS by passing the disks through to a VM running FreeNAS, for example (bad performance? I think so…).
        But how do you use your 9 disks? (In a VM too? What about performance?)

        1. I have 3 + 4 + 4 TB of non-redundant storage (used for “Linux ISOs” and backups), then 3 x 3TB in RAIDZ1 and 3 x 4TB as another RAIDZ1.

          FreeNAS is what I upgraded from. But I can’t see a reason to use it anymore now that ZoL (ZFS-on-Linux) is so mature on Proxmox.

          I had been running some VMs from the RAIDZ1 arrays, but the performance of spinning disks in this config is just way too slow. I now try to run all of my common VMs from my 1TB SATA SSD using SCSI emulation (my macOS VM has its own 512GB NVMe SSD using PCIe passthrough).

          1. “I had been running some VMs from the RAIDZ1 arrays”: is that a virtual disk file like VMID.raw stored on the RAIDZ1 (the classic use, where a file on the pool is the virtual disk), or is the RAIDZ1 (the entire zpool) a complete virtual disk that the VM sees as a normal disk?
            In fact, this is what I’m trying to do: create a zpool (RAIDZ1 for example) and use the entire zpool as a virtual disk for the VM.
            With that, we can encrypt or change disks and it will be “transparent”, like a normal disk, to the VM.

          2. The disk images are ZFS “zvols”. This gives you all the management abilities of ZFS on the host and the VMs just see a regular disk.
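            For example, you can list them on the host with:

            zfs list -t volume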

          3. I’m thinking about this: passing the NVMe through to the VM. How would you migrate or restore it, or what if you swap it for a new one, like a Samsung 960 Pro 1TB?
            Is the only way the one you explain here? https://www.nicksherlock.com/2017/02/accelerate-io-for-macos-sierra-proxmox-guests-by-passing-through-an-nvme-ssd/
            With something like: dd if=/path/to/the/disk/image/backup of=/dev/disk/id
            Or is there something in Proxmox now to do that? And what is your plan if something happens to your NVMe (if it fails)?
            (I searched for restoring a VM disk onto an assigned disk, but found nothing interesting.)

          4. Yes, I just used dd as described there. If the source disk image is smaller than the SSD you’re copying it to, it becomes pretty straightforward: you just need to grow the partition to fill the disk in Disk Utility after you’ve booted up your new copy from the SSD.

            My VM has backup software installed which sends file backups to a separate drive on the host over ssh (Duplicacy). You could also use dd from the host to make an image of the SSD as backup, before the SSD is detached from the host for the guest’s use.
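            For example, a host-side sketch (the device path is a placeholder; make sure the VM is shut down while you image its disk):

            dd if=/dev/nvme0n1 of=/tank/backups/mojave-nvme.img bs=1M status=progress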

  10. Really awesome guides, thank you! Would you post some pictures of what your machine looks like? I’d like to see how all of your components fit together, especially with 3 graphics cards.
