
Flint Ironstag

macrumors 65816
Dec 1, 2013
1,334
744
Houston, TX USA

jscipione

macrumors 6502
Mar 27, 2017
429
243
I've been using High Sierra in Qemu for a few months ... Now the benefits:

1. Boot screens for my RX580.
2. "Native" support for M.2 NVMe booting.
3. An unmodified version of macOS.
4. I can run Windows, Linux, and macOS all at the same time, passing each one 8 cores and 16 threads, and the Linux host will load-level CPU use with no real performance hit.
5. I can upgrade my system.
6. My system is open source, and I'm only really limited by how good of a coder I am as to what I can make it do.

1. You don't get a Mac boot screen on your RX 580; you get a PC BIOS boot screen and everything that goes with that.
2. The cheesegrater Mac Pro got native NVMe boot support in Mojave.
3. A Mac Pro also runs macOS unmodified.
4. You could run the same virtualization software on a Mac Pro and virtualize Windows and Linux too.
5. You can upgrade the cheesegrater Mac Pro too.
6. HA!

I'd love to take the "Pepsi challenge" with folks with real Apple hardware, as far as what works and benchmarks, though if you have a 12-core 24-thread machine you will likely beat me out, and if you have a 1070/1080/Titan XP, I'm sure you can put up better numbers than me. However, my system costs around $1200 to build, and I can upgrade my CPU, video card, and RAM.

A fair test is your virtualized jalopy vs. the new Mac Pro. Hopefully the boys in Cupertino don't drop the ball this time.
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
A fair test is your virtualized jalopy vs. the new Mac Pro. Hopefully the boys in Cupertino don't drop the ball this time.
The longer they take to announce the vwMP, the more likely it is that they've dropped the ball.

Seriously - the vwMP should have shipped in 2017. When it didn't ship in 2018, alarm bells were ringing.

We're now nearly five months into 2019, and it's still vapour-ware. It's time to order a Z-series and move to Windows or Linux.

Do you honestly believe that Apple will ship a "Mac Pro 2019", followed by a "Mac Pro 2020", followed by a "Mac Pro 2021", followed by a "Mac Pro 2022", followed by ....

Six years between updates just won't fly.
 

startergo

macrumors 603
Sep 20, 2018
5,022
2,283
Here is the VMware shared pass-through graphics compatibility guide:
https://www.vmware.com/resources/compatibility/pdf/vi_sptg_guide.pdf
Deploying Hardware-Accelerated Graphics with VMware Horizon 7:
https://techzone.vmware.com/resource/deploying-hardware-accelerated-graphics-vmware-horizon-7
Installation, Configuration, and Setup
For graphics acceleration, you need to install and configure the following components:

  • ESXi 6.x host
  • Virtual machine
  • Guest operating system
  • Horizon 7 version 7.x desktop pool settings
  • License server
ESXi 6.x Host
Installing the graphics card and configuring the ESXi host vary based on the type of graphics acceleration.

Installing and Configuring the ESXi Host for vSGA or vGPU
  1. Install the graphics card on the ESXi host.
  2. Put the host in maintenance mode.
  3. If you are using an NVIDIA Tesla P card, disable ECC.
  4. If you are using an NVIDIA Tesla M card, set the card to graphics mode (the default is compute) using GpuModeSwitch, which comes as a bootable ISO or a VIB.
    a. Install GpuModeSwitch without an NVIDIA driver installed:
    esxcli software vib install --no-sig-check -v /<path_to_vib>/NVIDIA-GpuModeSwitch-1OEM.xxx.0.0.xxxxxxx.x86_64.vib
    b. Reboot the host.
    c. Change all GPUs to graphics mode:
    gpumodeswitch --gpumode graphics
    d. Remove GpuModeSwitch:
    esxcli software vib remove -n NVIDIA-VMware_ESXi_xxx_GpuModeSwitch_Driver
  5. Install the GPU VIB (see the condensed sketch after this list):
    esxcli software vib install -v /<path_to_vib>/NVIDIA-VMware_ESXi_xxx_Host_Driver_xxx.xx-1OEM.xxx.0.0.xxxxxxx.vib
    If you are using ESXi 6.0, vSGA and vGPU have different VIB files.
    If you are using ESXi 6.5, both vSGA and vGPU use the same VIB file.
  6. Reboot, and take the host out of maintenance mode.
  7. If you are using an NVIDIA card and vSphere 6.5 or later, in the vSphere Web Client, navigate to Host > Configure > Hardware > Graphics > Host Graphics > Edit to open the Edit Host Graphics Settings window.
    5199-Graphic2.png

    a. For vGPU, select Shared Direct. For vSGA, select Shared.
    b. If you are using vGPU with different profiles per GPU, select Group VMs on GPU until full (GPU consolidation). In this case, different profiles are placed on different GPUs, and same profiles are placed on the same GPU until it is full. This method prevents you from running out of free GPUs for different profiles.

    Example:

    The host has a single M60 card, which has two GPUs. Each GPU has 8 GB of memory. Two VMs with 4 GB of frame buffer and four VMs with 2 GB are trying to run. If the first two machines started have the same profile, they are placed on different GPUs. As a result, no GPU is available for the other profile. With Group VMs on GPU until full (GPU consolidation), virtual machines with the same profile start on the same GPU.
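For reference, here is a condensed sketch of steps 2, 5, and 6 above for an NVIDIA host, plus a quick verification, run over SSH. The datastore path and VIB file name are placeholders; substitute your own.

Code:
esxcli system maintenanceMode set --enable true
esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-VMware_ESXi_xxx_Host_Driver_xxx.xx-1OEM.xxx.0.0.xxxxxxx.vib
reboot
# after the host comes back up:
esxcli software vib list | grep -i NVIDIA    # the host driver VIB should be listed
nvidia-smi                                   # the host driver should see the physical GPU(s)
esxcli system maintenanceMode set --enable false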
Installing and Configuring the ESXi Host for MxGPU
  1. Install the graphics card on the ESXi host.
  2. Put the host in maintenance mode.
  3. In the BIOS of the ESXi host, verify that single-root IO virtualization (SR-IOV) is enabled and that one of the following is also enabled.
    1. Intel Virtualization Technology support for Direct I/O (Intel VT-d)
    2. AMD IO memory management unit (IOMMU)
  4. Browse to the location of the AMD FirePro VIB driver and AMD VIB install utility:
    cd /<path_to_vib>
  5. Make the VIB install utility executable, and execute it (see the condensed sketch after this list):
    chmod +x mxgpu-install.sh && sh mxgpu-install.sh -i
  6. In the script, select the option that suits your environment:
    Enter the configuration mode([A]uto/[H]ybrid/[M]anual,default:A)A
  7. For the number of virtual functions, enter the number of users you want to run on a GPU:
    Please enter number of VFs: (default:4): 8
  8. Choose whether you want to keep performance fixed and independent of the number of active VMs:
    Do you want to enable Predictable Performance? ([Y]es/[N]o,default:N)N

    Done
    The configuration needs a reboot to take effect
  9. Reboot and take the host out of maintenance mode.
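A condensed, hypothetical run of steps 4-8 above over SSH (the datastore path is a placeholder for <path_to_vib>, and the prompts are answered as in the examples shown):

Code:
cd /vmfs/volumes/datastore1/amd-mxgpu    # placeholder path
chmod +x mxgpu-install.sh
sh mxgpu-install.sh -i                   # answer: configuration mode A, VFs 8, Predictable Performance N
reboot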
Installing and Configuring the ESXi Host for vDGA
  1. Install the graphics card on the ESXi host.
  2. In the BIOS of the ESXi host, verify that Intel VT-d or AMD IOMMU is enabled.
  3. To enable pass-through for the GPU in the vSphere Web Client, navigate to Host > Configure > Hardware > PCI Devices > Edit.
  4. In the All PCI Devices window, select the GPU, and reboot.
5199-Graphic3.png


Virtual Machine
Configure the general settings for the virtual machine, and then configure it according to the type of graphics acceleration you are using.

General Settings for Virtual Machines
Hardware level – The recommended hardware level is the highest that all hosts support. The minimum is hardware level version 11.

CPU – The number of CPUs required depends on usage and is determined by actual workload. As a starting point, consider these numbers:

Knowledge workers: 2

Power users: 4

Designers: 6

Memory – The amount of memory required depends on usage and is determined by actual workload. As a starting point, consider these amounts:

Knowledge workers: 2 GB

Power users: 4 GB

Designers: 8 GB

Virtual network adapter – The recommended virtual network adapter is VMXNET3.

Virtual storage controller – The recommended virtual disk is LSI Logic SAS, but demanding workloads using local flash-based storage might benefit from using VMware Paravirtual.

Other devices – We recommend removing devices that are not used, such as a COM port, a printer port, DVD, or floppy.
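As a rough illustration only, the "power user" sizing above would correspond to .vmx entries along these lines; the values are examples derived from the table, not settings taken from the guide itself:

Code:
virtualHW.version = "11"
numvcpus = "4"
memSize = "4096"
ethernet0.virtualDev = "vmxnet3"
scsi0.virtualDev = "lsisas1068"

For a demanding workload on local flash, scsi0.virtualDev = "pvscsi" selects the VMware Paravirtual controller instead.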

Now that you have configured the general settings for the virtual machine, configure the settings for the type of graphics acceleration.

Virtual Machine Settings for vSGA
Configure the virtual machine as follows if you are using vSGA.

  1. Enable 3D graphics by selecting Enable 3D Support.
  2. Set the 3D Renderer to Automatic or Hardware.
    Automatic uses hardware acceleration if the host that the virtual machine is starting in has a capable and available hardware GPU. If a hardware GPU is not available, the virtual machine uses software 3D rendering for 3D tasks. The Automatic option allows the virtual machine to be started on or migrated to (via vSphere vMotion) any host (vSphere version 5.0 or later) and to use the best solution available on that host.
    Hardware uses only hardware-accelerated GPUs. If a hardware GPU is not present in a host, the virtual machine does not start, or you cannot perform a live vSphere vMotion migration to that host. Migration is possible as long as the host that the virtual machine is being moved to has a capable and available hardware GPU. The Hardware option guarantees that a virtual machine always uses hardware 3D rendering when a GPU is available, but it limits the virtual machine to using hosts that have hardware GPUs.
  3. Select the amount of video memory (3D Memory).
3D Memory has a default of 96 MB, a minimum of 64 MB, and a maximum of 512 MB.

5199-Graphic4.png


Virtual Machine Settings for vGPU
Configure the virtual machine as follows if you are using vGPU.

  1. On the vSphere console, select your virtual machine, and navigate to Edit Settings.
  2. Add a shared PCI device to the virtual machine, and select the appropriate PCI device to enable GPU pass-through on the virtual machine. In this case, select NVIDIA GRID vGPU.
    5199-Graphic5.png
  3. From the GPU Profile drop-down menu, select the correct profile.
    5199-Graphic6.png

    The last part of the GPU Profile string (4q in this example) indicates the size of the frame buffer (VRAM) in gigabytes and the required GRID license. For the VRAM, 0 means 512 MB, 1 means 1024 MB, and so on. So for this profile, the size is 4 GB. The possible GRID license types are:
    b – GRID Virtual PC virtual GPUs for business desktop computing
    a – GRID Virtual Application virtual GPUs for Remote Desktop Session Hosts
    q – Quadro Virtual Datacenter Workstation (vDWS) for workstation-specific graphics features and accelerations, such as up to four 4K monitors and certified drivers for professional applications

  4. Click Reserve all memory, which reserves all memory when creating the virtual machine.
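If you want to double-check which profiles the host's GPUs actually offer before selecting one in step 3, the NVIDIA host driver exposes this through nvidia-smi on the ESXi shell. A quick hedged example (the profile names depend on your card and driver build):

Code:
nvidia-smi vgpu -s    # vGPU types supported by each physical GPU (e.g. GRID P40-4Q)
nvidia-smi vgpu -c    # vGPU types that can still be created given current placements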
Virtual Machine Settings for MxGPU and vDGA
Configure the virtual machine as follows if you are using MxGPU or vDGA.

  1. For devices with a large BAR size (for example, Tesla P40), you must use vSphere 6.5 and set the following advanced configuration parameters on the VM:
    1. firmware="efi"
    2. pciPassthru.use64bitMMIO="TRUE"
    3. pciPassthru.64bitMMIOSizeGB="64"
  2. Add a PCI device (virtual functions are presented as PCI devices) to the virtual machine, and select the appropriate PCI device to enable GPU pass-through.
    5199-Graphic12.png


    With MxGPU, you can also do this by installing the Radeon Pro Settings for the VMware vSphere Client Plug-in.

    5199-Graphic7.png

    To add a PCI device for multiple machines at once, from ssh:
    a. Browse to the AMD FirePro VIB driver and AMD VIB install utility:
    cd /<path_to_vib>
    b. Edit vms.cfg:
    vi vms.cfg
    Press I, and change the instances of .* to match the names of your VMs that require a GPU. For example, to match *MxGPU* to VM names that include MxGPU, such as WIN10-MxGPU-001 or WIN8.1-MxGPU-002:
    .*MxGPU.*
    To save and quit, press Esc, type :wq, and press Enter.
    c. Assign the virtual functions to the VMs:
    sh mxgpu-install.sh -a assign
    Eligible VMs:
    WIN10-MxGPU-001
    WIN10-MxGPU-002
    WIN8.1-MxGPU-001
    WIN8.1-MxGPU-002
    These VMs will be assigned a VF, is it OK?[Y/N]y
    d. Press Enter.
  3. Select Reserve all guest memory (All locked).
    5199-Graphic8_0.png
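Putting the large-BAR settings and the memory reservation together, a hypothetical .vmx excerpt for such a VM might look like the following. The 16 GB reservation is an example and must match the VM's configured RAM, and the sched.mem.* lines stand in for the "Reserve all guest memory (All locked)" checkbox:

Code:
firmware = "efi"
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"
sched.mem.min = "16384"
sched.mem.pin = "TRUE"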
Guest Operating System
Install and configure the guest operating system.

Windows Guest Operating System
For a Windows guest operating system, install and configure as follows.

  1. Install Windows 7, 10, or 2012 R2, and install all updates.
  2. The following installations are also recommended.
    1. Install common Microsoft runtimes and features.
      Before updating Windows in the VM, install the required versions of Microsoft runtimes that are patched by Windows Update and that can run side by side in the image. For example, install:
      1. .NET Framework (3.5, 4.5, and so on)
      2. Visual C++ Redistributables x86 / x64 (2005 SP1, 2008, 2012, and so on)
    2. Install Microsoft updates.
      Install the updates to Microsoft Windows and other Microsoft products with Windows Update or Windows Server Update Service. You might need to first manually install Windows Update Client for Windows 8.1 and Windows Server 2012 R2: March 2016.
    3. Tune Windows with the VMware OS Optimization Tool using the default options
  3. If you are not using vSGA:
    1. Obtain the GPU drivers from the GPU vendor (with vGPU, this is a matched pair with the VIB file).
    2. Install the GPU device drivers in the guest operating system of the virtual machine. For MxGPU, make sure that the GPU Server option is selected.
  4. Install VMware Tools™ and Horizon Agent (select 3D RDSH feature for Windows 2012 R2 Remote Desktop Session Hosts) in the guest operating system.
  5. Reboot the system.
Red Hat Enterprise Linux Operating System for vGPU and vDGA
For a Red Hat Enterprise Linux guest operating system, install and configure as follows.

  1. Install Red Hat Enterprise Linux 6.9 or 7.4 x64, install all updates, and reboot.
  2. Install gcc, kernel makefiles, and headers:
    sudo yum install gcc-c++ kernel-devel-$(uname -r) kernel-headers-$(uname -r) -y
  3. Disable libvirt:
    sudo systemctl disable libvirtd.service
  4. Disable the open-source nouveau driver.
    a. Open the following configuration file using vi:
    sudo vi /etc/default/grub
    If you are using RHEL 6.x:
    sudo vi /boot/grub/grub.conf
    b. Find the line for GRUB_CMDLINE_LINUX, and add rd.driver.blacklist=nouveau to the line.
    c. Add the line blacklist nouveau anywhere in the following configuration file:
    sudo vi /etc/modprobe.d/blacklist.conf
  5. Generate new grub.cfg and initramfs files:
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    sudo dracut /boot/initramfs-$(uname -r).img $(uname -r) -f
  6. Reboot.
  7. Install the NVIDIA driver, and acknowledge all questions:
    init 3
    chmod +x NVIDIA-Linux-x86_64-xxx.xx-grid.run
    sudo ./NVIDIA-Linux-x86_64-xxx.xx-grid.run
  8. (Optional) Install the CUDA Toolkit (run file method recommended), but do not install the included driver.
  9. Add license server information:
    sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf
    sudo vi /etc/nvidia/gridd.conf
    Set ServerAddress and BackupServerAddress to the DNS names or IPs of your license servers, and FeatureType to 1 for vGPU and 2 for vDGA.
  10. Install the Horizon Agent:
    tar -zxvf VMware-horizonagent-linux-x86_64-7.3.0-6604962.tar.gz
    cd VMware-horizonagent-linux-x86_64-7.3.0-6604962
    sudo ./install_viewagent.sh
    Following is a screenshot of the NVIDIA X Server Settings window showing the results of installation and configuration for a Red Hat Enterprise Linux guest operating system.
    5199-Graphic9.png
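After the final reboot, a quick sanity check inside the guest (assuming the paths used in the steps above):

Code:
nvidia-smi                                    # the vGPU or passed-through GPU should be listed
systemctl status nvidia-gridd                 # the licensing daemon should be active
grep -E 'ServerAddress|FeatureType' /etc/nvidia/gridd.conf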

Horizon 7 version 7.x Pool and Farm Settings
During the creation of a new farm in Horizon 7, configuring a 3D farm is the same as a normal farm. During the creation of a new View desktop pool in Horizon 7, configure the pool as normal until you reach the Desktop Pool Settings section.

  1. In the Add Desktop Pool window, scroll to the Remote Display Protocol section.
  2. For the 3D Renderer option, do one of the following.
  • For vSGA, select either Hardware or Automatic.
  • For vDGA or MxGPU, select Hardware.
  • For vGPU, select NVIDIA GRID VGPU.
5199-Graphic10.png


Automatic uses hardware acceleration if the host that the virtual machine is starting in has a capable and available hardware GPU. If a hardware GPU is not available, the virtual machine uses software 3D rendering for any 3D tasks. The Automatic option allows the virtual machine to be started on, or migrated (via vSphere vMotion) to any host (VMware vSphere version 5.0 or later), and to use the best solution available on that host.

Hardware uses only hardware-accelerated GPUs. If a hardware GPU is not present in a host, the virtual machine will not start, or you cannot perform a live vSphere vMotion migration to that host. Migration is possible as long as the host the virtual machine is being moved to has a capable and available hardware GPU. The Hardware option guarantees that a virtual machine always uses hardware 3D rendering when a GPU is available, but it limits the virtual machine to using hosts that have hardware GPUs.

For Horizon 7 version 7.0 or 7.1, configure the amount of VRAM you want each virtual desktop to have. If you are using vGPU, also select the profile to use. With Horizon 7 version 7.1, you can use vGPU with instant clones, but the profile must match the profile set on the parent VM with the vSphere Web Client.

3D Memory has a default of 96 MB, a minimum of 64 MB, and a maximum of 512 MB.

With Horizon 7 version 7.2 and later, the video memory and vGPU profile are inherited from the VM or VM snapshot.

5199-Graphic11.png


License Server
For vGPU with GRID 2.0, you must install a license server. See the GRID Virtual GPU User Guide included with your NVIDIA driver download.

Resource Monitoring
Various tools are available for monitoring resources when using graphics acceleration.

gpuvm
To better manage the GPU resources available on an ESXi host, examine the current GPU resource allocation. The ESXi command-line query utility gpuvm lists the GPUs installed on an ESXi host and displays the amount of GPU memory that is allocated to each virtual machine on that host.

gpuvm

Xserver unix:0, GPU maximum memory 2076672KB
pid 118561, VM "Test-VM-001", reserved 131072KB of GPU memory
pid 664081, VM "Test-VM-002", reserved 261120KB of GPU memory
GPU memory left 1684480KB

nvidia-smi
To get a summary of the vGPUs currently running on each physical GPU in the system, run nvidia-smi without arguments.



Thu Oct 5 09:28:05 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.73 Driver Version: 384.73 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P40 On | 00000000:84:00.0 Off | Off |
| N/A 38C P0 60W / 250W | 12305MiB / 24575MiB | 0% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 135930 M+C+G manual 4084MiB |
| 0 223606 M+C+G centos3D004 4084MiB |
| 0 223804 M+C+G centos3D003 4084MiB |
+-----------------------------------------------------------------------------+



To monitor vGPU engine usage across multiple vGPUs, run nvidia-smi vgpu with the -u or --utilization option:

nvidia-smi vgpu -u

The following usage statistics are reported once every second for each vGPU.

#gpu   vgpu    sm   mem   enc   dec
#Idx   Id      %    %     %     %
 0     11924   6    3     0     0
 1     11903   8    3     0     0
 2     11908   10   4     0     0

Key:

gpu – GPU ID

vgpu – vGPU ID

sm – Compute

mem – Memory controller bandwidth

enc – Video encoder

dec – Video decoder

Troubleshooting
Try these troubleshooting techniques to address general problems or a specific symptom.

General Troubleshooting for Graphics Acceleration
If an issue arises with vSGA, vGPU, or vDGA, or if Xorg fails to start, try one or more of the following solutions in any order.

Verify That the GPU Driver Loads
To verify that the GPU VIB is installed, run one of the following commands.

  • For AMD-based GPUs:
    #esxcli software vib list | grep fglrx
  • For NVIDIA-based GPUs:
    #esxcli software vib list | grep NVIDIA
If the VIB is installed correctly, the output resembles the following:

NVIDIA-VMware 304.59-1-OEM.510.0.0.799733 NVIDIA
VMwareAccepted 2012-11-14

To verify that the GPU driver loads, run the following command.

  • For AMD-based GPUs:
    #esxcli system module load -m fglrx
  • For NVIDIA-based GPUs:
    #esxcli system module load -m nvidia
If the driver is already loaded correctly, the manual load attempt returns a "Busy" message, resembling the following:

Unable to load module /usr/lib/vmware/vmkmod/nvidia: Busy

If the GPU driver does not load, check the vmkernel.log:

# vi /var/log/vmkernel.log

On AMD hardware, search for FGLRX. On NVIDIA hardware, search for NVRM. Often, an issue with the GPU is identified in the vmkernel.log.

Verify That Display Devices Are Present in the Host
To make sure that the graphics adapter is installed correctly, run the following command on the ESXi host:

# esxcli hardware pci list -c 0x0300 -m 0xff

The output should resemble the following example, even if some of the particulars differ:

000:001:00.0
Address: 000:001:00.0
Segment: 0x0000
Bus: 0x01
Slot: 0x00
Function: 0x00
VMkernel Name:
Vendor Name: NVIDIA Corporation
Device Name: NVIDIA Quadro 6000
Configured Owner: Unknown
Current Owner: VMkernel
Vendor ID: 0x10de
Device ID: 0x0df8
SubVendor ID: 0x103c
SubDevice ID: 0x0835
Device Class: 0x0300
Device Class Name: VGA compatible controller
Programming Interface: 0x00
Revision ID: 0xa1
Interrupt Line: 0x0b
IRQ: 11
Interrupt Vector: 0x78
PCI Pin: 0x69

Check the PCI Bus Slot Order
If you installed a second, lower-end GPU in the server, the ESXi console session chooses the higher-end card. If this occurs, swap the two GPUs between PCIe slots, or change the primary GPU setting in the server BIOS, so that the low-end card intended for the console comes first.

Check Xorg Logs
If the correct devices are present in the previous troubleshooting methods, view the Xorg log file to see if there is an obvious issue:

# vi /var/log/Xorg.log

Troubleshooting Specific Issues in Graphics Acceleration
This section describes solutions to specific issues that could arise in graphics acceleration deployments.

Problem:

sched.mem.min error when starting the virtual machine.

Solution:

Check sched.mem.min.

If you get a vSphere error about sched.mem.min, add the following parameter to the VMX file of the virtual machine:

sched.mem.min = "4096"

Note: The number in quotes, 4096 in the previous example, must match the amount of configured virtual machine memory. The example is for a virtual machine with 4 GB of RAM.


Problem:

Only able to use one display in Windows 10 with vGPU -0B or -0Q profiles.

Solution:

Use a profile that supports more than one virtual display head and has at least 1 GB of frame buffer.

To reduce the possibility of memory exhaustion, vGPU profiles with 512 MB or less of frame buffer support only one virtual display head on a Windows 10 guest OS.


Problem:

Unable to use NVENC with vGPU -0B or -0Q profiles.

Solution:

If you require NVENC to be enabled, use a profile that has at least 1 GB of frame buffer.

Using the frame buffer for the NVIDIA hardware-based H.264 / HEVC video encoder (NVENC) might cause memory exhaustion with vGPU profiles that have 512 MB or less of frame buffer. To reduce the possibility of memory exhaustion, NVENC is disabled on profiles that have 512 MB or less of frame buffer.


Problem:

Unable to load vGPU driver in the guest operating system.

Depending on the versions of drivers in use, the vSphere VM’s log file reports one of the following errors.

  • A version mismatch between guest and host drivers:
vthread-10| E105: vmiop_log: Guest VGX version(2.0) and Host VGX version(2.1) do not match

  • A signature mismatch:
vthread-10| E105: vmiop_log: VGPU message signature mismatch

Solution:

Install, in the VM, the latest NVIDIA vGPU release driver that matches the VIB installed on ESXi.


Problem:

Tesla-based vGPU fails to start.

Solution:

Disable error-correcting code (ECC) on all GPUs.

Tesla GPUs support ECC, but the NVIDIA GRID vGPU does not support ECC memory. If ECC memory is enabled, the NVIDIA GRID vGPU fails to start. The following error is logged in the VMware vSphere VM’s log file:

vthread10|E105: Initialization: VGX not supported with ECC Enabled.

  1. Use nvidia-smi to list the status of all GPUs.
  2. Check whether ECC is enabled on the GPUs.
  3. Change the ECC status to Off on each GPU for which ECC is enabled by executing the following command:
    nvidia-smi -i id -e 0 (id is the index of the GPU as reported by nvidia-smi)
  4. Reboot the host.
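A compact way to run steps 1-3 above from the host (the GPU indices here are illustrative):

Code:
nvidia-smi --query-gpu=index,ecc.mode.current --format=csv   # list ECC state per GPU
nvidia-smi -i 0 -e 0                                         # disable ECC on GPU 0
nvidia-smi -i 1 -e 0                                         # repeat for each GPU with ECC enabled
reboot
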
Problem:

Single vGPU benchmark scores are lower than the pass-through GPU.

Solution:

Disable the Frame Rate Limiter (FRL) by adding the configuration parameter pciPassthru0.cfg.frame_rate_limiter with a value of 0 in the VM’s advanced configuration options.

FRL is enabled on all vGPUs to ensure balanced performance across multiple vGPUs that are resident on the same physical GPU. FRL is designed to provide a good interactive remote graphics experience, but it can reduce scores in benchmarks that depend on measuring frame-rendering rates as compared to the same benchmarks running on a pass-through GPU.
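In .vmx terms this is a single line; the device index 0 is an assumption, so match it to your VM's vGPU entry, and power the VM off before editing:

Code:
pciPassthru0.cfg.frame_rate_limiter = "0"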


Problem:

VMs configured with large memory fail to initialize the vGPU when booted.

When starting multiple VMs configured with large amounts of RAM (typically more than 32 GB per VM), a VM might fail to initialize the vGPU. The NVIDIA GRID GPU is present in Windows Device Manager but displays a warning sign and the following device status:

Windows has stopped this device because it has reported problems. (Code 43)

The vSphere VM’s log file contains these error messages:

vthread10|E105: NVOS status 0x29
vthread10|E105: Assertion Failed at 0x7620fd4b:179
vthread10|E105: 8 frames returned by backtrace
...
vthread10|E105: VGPU message 12 failed, result code: 0x29
...
vthread10|E105: NVOS status 0x8
vthread10|E105: Assertion Failed at 0x7620c8df:280
vthread10|E105: 8 frames returned by backtrace
...
vthread10|E105: VGPU message 26 failed, result code: 0x8

Solution:

A vGPU reserves a portion of the VM’s frame buffer for use in GPU mapping of VM system memory. The default reservation is sufficient to support up to 32 GB of system memory. You can accommodate up to 64 GB by adding this configuration parameter:

pciPassthru0.cfg.enable_large_sys_mem

with a value of 1 in the VM’s advanced configuration options.

Summary
VMware Horizon 7 offers three technologies for hardware-accelerated graphics, each with its own advantages.

  • Virtual Shared Pass-Through Graphics Acceleration (MxGPU or vGPU) – Best match for nearly all use cases.
  • Virtual Shared Graphics Acceleration (vSGA) – For light graphical workloads that use only DirectX9 or OpenGL 2.1 and require the maximum level of consolidation.
  • Virtual Dedicated Graphics Acceleration (vDGA) – For heavy graphical workloads that require the maximum level of performance.


If you look at the list of supported cards, there are the GRID K1 and GRID K2 for NVIDIA. You can convert consumer NVIDIA cards to K1 and K2:
"It normally requires a hard-wire mod; then, after flashing the GPU BIOS, it will act like a GRID/Quadro card, but the functionality is limited. For example, even if you mod it as a K1/K2, you can only use pass-through to a single VM. It's good for testing purposes, but I wouldn't use that for production.

Current-gen K1/K2 prices have been going down since the announcement of the new generation. Quadro cards, on the other hand, can be found at reasonable prices. A K4000 can be found for around $450. You will be better off using pass-through with that. If more user sessions are required, I would do RDS/XenApp with K4000 pass-through. You can easily get 10-20 users running CAD that way with reasonable performance."

For example, here is a conversion of a GT 640 to a GRID K1:
http://www.eevblog.com/forum/chat/h...HPSESSID=r3ift8acta09i2bg0iildomg85#msg213332
More on vDGA:
https://www.vmware.com/content/dam/...phics-acceleration-deployment-white-paper.pdf

http://images.nvidia.com/content/grid/vmware/horizon-with-grid-vgpu-faq.pdf

https://images.nvidia.com/content/pdf/vgpu/guides/vgpu-deployment-guide-horizon-on-vsphere-final.pdf
 

startergo

macrumors 603
Sep 20, 2018
5,022
2,283
If you decide to try virtualization and video card PCI pass-through, here is the NVIDIA Video Encode and Decode GPU Support Matrix:
https://developer.nvidia.com/video-encode-decode-gpu-support-matrix

Citrix hypervisor (XenServer) pass-through list:
http://hcl.xenserver.org/gpus/

Quadro P2000 VMware compatibility guide:

https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vdga&details=1&gpuDeviceModels=NVIDIA Quadro P2000&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc

"I found stable workaround to make Quadro P2000 work in passthrough (nvidia-smi reports Virtualization mode : Pass-Through). Some relevant sample outputs for nvenc/nvdec (with nvenc application running):

# nvidia-smi
Tue May 16 19:53:28 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 381.22 Driver Version: 381.22 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro P2000 Off | 0000:00:05.0 Off | N/A |
| 64% 67C P0 27W / 75W | 223MiB / 5053MiB | 4% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 767 C /root/thinvdi_encoder 213MiB |
+-----------------------------------------------------------------------------+
# nvidia-smi -ac 3504,1721
# nvidia-smi -q
Timestamp : Tue May 16 19:51:49 2017
Driver Version : 381.22
Attached GPUs : 1
GPU 0000:00:05.0
Product Name : Quadro P2000
Product Brand : Quadro
Display Mode : Disabled
Display Active : Disabled
Persistence Mode : Disabled
Accounting Mode : Disabled
Accounting Mode Buffer Size : 1920
Driver Model
Current : N/A
Pending : N/A
Serial Number : ....
GPU UUID : GPU-....
Minor Number : 0
VBIOS Version : 86.06.3F.00.2E
MultiGPU Board : No
Board ID : 0x5
GPU Part Number : 900-5G410-0300-000
Inforom Version
Image Version : G410.0502.00.02
OEM Object : 1.1
ECC Object : N/A
Power Management Object : N/A
GPU Operation Mode
Current : N/A
Pending : N/A
GPU Virtualization Mode
Virtualization mode : Pass-Through
PCI
Bus : 0x00
Device : 0x05
Domain : 0x0000
Device Id : 0x1C3010DE
Bus Id : 0000:00:05.0
Sub System Id : 0x11B3103C
GPU Link Info
PCIe Generation
Max : 3
Current : 3
Link Width
Max : 16x
Current : 16x
Bridge Chip
Type : N/A
Firmware : N/A
Replays since reset : 0
Tx Throughput : 32000 KB/s
Rx Throughput : 658000 KB/s
Fan Speed : 64 %
Performance State : P0
Clocks Throttle Reasons
Idle : Not Active
Applications Clocks Setting : Not Active
SW Power Cap : Not Active
HW Slowdown : Not Active
Sync Boost : Not Active
Unknown : Not Active
FB Memory Usage
Total : 5053 MiB
Used : 223 MiB
Free : 4830 MiB
BAR1 Memory Usage
Total : 256 MiB
Used : 2 MiB
Free : 254 MiB
Compute Mode : Default
Utilization
Gpu : 4 %
Memory : 1 %
Encoder : 17 %
Decoder : 0 %
Ecc Mode
Current : N/A
Pending : N/A
ECC Errors
Volatile
Single Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
Total : N/A
Double Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
Total : N/A
Aggregate
Single Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
Total : N/A
Double Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
Total : N/A
Retired Pages
Single Bit ECC : N/A
Double Bit ECC : N/A
Pending : N/A
Temperature
GPU Current Temp : 65 C
GPU Shutdown Temp : 104 C
GPU Slowdown Temp : 101 C
Power Readings
Power Management : Supported
Power Draw : 27.33 W
Power Limit : 75.00 W
Default Power Limit : 75.00 W
Enforced Power Limit : 75.00 W
Min Power Limit : 75.00 W
Max Power Limit : 75.00 W
Clocks
Graphics : 1721 MHz
SM : 1721 MHz
Memory : 3499 MHz
Video : 1544 MHz
Applications Clocks
Graphics : 1721 MHz
Memory : 3504 MHz
Default Applications Clocks
Graphics : 1075 MHz
Memory : 3504 MHz
Max Clocks
Graphics : 1721 MHz
SM : 1721 MHz
Memory : 3504 MHz
Video : 1556 MHz
Clock Policy
Auto Boost : N/A
Auto Boost Default : N/A
Processes
Process ID : 767
Type : C
Name : /root/thinvdi_encoder
Used GPU Memory : 213 MiB
# nvidia-smi pmon
# gpu pid type sm mem enc dec command
# Idx # C/G % % % % name
0 767 C 3 0 16 0 thinvdi_encoder
0 767 C 3 0 16 0 thinvdi_encoder
# nvidia-smi dmon
# gpu pwr temp sm mem enc dec mclk pclk
# Idx W C % % % % MHz MHz
0 27 68 4 1 17 0 3499 1721
0 27 68 4 1 17 0 3499 1721


nvidia-quadro-p2000-nvenc-performance.png
"

https://devtalk.nvidia.com/default/...quadro-p4000-p2000-p1000-p600-p400/?offset=11

How to enable a VMware Virtual Machine for GPU Pass-through:
Environment:
Configuration Steps:
  1. Enable the Host for GPU Passthrough:
    1. Check that VT-d or AMD IOMMU is enabled on the host by running the following command, either via SSH or on the console (replace <module_name> with the name of the module: vtddmar for Intel, AMDiommu for AMD):
    # esxcfg-module -l | grep <module_name>
    If the appropriate module is not present, you might have to enable it in the BIOS, or your hardware might not be capable of providing PCI passthrough.
    2. Using the vSphere Client, connect to VMware vCenter™ and select the host with the GPU card installed.
    3. Select the Configuration tab for the host, and click Advanced Settings (Hardware in the top left section). If the host has devices enabled for passthrough, these devices will be listed here.
    4. To configure passthrough for the GPU, click Configure Passthrough.
    5. In the Mark Devices for Passthrough window, check the box that corresponds to the GPU adapter installed in the host.
    6. Click OK. The GPU should now be listed in the window on the Advanced Settings page.
    7. Note: If the device icon shows an orange arrow, the host needs to be rebooted before passthrough will function. If the device icon is green, passthrough is enabled.
  2. Enable the Virtual Machine for GPU Passthrough
    1. Update the VM to Hardware Version 9
    2. For vDGA to function, all the virtual machine configured memory must be reserved. If each virtual machine has 2GB of memory allocated, you should reserve all 2GB. To do this, select the Reserve all guest memory option when you view the Memory option under the Resources tab in a virtual machine’s settings window.
    3. For virtual machines that have more than 2GB of configured memory, add the following parameter to the .vmx file of the virtual machine (you can add this at the end of the file): pciHole.start = "2048"
    4. Using the vSphere Client, connect directly to the ESXi host with the GPU card installed, or select the host in vCenter.
    5. Right-click the virtual machine and select Edit Settings
    6. Add a new device by selecting PCI Device from the list, and click Next.
    7. Select the GPU as the passthrough device to connect to the virtual machine from the drop-down list, and click Next.
    8. Click Finish
    9. Download and install the drivers according to the Virtual Machine's OS.
    10. Reboot the virtual machine.
https://www.dell.com/support/articl...-virtual-machine-for-gpu-pass-through?lang=en
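For reference, a hypothetical .vmx fragment for a passthrough VM with 8GB of configured RAM, combining steps 2.2 and 2.3 above (the 8192 value must match the VM's configured memory, and the sched.mem.* lines correspond to the "Reserve all guest memory" option):

Code:
pciHole.start = "2048"
sched.mem.min = "8192"
sched.mem.pin = "TRUE"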



Configure VMware ESXi 6.5U1 for VMDirectPath pass-through of any NVMe device like Intel Optane:

Using GPUs with Virtual Machines on vSphere – Part 2: VMDirectPath I/O
https://blogs.vmware.com/apps/2018/...hines-on-vsphere-part-2-vmdirectpath-i-o.html

Setting up GTX-1070 Passthrough with ESXi
https://medium.com/@davidramsay/setting-up-gtx-1070-passthrough-with-esxi-2c4f3519d39e
 

monkeybagel

macrumors 65816
Jul 24, 2011
1,142
61
United States
I asked one of the MacRumors Forum admins about setting up a subforum specifically for these sorts of virtualisation questions, perhaps with posts preconfigured to declare / sort based on host / client systems as a way to organise it. Would anyone else be interested in that? There really isn't a good resource for "I've never done this and don't know where to start", the way the Mac Pro community here is for people looking at getting into / maintaining a cMP.

I'm also interested in this sort of thing, and frankly, I just don't know if it can be done: a workstation, with something running the hardware, that can pass through a processor and an Nvidia GPU to a Windows system for VR work / gaming, and a processor and an AMD GPU (if necessary) to a macOS system for the rest of my work (perhaps even to multiple versions of macOS for old software / systems, etc.). I really don't want to run two separate computers, unless I build a PC that's effectively a smallish VR appliance.

Has anyone else looked at / run the Linux distribution elementaryOS? It really looks a lot calmer / less blingy than macOS now.


I am always interested in ways to extend the life of the cheese grater, although I have had no reason whatsoever to upgrade, as it still meets my needs as well as it did when I purchased it new. I do not do 4K rendering or the like as many do on here; I mainly manage Windows Server 2012/2016/2019 servers, ESXi and VMware vSphere networks with Horizon, and I have pure SSD storage for my working disks and 128GB of RAM. I even reinstalled the 5870 video card after the disappointing news that the 980 Ti lacked Mojave support (anyone interested in purchasing it? It is flashed), and it works fine with 10.13 and two 27" Apple LED Cinema Displays. I see no need to upgrade at this point in time. I am so used to the UNIX reliability and (to our knowledge) less data going back to Apple, and given the inevitable day Microsoft finally unveils its WaaS plan and starts pushing subscription junk down our throats, I would prefer to stay with OS X unless Apple makes a similar move. The *aaS thing is out of control IMO.
 

startergo

macrumors 603
Sep 20, 2018
5,022
2,283
It looks like even consumer NVIDIA cards can be passed through ESXi so long as the card is not detected as running in a virtual machine. So adding:
Code:
hypervisor.cpuid.v0 = FALSE
to the bottom of the .vmx file should hide the virtualization from the guest, and the regular Windows drivers should load without error code 43.
https://ianmcdowell.net/blog/esxi-nvidia/
Somebody with an ESXi install may want to try that.
As for the macOS virtual machine, the RX 580 should pass through to it without issues. So if you have an NVIDIA card and an RX 580 installed, first pass the NVIDIA card through ESXi, then attach it to a Win 10 VM while your monitor is connected to the RX 580 output (otherwise the virtual machine may "steal" your GPU and you won't be able to see anything). If the Windows VM does not work, you may try Linux.

And another trial-and-error guide for the passthrough:
https://communities.vmware.com/docs/DOC-25479
Another successful passthrough:
https://www.reddit.com/r/homelab/comments/61plsd/the_multiheaded_gaming_r720_project/
 

startergo

macrumors 603
Sep 20, 2018
5,022
2,283
I have a dilemma here. As you may be aware, passing through USB ports requires a PCI USB card; otherwise you have to use a VNC-type connection to the hypervisor. I have picked up 2 cards. This PCIe USB 3.0 card has 4 individual chips, so it shows up as 4 PCI devices that can be passed through to 4 VMs. But it is x4 PCIe 2.0 and it supports 4x 5 Gbps simultaneous transfers; it is essentially a USB 3.1 Gen 1 card.
And there is this one, which is USB 3.1 Gen 2, PCIe 3.0 x4, and it supports 4x 10 Gbps simultaneous transfers (my assumption is that this one also has 4 independent controllers).
The dilemma is that to get those speeds I need to place it in the x16 slot, which is OK. My question is:
If I put it in the x4 slot, will it at least perform the same as the HBA RocketU 1144D? I don't mind the higher price if it gives the option to use it at full 10 Gbps speed in the x16 slot or at 5 Gbps in the x4 slot.
 

jscipione

macrumors 6502
Mar 27, 2017
429
243
I have a dilemma here. As you may be aware, passing through USB ports requires a PCI USB card; otherwise you have to use a VNC-type connection to the hypervisor. I have picked up 2 cards. This PCIe USB 3.0 card has 4 individual chips, so it shows up as 4 PCI devices that can be passed through to 4 VMs. But it is x4 PCIe 2.0 and it supports 4x 5 Gbps simultaneous transfers; it is essentially a USB 3.1 Gen 1 card.
And there is this one, which is USB 3.1 Gen 2, PCIe 3.0 x4, and it supports 4x 10 Gbps simultaneous transfers (my assumption is that this one also has 4 independent controllers).
The dilemma is that to get those speeds I need to place it in the x16 slot, which is OK. My question is:
If I put it in the x4 slot, will it at least perform the same as the HBA RocketU 1144D? I don't mind the higher price if it gives the option to use it at full 10 Gbps speed in the x16 slot or at 5 Gbps in the x4 slot.
The 1144D splits a PCIe 2.0 x4 connection into four PCIe 2.0 x1 connections. The 1344A splits a PCIe 3.0 x4 connection into two PCIe 3.0 x2 connections. Both cards use only up to an x4 connection and will not benefit further from an x16 connection. The first card will produce 4x 5 Gbps, the latter 2x 10 Gbps.
 

startergo

macrumors 603
Sep 20, 2018
5,022
2,283
The 1144D splits a PCIe 2.0 x4 connection into four PCIe 2.0 x1 connections. The 1344A splits a PCIe 3.0 x4 connection into two PCIe 3.0 x2 connections. Both cards use only up to an x4 connection and will not benefit further from an x16 connection. The first card will produce 4x 5 Gbps, the latter 2x 10 Gbps.
I thought that x4 bandwidth on PCIe gen 3 is 3.94 GB/s and x8 on PCIe gen 2 is 4.0 GB/s? So to get the full gen 3 x4 speed on a gen 2 slot, we would need a bifurcation controller in an x16 slot?
https://en.wikipedia.org/wiki/PCI_Express
 

jscipione

macrumors 6502
Mar 27, 2017
429
243
I thought that x4 bandwidth on PCIe gen 3 is 3.94 GB/s and x8 on PCIe gen 2 is 4.0 GB/s? So to get the full gen 3 x4 speed on a gen 2 slot, we would need a bifurcation controller in an x16 slot?
https://en.wikipedia.org/wiki/PCI_Express
You are limited by PCIe 2.0 x1 and PCIe 3.0 x2. An x16 connection will not increase those limits. That is approximately 4x 4000 Mbps for the first card and 2x 15,760 Mbps for the second. So the first card won't be able to saturate the USB 3.0 5 Gbps limit, while the second should be able to handle the full USB 3.1 Gen 2 10 Gbps connection with room to spare, at least in a PCIe 3.0 x4 slot (PCIe 2.0 x2, like in the Mac Pro, will cap at ~8000 Mbps).
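Rough numbers behind that, assuming the usual encoding overheads (8b/10b for PCIe 2.0, 128b/130b for PCIe 3.0):

Code:
# PCIe 2.0: 5 GT/s x 8/10    ~= 4 Gbps per lane    -> x1 ~= 4 Gbps,     x2 ~= 8 Gbps
# PCIe 3.0: 8 GT/s x 128/130 ~= 7.88 Gbps per lane -> x2 ~= 15.75 Gbps, x4 ~= 31.5 Gbps
# USB 3.0 (Gen 1) = 5 Gbps per port, USB 3.1 Gen 2 = 10 Gbps per port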
 

startergo

macrumors 603
Sep 20, 2018
5,022
2,283
There is a native hypervisor framework, similar to KVM but for macOS, for accessing the hardware directly:
https://github.com/hibari0614/qemu-hvf
"Qemu fork which supports macOS' native hypervisor with Hypervisor.framework (HVF) API. In this fork, the HVF patches have been added into the latest Qemu stable version, current is v2.10.1, and can be found in the branch which named develop-v2.10.2-hvf. The HVF patches are coming from "2017 Google Summer of Code" project and can be found in http…"
On a side note, Microsoft released WSL 2 with a real Linux kernel. Now we can run Docker containers inside WSL 2.
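If you're on a Windows Insider build that includes WSL 2, converting an existing distro to the new backend is just (the distro name is an example):

Code:
wsl --set-version Ubuntu 2    # move the distro onto the WSL 2 real-kernel backend
wsl --list --verbose          # the VERSION column should now show 2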
 

LK LAW

macrumors regular
May 30, 2016
103
43
I've been using High Sierra in QEMU for a few months, and it runs really well, but there are a few drawbacks.

1. I haven't figured out how to pass a PCI FireWire or xHCI USB card to my virtual MP yet in a way that works correctly.

2. I only have an 8-core 16-thread "Penryn" CPU, as I can't get my EFI firmware to boot when I select a CPU that would be "supported" by About This Mac.

3. Can't figure out how to pass through my Apple Bluetooth.

Now the benefits:

1. Boot screens for my RX580.
2. "Native" support for M.2 NVMe booting.
3. An unmodified version of macOS.
4. I can run Windows, Linux, and macOS all at the same time, passing each one 8 cores and 16 threads, and the Linux host will load-level CPU use with no real performance hit.
5. I can upgrade my system.
6. My system is open source, and I'm only really limited by how good of a coder I am as to what I can make it do.

I pass through an OEM Apple 802.11ac Wi-Fi card, an M.2 960 EVO, and an RX580, as well as any AHCI drives and USB (sans Apple Bluetooth).

I'd love to take the "Pepsi challenge" with folks with real Apple hardware, as far as what works and benchmarks, though if you have a 12-core 24-thread machine you will likely beat me out, and if you have a 1070/1080/Titan XP, I'm sure you can put up better numbers than me. However, my system costs around $1200 to build, and I can upgrade my CPU, video card, and RAM.
Tried PCIe passthrough with Proxmox yet?
 

Flint Ironstag

macrumors 65816
Dec 1, 2013
1,334
744
Houston, TX USA
Well, over a year later and still haven't found perfection. I gave Morgonaut a spin, and I have a usable system that I'm typing this from. Still on an 820, dual socket 2GHz, 32GB, SSDs, Quadro K4000 & Vega 56, HP Thunderbolt 2 card.

What doesn't work (no fault of hers; I need to schedule some more time):
- sound
- Vega 56 won't pass through
- Logitech mouse, webcam & Razer Orbweaver won't pass through over USB (probably user error)
- TB2 card not installed yet

I'm pretty impressed so far, and once the above issues are sorted, I think this will be my box until we see what the rumored Mac tower looks like.

Geekbench attached. If this works, the plan is to bump the CPUs to ~3GHz 10 cores, and stick some 6000 series GPUs in it.

All this makes me both appreciate and chafe at real Macs.
 

Attachments

  • Mac Pro (Late 2019) vs iMacPro1,1 - Geekbench Browser.pdf (150.8 KB)
  • Screen Shot 2020-12-03 at 1.13.01 PM.png (78.6 KB)

h9826790

macrumors P6
Apr 3, 2014
16,656
8,587
Hong Kong
Just wondering if anyone has tried to run macOS 11.3 (or beyond) on their cMP 5,1 via QEMU.

This won't be a perfect answer due to the complexity, the VM performance hit, the need for an extra GPU to provide a display for the host OS, etc.

However, this may be a possible workaround to avoid the race condition on the 5,1. Maybe we can explore this direction to see if we can extend the 5,1's life for a bit more.
 