NVIDIA vGPU software v7.1
Hi All
It's time to plan updating your NVIDIA Tesla M6, M10, M60, P4, P6, P40, P100, V100, and T4 boards with NVIDIA vGPU software 7.1.
NVIDIA has released new drivers for NVIDIA vGPU 7.1 in December 2018.
Important:
- Citrix XenServer 7.0, 7.1, and 7.5 are not supported with the NVIDIA Tesla T4
- NVIDIA vGPU 7.1 is supported with VMware Horizon 7.7, 7.6, 7.5, 7.4, 7.3, 7.2, 7.1, 7.0, and 6.2
- NVIDIA vGPU 7.1 is only supported with Citrix Virtual Apps & Desktops (aka XenDesktop) 7.15, 7.18, 1808, and 1811 in HDX 3D Pro mode
- VMware vSphere ESXi 5.5 is no longer supported with NVIDIA vGPU 7.1
- If you are a customer using XenServer 7.2, 7.3, or 7.4, these releases are no longer supported with NVIDIA vGPU 7.1, and you should plan to upgrade to XenServer 7.5 or 7.6
- Customers using Citrix XenServer 7.2 or 7.3 are supported with NVIDIA vGPU 6.0 & 6.1
- Customers using Citrix XenServer 7.4 are supported with NVIDIA vGPU 6.0, 6.1 & 6.2
- Customers using Citrix XenServer 7.0, 7.1, 7.5, or 7.6 are supported with NVIDIA vGPU 7.1
- Customers using Citrix XenServer 7.0 or 7.1 are not supported with XenMotion with vGPU
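For scripted environment checks, the XenServer support matrix above can be sketched as a small lookup. The function name is purely illustrative, not an NVIDIA tool:

```shell
#!/bin/sh
# Lookup based on the support matrix above: which NVIDIA vGPU release(s)
# support a given Citrix XenServer version. Function name is illustrative.
vgpu_for_xenserver() {
  case "$1" in
    7.0|7.1|7.5|7.6) echo "7.1" ;;          # supported with vGPU 7.1
    7.2|7.3)         echo "6.0 6.1" ;;      # stay on vGPU 6.0/6.1
    7.4)             echo "6.0 6.1 6.2" ;;  # stay on vGPU 6.0/6.1/6.2
    *)               echo "unsupported" ;;
  esac
}

vgpu_for_xenserver 7.4   # prints: 6.0 6.1 6.2
```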
This release includes the following software:
- NVIDIA vGPU Manager version 410.68 for Citrix XenServer, VMware vSphere, RHEL KVM, and Nutanix AHV
- NVIDIA Windows driver version 411.81
- NVIDIA Linux driver version 410.71
New in this Release:
- Support for Nutanix AHV 5.10
- Support for NVIDIA Tesla T4 GPU
- Support for VMware Horizon 7.7
- Support for NVIDIA Quadro Virtual Workstation on Microsoft Azure
Other important notes about NVIDIA vGPU 7.1:
- Citrix XenServer 7.2, 7.3, 7.4 are no longer supported.
- VMware vSphere ESXi 5.5 is no longer supported with NVIDIA vGPU 7.1
- Nutanix AHV 5.6 is no longer supported.
Supported NVIDIA GPUs with vGPU 7.1
- Tesla M6
- Tesla M10
- Tesla M60
- Tesla P4
- Tesla P6
- Tesla P40
- Tesla P100 PCIe 16 GB
- Tesla P100 SXM2 16 GB
- Tesla P100 PCIe 12GB
- Tesla V100 SXM2
- Tesla V100 SXM2 32GB
- Tesla V100 PCIe
- Tesla V100 PCIe 32GB
- Tesla V100 FHHL
- Tesla T4
Supported Hypervisors with NVIDIA vGPU 7.1
- Citrix XenServer
Citrix XenServer 7.0, 7.1, 7.5, 7.6 (supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
- VMware vSphere
VMware vSphere 6.7 (supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
VMware vSphere 6.5 (supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
VMware vSphere 6.0 Update 3, Update 2, Update 1, RTM b2494585 (supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
- Microsoft Hyper-V 2016
Microsoft Windows Server 2016 with Hyper-V role (supported with Tesla M6, M10, M60, P4, P6, P40, P100, T4)
Note: Microsoft Windows Server with the Hyper-V role supports GPU pass-through over the Microsoft Virtual PCI bus. This bus is supported through paravirtualized drivers.
- Red Hat Enterprise Linux with KVM
Red Hat Enterprise Linux with KVM 7.5, 7.6 (vGPU supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
Red Hat Enterprise Linux with KVM 7.2, 7.3, 7.4 (pass-through only, supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
Red Hat Enterprise Linux with KVM 7.0, 7.1 (supported with Tesla M6, M10, M60)
Red Hat Virtualization (RHV) 4.1, 4.2 (vGPU supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
- Nutanix AHV
Nutanix AOS Hypervisor (AHV) 5.5, 5.6, 5.8, 5.9 (supported with Tesla M10, M60, P40)
Supported Hypervisors with migration of vGPU across hosts
- XenMotion with vGPU is supported with Citrix XenServer 7.5 and 7.6
- vMotion with vGPU is supported with VMware vSphere 6.7 U1
Supported VMware vSphere Hypervisor (ESXi) releases:
- Release 6.7 U1 and compatible updates support vMotion with vGPU and suspend-resume with vGPU.
- Release 6.7 supports only suspend-resume with vGPU.
- Releases earlier than 6.7 do not support any form of vGPU migration.
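The ESXi migration matrix above can also be expressed as a lookup for use in scripted checks; again, the helper and the version strings it accepts are illustrative, not a VMware or NVIDIA tool:

```shell
#!/bin/sh
# The vSphere ESXi vGPU-migration matrix above as a lookup (illustrative).
vgpu_migration_features() {
  case "$1" in
    6.7U1) echo "vMotion suspend-resume" ;;  # 6.7 U1 and compatible updates
    6.7)   echo "suspend-resume" ;;          # plain 6.7: suspend-resume only
    *)     echo "none" ;;                    # earlier releases: no migration
  esac
}

vgpu_migration_features 6.7   # prints: suspend-resume
```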
Supported guest OS releases: Windows and Linux
This release of NVIDIA vGPU software provides support for the following NVIDIA GPUs on Citrix XenServer, running on validated server hardware platforms:
- Tesla M6
- Tesla M10
- Tesla M60
- Tesla P4
- Tesla P6
- Tesla P40
- Tesla P100 PCIe 16 GB (XenMotion with vGPU is not supported.)
- Tesla P100 SXM2 16 GB (XenMotion with vGPU is not supported.)
- Tesla P100 PCIe 12GB (XenMotion with vGPU is not supported.)
- Tesla V100 SXM2 (XenMotion with vGPU is not supported.)
- Tesla V100 SXM2 32GB (XenMotion with vGPU is not supported.)
- Tesla V100 PCIe (XenMotion with vGPU is not supported.)
- Tesla V100 PCIe 32GB (XenMotion with vGPU is not supported.)
- Tesla V100 FHHL (XenMotion with vGPU is not supported.)
- Tesla T4
Quadro Virtual Workstation on Microsoft Azure
Supported Microsoft Azure VM Sizes
This release of Quadro Virtual Workstation is supported with the Microsoft Azure VM sizes listed in the table. Each VM size is configured with a specific number of NVIDIA GPUs in GPU pass-through mode.
VM Size | NVIDIA GPU | Quantity |
---|---|---|
NCv3 series | Tesla V100 | 1 |
NCv2 series | Tesla P100 | 1 |
ND6 | Tesla P40 | 1 |
ND12 | Tesla P40 | 2 |
ND24 | Tesla P40 | 4 |
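The VM-size-to-GPU-count mapping in the table above can be encoded as a small lookup, for example when sizing deployments; series names and the helper itself are illustrative:

```shell
#!/bin/sh
# Azure VM size -> GPU count, per the table above (illustrative helper).
azure_gpu_count() {
  case "$1" in
    ND6)       echo 1 ;;  # 1x Tesla P40
    ND12)      echo 2 ;;  # 2x Tesla P40
    ND24)      echo 4 ;;  # 4x Tesla P40
    NCv2|NCv3) echo 1 ;;  # 1x Tesla P100 / V100
    *)         echo 0 ;;  # not in the supported list
  esac
}

azure_gpu_count ND12   # prints: 2
```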
Guest OS Support
Quadro Virtual Workstation is available on Microsoft Azure images preconfigured with a choice of 64-bit Windows releases and Linux distributions as a guest OS.
Windows Guest OS Support
Quadro Virtual Workstation is available on Microsoft Azure VMs preconfigured only with the following 64-bit Windows releases as a guest OS:
Note:
If a specific release, even an update release, is not listed, it’s not supported.
- Windows Server 2016
What is Multi-vGPU?
Supported Hypervisors with multiple vGPU support
The following hypervisors support assigning multiple vGPUs to a single VM:
- Nutanix AHV 5.5, 5.8, 5.9, 5.10
- Red Hat Enterprise Linux with KVM 7.5, 7.6
The assignment of more than one vGPU device to a VM is supported only on a subset of vGPUs, and only on the Red Hat Enterprise Linux with KVM and Nutanix AHV releases listed above.
Supported vGPU profiles with multiple vGPU support
Only Q-series vGPUs that are allocated all of the physical GPU's frame buffer are supported.
GPU Architecture | Board | vGPU |
---|---|---|
Volta | V100 SXM2 32GB | V100DX-32Q |
Volta | V100 PCIe 32GB | V100D-32Q |
Volta | V100 SXM2 | V100X-16Q |
Volta | V100 PCIe | V100-16Q |
Volta | V100 FHHL | V100L-16Q |
Pascal | P100 SXM2 | P100X-16Q |
Pascal | P100 PCIe 16GB | P100-16Q |
Pascal | P100 PCIe 12GB | P100C-12Q |
Pascal | P40 | P40-24Q |
Pascal | P6 | P6-8Q |
Pascal | P4 | P4-8Q |
Maxwell | M60 | M60-8Q |
Maxwell | M10 | M10-8Q |
Maxwell | M6 | M6-8Q |
Maximum vGPUs per VM
NVIDIA vGPU software supports up to a maximum of 16 vGPUs per VM.
What's new in NVIDIA vGPU 7.1 – 410.91 / 412.16 / 410.92
NVIDIA has released a new version of vGPU software 7.1 – 410.91 / 412.16 / 410.92 – for the NVIDIA Tesla M6, M10, M60, P4, P6, P40, P100, V100, and T4 platforms.
Included in this release are:
- NVIDIA Virtual GPU Manager versions 410.91 for Citrix XenServer 7.0
- NVIDIA Virtual GPU Manager versions 410.91 for Citrix XenServer 7.1
- NVIDIA Virtual GPU Manager versions 410.91 for Citrix XenServer 7.5
- NVIDIA Virtual GPU Manager versions 410.91 for Citrix XenServer 7.6
- NVIDIA Virtual GPU Manager version 410.91 for VMware vSphere 6.0 Hypervisor (ESXi)
- NVIDIA Virtual GPU Manager version 410.91 for VMware vSphere 6.5 Hypervisor (ESXi)
- NVIDIA Virtual GPU Manager version 410.91 for VMware vSphere 6.7 Hypervisor (ESXi)
- NVIDIA Virtual GPU Manager version 410.91 for Nutanix AHV 5.5, 5.8, 5.9, 5.10
- NVIDIA Virtual GPU Manager version 410.91 for Huawei UVP version RC520
- NVIDIA Windows drivers for vGPU version 412.16
- NVIDIA Linux drivers for vGPU version 410.92
Important:
The GRID vGPU Manager and Windows guest VM drivers must be installed together. Older VM drivers will not function correctly with this release of the GRID vGPU Manager; similarly, older GRID vGPU Managers will not function correctly with this release of the Windows guest drivers.
Windows Guest OS support in NVIDIA vGPU 7.1 – 412.16
GRID vGPU 412.16 supports the following Windows releases as a guest OS:
- Microsoft Windows 7 (32/64bit)
- Microsoft Windows 8 (32/64bit)
- Microsoft Windows 8.1 (32/64bit)
- Microsoft Windows 10 (32/64bit) (1507, 1511, 1607, 1703, 1709, 1803)
- Microsoft Windows Server 2008 R2
- Microsoft Windows Server 2012 R2
- Microsoft Windows Server 2016 (1607, 1709)
Linux Guest OS support in NVIDIA vGPU 7.1 – 410.92
GRID vGPU 410.92 supports the following Linux distributions as a guest OS, only on supported Tesla GPUs:
- Red Hat Enterprise Linux 7.0-7.5
- CentOS 7.0-7.5
- Ubuntu 18.04 LTS
- Ubuntu 16.04 LTS
- Ubuntu 14.04 LTS
Important driver notes to NVIDIA vGPU 7.1
In pass-through mode, GPUs based on the Pascal architecture support only 64-bit guest operating systems. No 32-bit guest operating systems are supported in pass-through mode for these GPUs.
- ESXi 6.0 Update 3 is required for pass-through mode on GPUs based on the Pascal architecture.
- Windows 7 and Windows Server 2008 R2 are not supported in pass-through mode on GPUs based on the Pascal architecture.
- Only Tesla M6 is supported as the primary display device in a bare-metal deployment.
- Red Hat Enterprise Linux with KVM 7.0 and 7.1 are supported only on Tesla M6, Tesla M10, and Tesla M60 GPUs.
- Red Hat Enterprise Linux with KVM supports Windows guest operating systems only under specific Red Hat subscription programs. For details, see Certified guest operating systems for Red Hat Enterprise Linux with KVM.
- Windows 7, Windows Server 2008 R2, 32-bit Windows 10, and 32-bit Windows 8.1 are supported only on Tesla M6, Tesla M10, and Tesla M60 GPUs.
Guide – Update existing NVIDIA vGPU Manager (Hypervisor)
Citrix Hypervisor (aka XenServer)
NVIDIA vGPU Manager 410.91 for Citrix XenServer 7.0 & 7.1
If you have an NVIDIA M6, M10, M60, P4, P6, P40, P100, V100, or T4 vGPU Manager installed in Citrix XenServer, upgrade with one of the methodologies below:
Methodology 1 – the manual way “No GUI”
To upgrade an existing installation of the NVIDIA vGPU driver on Citrix XenServer 7.0 or 7.1, use the rpm -U command:
If you have an NVIDIA Tesla M6 / M10 / M60 / P4 / P6 / P40 / P100 / V100 / T4:
[root@localhost ~]# rpm -Uv NVIDIA-vGPU-xenserver-7.0-410.91.x86_64.rpm (# for XenServer 7.0)
[root@localhost ~]# rpm -Uv NVIDIA-vGPU-xenserver-7.1-410.91.x86_64.rpm (# for XenServer 7.1)
Preparing packages for installation…
The recommendation from NVIDIA is to shut down all VMs using a GPU. The host continues to work during the update, but since the XenServer host itself needs a reboot, it's better to gracefully shut down the VMs first. Once the VMs are shut down and the NVIDIA driver is upgraded, reboot the host.
[root@localhost ~]# xe host-disable
[root@localhost ~]# xe host-reboot
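The manual steps above can be wrapped in a small script. This is a dry-run sketch: `RUN="echo"` prints each command instead of executing it, so the sequence can be reviewed first; clear `RUN` on a real XenServer console to actually run it.

```shell
#!/bin/sh
# Dry-run sketch of the manual XenServer vGPU Manager upgrade flow above.
# RUN="echo" prints the commands; set RUN="" on a real host to execute them.
RUN="echo"
RPM="NVIDIA-vGPU-xenserver-7.1-410.91.x86_64.rpm"  # use the 7.0 package on XenServer 7.0

$RUN rpm -Uv "$RPM"     # upgrade the vGPU Manager package in place
$RUN xe host-disable    # stop the host from accepting new VMs
$RUN xe host-reboot     # reboot so the new driver loads
```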
Methodology 2 – the “GUI” way
Select Install Update… from the Tools menu
Click Next after going through the instructions on the Before You Start section
Click Add on the Select Update section and open NVIDIA’s XenServer Supplemental Pack ISO
If you have NVIDIA M6/M10/M60/P4/P6/P40/P100/V100/T4 select following file:
“NVIDIA-vGPU-xenserver-7.0-410.91.x86_64.iso” (# if you have XenServer 7.0)
“NVIDIA-vGPU-xenserver-7.1-410.91.x86_64.iso” (# if you have XenServer 7.1)
Click Next on the Select Update section
In the Select Servers section, select all the XenServer hosts on which the Supplemental Pack should be installed and click Next
Click Next on the Upload section once the Supplemental Pack has been uploaded to all the XenServer hosts
Click Next on the Prechecks section
Click Install Update on the Update Mode section
Click Finish on the Install Update section
After the XenServer platform has rebooted, verify that the vGPU package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of kernel loaded modules.
Validate from putty or XenCenter CLI
run lsmod | grep nvidia
Verify that the NVIDIA kernel driver can successfully communicate with the physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform.
Check that the driver version is 410.91; if it is, your host is ready for GPU awesomeness and will make your VMs rock.
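The version check can be scripted by parsing nvidia-smi's banner line and comparing against the vGPU Manager version this release ships (410.91, per the package list above). `SAMPLE` below stands in for real nvidia-smi output; on an actual host you would pipe `nvidia-smi` into the same `sed` expression:

```shell
#!/bin/sh
# Sketch: extract the driver version from nvidia-smi's banner and compare it
# to the expected release version. SAMPLE is captured example output, used
# here so the parsing logic can be shown without a GPU present.
SAMPLE='| NVIDIA-SMI 410.91       Driver Version: 410.91       |'
version=$(printf '%s\n' "$SAMPLE" | sed -n 's/.*Driver Version: *\([0-9.]*\).*/\1/p')

if [ "$version" = "410.91" ]; then
  echo "host driver is $version - ready"
else
  echo "unexpected driver version: $version" >&2
fi
```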
NVIDIA vGPU Manager 410.91 for Citrix XenServer 7.5 or 7.6
If you have an NVIDIA M6, M10, M60, P4, P6, P40, P100, V100, or T4 vGPU Manager installed in Citrix XenServer, upgrade with one of the methodologies below:
Methodology 1 – the manual way “No GUI”
To upgrade an existing installation of the NVIDIA driver on Citrix XenServer 7.5 or 7.6, use the rpm -U command:
If you have an NVIDIA Tesla M6 / M10 / M60 / P4 / P6 / P40 / P100 / V100 / T4:
[root@localhost ~]# rpm -Uv NVIDIA-vGPU-xenserver-7.5-410.91.x86_64.rpm (# for XenServer 7.5)
[root@localhost ~]# rpm -Uv NVIDIA-vGPU-xenserver-7.6-410.91.x86_64.rpm (# for XenServer 7.6)
Preparing packages for installation…
The recommendation from NVIDIA is to shut down all VMs using a GPU. The host continues to work during the update, but since the XenServer host itself needs a reboot, it's better to gracefully shut down the VMs first. Once the VMs are shut down and the NVIDIA driver is upgraded, reboot the host.
[root@localhost ~]# xe host-disable
[root@localhost ~]# xe host-reboot
Methodology 2 – the “GUI” way
Select Install Update… from the Tools menu
Click Next after going through the instructions on the Before You Start section
Click Add on the Select Update section and open NVIDIA’s XenServer Supplemental Pack ISO
If you have NVIDIA GRID M6/ M10/M60/P4/P6/P40/P100/V100/T4 select following file:
“NVIDIA-vGPU-xenserver-7.5-410.91.x86_64.iso” if XenServer 7.5
“NVIDIA-vGPU-xenserver-7.6-410.91.x86_64.iso” if XenServer 7.6
Click Next on the Select Update section
In the Select Servers section, select all the XenServer hosts on which the Supplemental Pack should be installed and click Next
Click Next on the Upload section once the Supplemental Pack has been uploaded to all the XenServer hosts
Click Next on the Prechecks section
Click Install Update on the Update Mode section
Click Finish on the Install Update section
After the XenServer platform has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of kernel loaded modules.
Validate from putty or XenCenter CLI
run lsmod | grep nvidia
Verify that the NVIDIA kernel driver can successfully communicate with the physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform.
Check that the driver version is 410.91; if it is, your host is ready for GPU awesomeness and will make your VMs rock.
GRID vGPU Manager 410.91 for VMware vSphere 6.0
To update the NVIDIA GPU VIB, you must uninstall the currently installed VIB and install the new VIB.
To uninstall the currently installed VIB:
- Stop all virtual machines using 3D acceleration.
- Place the ESXi host into Maintenance mode.
- Open a command prompt on the ESXi host.
- Stop the xorg service by running the command: /etc/init.d/xorg stop
- Remove the NVIDIA VMkernel driver by running the command: vmkload_mod -u nvidia
- Identify the NVIDIA VIB name by running the command: esxcli software vib list | grep NVIDIA
- Remove the VIB by running the command: esxcli software vib remove -n nameofNVIDIAVIB
You can now install a new NVIDIA GPU VIB.
- Use the esxcli command to install the vGPU Manager package:
If you have an NVIDIA Tesla M6 / M10 / M60 / P4 / P6 / P40 / P100 / V100 / T4, use the following file:
[root@lesxi ~] esxcli software vib install -v /NVIDIA-vGPU-VMware_ESXi_6.0_Host_Driver_410.91-1OEM.600.0.0.2494585.vib
Reboot the ESXi host and take it out of Maintenance mode.
After the ESXi host has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of kernel loaded modules.
[root@lesxi ~]# vmkload_mod -l | grep nvidia
Validate
run nvidia-smi
Verify that the NVIDIA kernel driver can successfully communicate with the physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform.
Check that the driver version is 410.91; if it is, your host is ready for GPU awesomeness and will make your VMs rock.
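The ESXi VIB replacement flow above can be sketched as a dry-run script: `RUN="echo"` prints each command so the sequence can be reviewed; clear it on a real ESXi host in maintenance mode. The old VIB name is a placeholder, to be taken from `esxcli software vib list | grep NVIDIA`:

```shell
#!/bin/sh
# Dry-run sketch of the ESXi vGPU Manager VIB replacement flow above.
# RUN="echo" prints the commands; set RUN="" on a real ESXi host to execute.
RUN="echo"
VIB="/NVIDIA-vGPU-VMware_ESXi_6.0_Host_Driver_410.91-1OEM.600.0.0.2494585.vib"
OLD="NVIDIA-VIB-NAME"   # placeholder: take the real name from `esxcli software vib list`

$RUN /etc/init.d/xorg stop                  # stop xorg before unloading the module
$RUN vmkload_mod -u nvidia                  # unload the old NVIDIA VMkernel driver
$RUN esxcli software vib remove -n "$OLD"   # remove the currently installed VIB
$RUN esxcli software vib install -v "$VIB"  # install the new vGPU Manager VIB
$RUN reboot                                 # reboot, then exit Maintenance mode
```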
GRID vGPU Manager 410.91 for VMware vSphere 6.5
To update the NVIDIA GPU VIB, you must uninstall the currently installed VIB and install the new VIB.
To uninstall the currently installed VIB:
- Stop all virtual machines using 3D acceleration.
- Place the ESXi host into Maintenance mode.
- Open a command prompt on the ESXi host.
- Stop the xorg service by running the command: /etc/init.d/xorg stop
- Remove the NVIDIA VMkernel driver by running the command: vmkload_mod -u nvidia
- Identify the NVIDIA VIB name by running the command: esxcli software vib list | grep NVIDIA
- Remove the VIB by running the command: esxcli software vib remove -n nameofNVIDIAVIB
You can now install a new NVIDIA GPU VIB.
- Use the esxcli command to install the vGPU Manager package:
If you have an NVIDIA Tesla M6 / M10 / M60 / P4 / P6 / P40 / P100 / V100 / T4, use the following file:
[root@lesxi ~] esxcli software vib install -v /NVIDIA-vGPU-VMware_ESXi_6.5_Host_Driver_410.91-1OEM.650.0.0.2494585.vib
Reboot the ESXi host and take it out of Maintenance mode.
After the ESXi host has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of kernel loaded modules.
[root@lesxi ~]# vmkload_mod -l | grep nvidia
Validate
run nvidia-smi
Verify that the NVIDIA kernel driver can successfully communicate with the physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform.
Check that the driver version is 410.91; if it is, your host is ready for GPU awesomeness and will make your VMs rock.
GRID vGPU Manager 410.91 for VMware vSphere 6.7
To update the NVIDIA GPU VIB, you must uninstall the currently installed VIB and install the new VIB.
To uninstall the currently installed VIB:
- Stop all virtual machines using 3D acceleration.
- Place the ESXi host into Maintenance mode.
- Open a command prompt on the ESXi host.
- Stop the xorg service by running the command: /etc/init.d/xorg stop
- Remove the NVIDIA VMkernel driver by running the command: vmkload_mod -u nvidia
- Identify the NVIDIA VIB name by running the command: esxcli software vib list | grep NVIDIA
- Remove the VIB by running the command: esxcli software vib remove -n nameofNVIDIAVIB
You can now install a new NVIDIA GPU VIB.
- Use the esxcli command to install the vGPU Manager package:
If you have an NVIDIA Tesla M6 / M10 / M60 / P4 / P6 / P40 / P100 / V100 / T4, use the following file:
[root@lesxi ~] esxcli software vib install -v /NVIDIA-vGPU-VMware_ESXi_6.7_Host_Driver_410.91-1OEM.650.0.0.2494585.vib
Reboot the ESXi host and take it out of Maintenance mode.
After the ESXi host has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of kernel loaded modules.
[root@lesxi ~]# vmkload_mod -l | grep nvidia
Validate
run nvidia-smi
Verify that the NVIDIA kernel driver can successfully communicate with the physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform.
Check that the driver version is 410.91; if it is, your host is ready for GPU awesomeness and will make your VMs rock.
Guide – Update existing NVIDIA vGPU Driver (Virtual Machine)
When the hypervisor's NVIDIA vGPU Manager has been updated, the next step is updating the vGPU driver inside the virtual machines.
- 412.16_grid_win8_win7_32bit_international.exe
- 412.16_grid_win8_win7_server2012R2_server2008R2_64bit_international.exe
- 412.16_grid_win10_32bit_international.exe
- 412.16_grid_win10_server2016_64bit_international.exe
- NVIDIA-Linux-x86_64-410.71-grid.run
The vGPU drivers for Windows 7, 8, 8.1, and 10 are available with the NVIDIA GRID vGPU download, for both M6/M10/M60/P4/P6/P40/P100/V100/T4.
Update your golden images and reprovision the virtual machines with the updated vGPU drivers; if you have persistent (non-provisioned) machines, update the vGPU driver on each of them.
#HINT – An express upgrade of the drivers is the recommended option in the setup. If you use the “Custom” option, you get the choice of doing a “clean” installation. The downside of a clean installation is that it removes all profiles and custom settings; the upside is that it reinstalls the complete driver, so no old driver files are left on the system. Most of the time I recommend using a clean installation to keep it vanilla 🙂
#HINT (Citrix XenDesktop 7.12/7.13/7.14/7.15/7.16/7.17/7.18/7 1808.2/1811 customers)
The NVIDIA vGPU API provides direct access to the frame buffer of the GPU, providing the fastest possible frame rate for a smooth and interactive user experience. If you install NVIDIA drivers before you install a VDA with HDX 3D Pro, NVIDIA vGPU is enabled by default.
To enable NVIDIA vGPU on a VM, disable the Microsoft Basic Display Adapter in Device Manager, then run the following command and restart the VDA: NVFBCEnable.exe -enable -noreset
If you install NVIDIA drivers after you install a VDA with HDX 3D Pro, NVIDIA vGPU is disabled. Enable NVIDIA vGPU by using the NVFBCEnable tool provided by NVIDIA.
To disable NVIDIA vGPU, run the following command and then restart the VDA: NVFBCEnable.exe -disable -noreset
Source
NVIDIA Virtual GPU Software Documentation
https://docs.nvidia.com/grid/index.html
NVIDIA Virtual GPU Software Supported Products
NVIDIA Virtual GPU Software Quick Start Guide
NVIDIA Tesla M6/M10/M60/P4/P6/P40/P100/V100/T4 – sources
The vGPU Manager + drivers are only available to customers and NVIDIA NPN partners for M6/M10/M60/P4/P6/P40/P100/V100/T4.
Download if you are a NPN partner
Download if you are a GRID M6, M10, M60, P4, P6, P40, P100, V100, T4 customer