
NVIDIA vGPU Software v7.2

Hi All

It's time to plan updating your NVIDIA Tesla M6, M10, M60, P4, P6, P40, P100, V100, or T4 with NVIDIA vGPU software 7.2.

NVIDIA has released new drivers for NVIDIA vGPU 7.2 in March 2019.

In this article I have also included which public cloud instances are available with NVIDIA GPUs, and whether the license is BYO or provided by the public cloud provider, such as Azure, AWS, or GCP.

NVIDIA is now matching the vGPU Manager version with the Linux driver version; the last time NVIDIA did this was vGPU 6.1 and earlier. To see previous versions, please read more information here.

For a list of validated server platforms, refer to NVIDIA GRID Certified Servers.

Important:

  • Citrix XenServer 7.0, 7.1, and 7.5 are not supported with the NVIDIA Tesla T4
  • NVIDIA vGPU 7.2 is supported with VMware Horizon 7.8, 7.7, 7.6, 7.5, 7.4, 7.3, 7.2, 7.1, 7.0, and 6.2
  • NVIDIA vGPU 7.2 is only supported with Citrix Virtual Apps & Desktops (aka XenDesktop) 7.15, 7.17, 7.18, 7 1808, and 7 1903 in HDX 3D Pro mode
  • VMware vSphere ESXi 5.5 is no longer supported with NVIDIA vGPU 7.2
  • If you are a customer using XenServer 7.2, 7.3, or 7.4, it is no longer supported with NVIDIA vGPU 7.2; plan to upgrade to XenServer 7.5 or 7.6
  • Customers using Citrix XenServer 7.2 or 7.3 are supported with NVIDIA vGPU 6.0 and 6.1
  • Customers using Citrix XenServer 7.4 are supported with NVIDIA vGPU 6.0, 6.1, and 6.2
  • Customers using Citrix XenServer 7.0, 7.1, 7.5, or 7.6 are supported with NVIDIA vGPU 7.2
  • Citrix XenServer 7.0 and 7.1 are not supported with XenMotion with vGPU

This release includes the following software:

  • NVIDIA vGPU Manager version 410.107 for Citrix XenServer, VMware vSphere, RHEL KVM, and Nutanix AHV
  • NVIDIA Windows driver version 412.31
  • NVIDIA Linux driver version 410.107

New in this Release:

  • Support for Citrix Virtual Apps and Desktops version 7 1903
  • Support for VMware Horizon 7.8
  • Support for Nutanix AHV 5.10.1
    • AHV 5.10.1 now supports the NVIDIA Tesla P4 and V100 PCIe 32GB
  • Miscellaneous bug fixes
  • Security updates

Security updates – Since 7.2: Restricting Access to GPU Performance Counters

The NVIDIA graphics driver contains a vulnerability (CVE-2018-6260) that may allow access to application data processed on the GPU through a side channel exposed by the GPU performance counters. To address this vulnerability, update the driver and restrict access to GPU performance counters to allow access only by administrator users and users who need to use CUDA profiling tools.

The GPU performance counters that are affected by this vulnerability are the hardware performance monitors used by the CUDA profiling tools such as CUPTI, Nsight Graphics, and Nsight Compute. These performance counters are exposed on the hypervisor host and in guest VMs only as follows: 

  • On the hypervisor host, they are always exposed. However, the Virtual GPU Manager does not access these performance counters and, therefore, is not affected.
  • In Windows and Linux guest VMs, they are exposed only in VMs configured for GPU pass through. They are not exposed in VMs configured for NVIDIA vGPU.

Security updates – Windows: Restricting Access to GPU Performance Counters for One User by Using NVIDIA Control Panel

Perform this task from the guest VM to which the GPU is passed through. Ensure that you are running NVIDIA Control Panel version 8.1.950.

  1. Open NVIDIA Control Panel in one of the following ways:
    • Right-click on the Windows desktop and select NVIDIA Control Panel from the menu.
    • Open Windows Control Panel and double-click the NVIDIA Control Panel icon.
  2. In NVIDIA Control Panel, select the Manage GPU Performance Counters task in the Developer section of the navigation pane.
  3. Complete the task by following the instructions in the Manage GPU Performance Counters > Developer topic in the NVIDIA Control Panel help.

Security updates – Windows: Restricting Access to GPU Performance Counters Across an Enterprise by Using a Registry Key

You can use a registry key to restrict access to GPU Performance Counters for all users who log in to a Windows guest VM. By incorporating the registry key information into a script, you can automate the setting of this registry key for all Windows guest VMs across your enterprise (see the example command after the steps below).

Perform this task from the guest VM to which the GPU is passed through. CAUTION: Only enterprise administrators should perform this task. Changes to the Windows registry must be made with care; system instability can result if registry keys are incorrectly set.

  1. Set the RmProfilingAdminOnly Windows registry key to 1:
    [HKLM\SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\NVTweak]
    Value: "RmProfilingAdminOnly"
    Type: DWORD
    Data: 00000001
    The data value 1 restricts access, and the data value 0 allows access, to application data processed on the GPU through a side channel exposed by the GPU performance counters.
  2. Restart the VM.
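For example, to roll this out across an enterprise, the registry key from step 1 can be set with a single command from an elevated command prompt in each guest VM. A minimal sketch (test it on one VM first; the restart from step 2 is still required):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\NVTweak" /v RmProfilingAdminOnly /t REG_DWORD /d 1 /f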

Security updates – Linux Guest VMs and Hypervisor Host: Restricting Access to GPU Performance Counters

On systems where unprivileged users don’t need to use GPU performance counters, restrict access to these counters to system administrators, namely users with the CAP_SYS_ADMIN capability set. By default, the GPU performance counters are not restricted to users with the CAP_SYS_ADMIN capability.

Perform this task from the guest VM to which the GPU is passed through or from your hypervisor host machine.

In Linux guest VMs, this task requires sudo privileges. On your hypervisor host machine, this task must be performed as the root user on the machine.

  1. Log in to the guest VM or open a command shell on your hypervisor host machine.
  2. Set the kernel module parameter NVreg_RestrictProfilingToAdminUsers to 1 by adding this parameter to the /etc/modprobe.d/nvidia.conf file (see the scripted example after these steps).
    • If you are setting only this parameter, add an entry for it to the /etc/modprobe.d/nvidia.conf file as follows:
      options nvidia NVreg_RegistryDwords="NVreg_RestrictProfilingToAdminUsers=1"
    • If you are setting multiple parameters, set them in a single entry as in the following example:
      options nvidia NVreg_RegistryDwords="RmPVMRL=0x0" "NVreg_RestrictProfilingToAdminUsers=1"
    If the /etc/modprobe.d/nvidia.conf file does not already exist, create it.
  3. Restart the VM or reboot your hypervisor host machine.
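A minimal shell sketch of step 2 for the single-parameter case (assuming /etc/modprobe.d/nvidia.conf has no existing NVreg_RegistryDwords entry; if it does, merge the parameters into that entry by hand):

echo 'options nvidia NVreg_RegistryDwords="NVreg_RestrictProfilingToAdminUsers=1"' | sudo tee -a /etc/modprobe.d/nvidia.conf
sudo reboot

tee -a appends to the file and creates it if it does not already exist, which covers the note in step 2.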

Other important notes about NVIDIA vGPU 7.2

  • Citrix XenServer 7.2, 7.3, 7.4 are no longer supported.
  • VMware vSphere ESXi 5.5 is no longer supported with NVIDIA vGPU 7.2.
  • Nutanix AHV 5.6 is no longer supported.

Supported NVIDIA GPUs with vGPU 7.2

  • Tesla M6
  • Tesla M10
  • Tesla M60
  • Tesla P4
  • Tesla P6
  • Tesla P40
  • Tesla P100 PCIe 16 GB
  • Tesla P100 SXM2 16 GB
  • Tesla P100 PCIe 12GB
  • Tesla V100 SXM2
  • Tesla V100 SXM2 32GB
  • Tesla V100 PCIe
  • Tesla V100 PCIe 32GB
  • Tesla V100 32GB
  • Tesla V100 FHHL
  • Tesla T4

Supported Hypervisors with NVIDIA vGPU 7.2

  • Citrix XenServer
    Citrix XenServer 7.0, 7.1, 7.5, 7.6 (supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
  • VMware vSphere
    VMware vSphere 6.7 (supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
    VMware vSphere 6.5 (supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
    VMware vSphere 6.0 Update 3, Update 2, Update 1, RTM build 2494585 (supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
  • Microsoft Hyper-V 2016
    Microsoft Windows Server 2016 with Hyper-V role (supported with Tesla M6, M10, M60, P4, P6, P40, P100, T4)
    Note: Microsoft Windows Server with the Hyper-V role supports GPU pass-through over the Microsoft Virtual PCI bus. This bus is supported through paravirtualized drivers.
  • Red Hat Enterprise Linux with KVM
    Red Hat Enterprise Linux with KVM 7.5, 7.6 (vGPU supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
    Red Hat Enterprise Linux with KVM 7.2, 7.3, 7.4 (pass-through only, supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
    Red Hat Enterprise Linux with KVM 7.0, 7.1 (supported with Tesla M6, M10, M60)
    Red Hat Virtualization (RHV) 4.1, 4.2 (vGPU supported with Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4)
  • Nutanix AHV
    Nutanix AOS Hypervisor (AHV) 5.5, 5.6, 5.8, 5.9, 5.10 (supported with Tesla M10, M60, P40)
    Nutanix AOS Hypervisor (AHV) 5.10.1 (supported with Tesla M10, M60, P40, P4, V100 PCIe 32GB)

Supported Cloud Services for vGPU

NVIDIA virtual GPU software is supported on several cloud services, either with bring-your-own-license (BYOL) licensing or with licensing provided by the cloud service.

  • Amazon Web Services Elastic Compute Cloud (AWS EC2)
  • Google Cloud Platform (GCP)
  • Microsoft Azure

Amazon Web Services Elastic Compute Cloud (AWS EC2)

  • Tesla M60 – licensing provided by AWS
    Supported instances: g3.4xlarge, g3.8xlarge, g3.16xlarge, g3s.xlarge
    Supported guest OS: Microsoft Windows Server 2016 (1607, 1709), Microsoft Windows Server 2012 R2, Red Hat Enterprise Linux 7.x (64-bit), Ubuntu 16.04 LTS (64-bit)
  • Tesla M60 – BYOL
    Supported instances: g3.4xlarge, g3.8xlarge, g3.16xlarge, g3s.xlarge
    Supported guest OS: Microsoft Windows Server 2016 (1607, 1709), Microsoft Windows Server 2012 R2, Red Hat Enterprise Linux 7.x (64-bit), Ubuntu 14.04 LTS/16.04 LTS/18.04 LTS (64-bit)
  • Tesla V100 – BYOL
    Supported instances: p3.2xlarge, p3.8xlarge, p3.16xlarge
    Supported guest OS: same as the Tesla M60 BYOL entry above
Related AWS EC2 Documentation

Linux Accelerated Computing Instances
Windows Accelerated Computing Instances


Google Cloud Platform (GCP)

  • Tesla P4, Tesla P100 – licensing provided by GCP
    Supported instances: any predefined machine type; any custom machine type that can be created in a zone
    Supported guest OS: Microsoft Windows Server 2016 (1607, 1709), Microsoft Windows Server 2012 R2, Microsoft Windows Server 2008 R2
  • Tesla P4, Tesla P100, Tesla V100 – BYOL
    Supported instances: any predefined machine type; any custom machine type that can be created in a zone
    Supported guest OS: Microsoft Windows Server 2016 (1607, 1709), Microsoft Windows Server 2012 R2, Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 7.x (64-bit), Ubuntu 16.04 LTS/18.04 LTS (64-bit)
Related GCP Documentation

NVIDIA and Google Cloud Platform
GPUs on Compute Engine


Microsoft Azure

  • Tesla M60 – licensing provided by Microsoft Azure
    Supported VM sizes: NV6, NV12, NV24, NV6s_v2, NV12s_v2, NV24s_v2
    Supported guest OS: Microsoft Windows Server 2016 (1607, 1709), Microsoft Windows Server 2012 R2, Microsoft Windows 10, Red Hat Enterprise Linux 7.x (64-bit), SUSE Linux Enterprise Server 12 SP2, Ubuntu 18.04 LTS (64-bit)
  • Tesla M60 – BYOL
    Supported VM sizes: NV6, NV12, NV24, NV6s_v2, NV12s_v2, NV24s_v2
    Supported guest OS: Microsoft Windows Server 2016 (1607, 1709), Microsoft Windows Server 2012 R2, Microsoft Windows 10, Microsoft Windows 8.1, Microsoft Windows 7, Red Hat Enterprise Linux 7.x (64-bit), SUSE Linux Enterprise Server 12 SP2, Ubuntu 16.04 LTS/18.04 LTS (64-bit)
  • Tesla P40 – BYOL
    Supported VM sizes: ND6s, ND12s, ND24s
    Supported guest OS: same as the Tesla M60 BYOL entry above
  • Tesla P100 – BYOL
    Supported VM sizes: NC6s_v2, NC12s_v2, NC24s_v2
    Supported guest OS: same as the Tesla M60 BYOL entry above
  • Tesla V100 – BYOL
    Supported VM sizes: NC6s_v3, NC12s_v3, NC24s_v3
    Supported guest OS: same as the Tesla M60 BYOL entry above
Related Microsoft Azure Documentation

GPU optimized virtual machine sizes
NVIDIA GPU Driver Extension for Windows
NVIDIA GPU Driver Extension for Linux
Install NVIDIA GPU drivers on N-series VMs running Windows
Install NVIDIA GPU drivers on N-series VMs running Linux

Related NVIDIA Knowledge Base Articles

Known issue: Microsoft Azure Linux image fails to acquire an NVIDIA virtual GPU software license

Supported Hypervisors with Migration of vGPU Across Hosts

  • XenMotion with vGPU is supported with Citrix XenServer 7.5, 7.6
  • vMotion with vGPU is supported with VMware vSphere 6.7 U1

Supported VMware vSphere Hypervisor (ESXi) releases:

  • Release 6.7 U1 and compatible updates support vMotion with vGPU and suspend-resume with vGPU.
  • Release 6.7 supports only suspend-resume with vGPU.
  • Releases earlier than 6.7 do not support any form of vGPU migration.

Supported guest OS releases: Windows and Linux

This release of NVIDIA vGPU software provides support for the following NVIDIA GPUs on Citrix XenServer, running on validated server hardware platforms:

  • Tesla M6
  • Tesla M10
  • Tesla M60
  • Tesla P4
  • Tesla P6
  • Tesla P40
  • Tesla P100 PCIe 16 GB (XenMotion with vGPU is not supported.)
  • Tesla P100 SXM2 16 GB (XenMotion with vGPU is not supported.)
  • Tesla P100 PCIe 12GB (XenMotion with vGPU is not supported.)
  • Tesla V100 SXM2 (XenMotion with vGPU is not supported.)
  • Tesla V100 SXM2 32GB (XenMotion with vGPU is not supported.)
  • Tesla V100 PCIe (XenMotion with vGPU is not supported.)
  • Tesla V100 PCIe 32GB (XenMotion with vGPU is not supported.)
  • Tesla V100 FHHL (XenMotion with vGPU is not supported.)
  • Tesla T4

Quadro Virtual Workstation on Microsoft Azure

Supported Microsoft Azure VM Sizes

This release of Quadro Virtual Workstation is supported with the Microsoft Azure VM sizes listed in the tables below. Each VM size is configured with a specific number of NVIDIA GPUs in GPU pass-through mode.

NCv2 Series VM Sizes

  • NC6 v2: 1 × Tesla P100
  • NC12 v2: 2 × Tesla P100
  • NC24 v2: 4 × Tesla P100

NCv3 Series VM Sizes

  • NC6 v3: 1 × Tesla V100
  • NC12 v3: 2 × Tesla V100
  • NC24 v3: 4 × Tesla V100

ND Series VM Sizes

  • ND6: 1 × Tesla P40
  • ND12: 2 × Tesla P40
  • ND24: 4 × Tesla P40

Note: If an attempt is made to use Quadro Virtual Workstation with an unsupported VM size, a warning is displayed at console login time that the VM size is unsupported.

Guest OS Support

Quadro Virtual Workstation is available on Microsoft Azure images preconfigured with a choice of 64-bit Windows releases and Linux distributions as a guest OS.

Windows Guest OS Support

Quadro Virtual Workstation is available on Microsoft Azure VMs preconfigured only with the following 64-bit Windows releases as a guest OS:

Note:

If a specific release, even an update release, is not listed, it’s not supported.

  • Windows Server 2016

Linux Guest OS Support

Quadro Virtual Workstation is available on Microsoft Azure VMs preconfigured only with the following Linux releases as a guest OS:

Note:

If a specific release, even an update release, is not listed, it’s not supported.

  • Ubuntu 18.04 LTS

What is MULTI-vGPU

Supported Hypervisors with Multiple vGPU Support

The following hypervisors are supported for assigning multiple vGPUs to a single VM:

  • Nutanix AHV 5.5, 5.8, 5.9, 5.10, 5.10.1
  • RHEL KVM 7.5 & 7.6

The assignment of more than one vGPU device to a VM is supported only on a subset of vGPUs and Red Hat Enterprise Linux with KVM releases and Nutanix AHV releases.

Supported vGPU Profiles with Multiple vGPU Support

Only Q-series vGPUs that are allocated all of the physical GPU’s frame buffer are supported.

Each entry lists board: vGPU profile.

  • Volta
    • V100 SXM2 32GB: V100DX-32Q
    • V100 PCIe 32GB: V100D-32Q
    • V100 SXM2: V100X-16Q
    • V100 PCIe: V100-16Q
    • V100 FHHL: V100L-16Q
  • Pascal
    • P100 SXM2: P100X-16Q
    • P100 PCIe 16GB: P100-16Q
    • P100 PCIe 12GB: P100C-12Q
    • P40: P40-24Q
    • P6: P6-8Q
    • P4: P4-8Q
  • Maxwell
    • M60: M60-8Q
    • M10: M10-8Q
    • M6: M6-8Q

Maximum vGPUs per VM

NVIDIA vGPU software supports up to a maximum of 16 vGPUs per VM.

What's new in NVIDIA vGPU 7.2 – 410.107/412.31/410.107

NVIDIA has released a new version of GRID 7.2 – 410.107/412.31/410.107 for NVIDIA vGPU (Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4 platforms).

Included in this release are:

  • NVIDIA Virtual GPU Manager version 410.107 for Citrix XenServer 7.0
  • NVIDIA Virtual GPU Manager version 410.107 for Citrix XenServer 7.1
  • NVIDIA Virtual GPU Manager version 410.107 for Citrix XenServer 7.5
  • NVIDIA Virtual GPU Manager version 410.107 for Citrix XenServer 7.6
  • NVIDIA Virtual GPU Manager version 410.107 for VMware vSphere 6.0 Hypervisor (ESXi)
  • NVIDIA Virtual GPU Manager version 410.107 for VMware vSphere 6.5 Hypervisor (ESXi)
  • NVIDIA Virtual GPU Manager version 410.107 for VMware vSphere 6.7 Hypervisor (ESXi)
  • NVIDIA Virtual GPU Manager version 410.107 for Nutanix AHV 5.5, 5.8, 5.9, 5.10, 5.10.1
  • NVIDIA Virtual GPU Manager version 410.107 for Huawei UVP version RC520
  • NVIDIA Windows drivers for vGPU version 412.31
  • NVIDIA Linux drivers for vGPU version 410.107

Important:

The GRID vGPU Manager and Windows guest VM drivers must be installed together. Older VM drivers will not function correctly with this release of GRID vGPU Manager. Similarly, older GRID vGPU Managers will not function correctly with this release of Windows guest drivers.

Windows Guest OS support in NVIDIA vGPU 7.2 – 412.31

GRID vGPU 412.31 supports the following Windows releases as a guest OS:

  • Microsoft Windows 7 (32/64-bit)
  • Microsoft Windows 8 (32/64-bit)
  • Microsoft Windows 8.1 (32/64-bit)
  • Microsoft Windows 10 (32/64-bit) (1507, 1511, 1607, 1703, 1709, 1803)
  • Microsoft Windows Server 2008 R2
  • Microsoft Windows Server 2012 R2
  • Microsoft Windows Server 2016 (1607, 1709)

Linux Guest OS support in NVIDIA vGPU 7.2 – 410.107

NVIDIA vGPU 410.107 supports the following Linux distributions as a guest OS, only on supported Tesla GPUs:

  • Red Hat Enterprise Linux 7.0-7.5
  • CentOS 7.0-7.5
  • Ubuntu 18.04 LTS
  • Ubuntu 16.04 LTS
  • Ubuntu 14.04 LTS

Important driver notes for NVIDIA vGPU 7.2

In pass-through mode, GPUs based on the Pascal architecture support only 64-bit guest operating systems. No 32-bit guest operating systems are supported in pass-through mode for these GPUs.

  • ESXi 6.0 Update 3 is required for pass-through mode on GPUs based on the Pascal architecture.
  • Windows 7 and Windows Server 2008 R2 are not supported in pass-through mode on GPUs based on the Pascal architecture.
  • Only Tesla M6 is supported as the primary display device in a bare-metal deployment.
  • Red Hat Enterprise Linux with KVM 7.0 and 7.1 are supported only on Tesla M6, Tesla M10, and Tesla M60 GPUs.
  • Red Hat Enterprise Linux with KVM supports Windows guest operating systems only under specific Red Hat subscription programs. For details, see Certified guest operating systems for Red Hat Enterprise Linux with KVM.
  • Windows 7, Windows Server 2008 R2, 32-bit Windows 10, and 32-bit Windows 8.1 are supported only on Tesla M6, Tesla M10, and Tesla M60 GPUs.

Guide – Update existing NVIDIA vGPU Manager (Hypervisor)

Citrix Hypervisor (aka XenServer)

NVIDIA vGPU Manager 410.107 for Citrix XenServer 7.0 & 7.1

If you have an NVIDIA M6, M10, M60, P4, P6, P40, or P100 vGPU Manager installed in Citrix XenServer, upgrade with one of the methodologies below:

Methodology 1 – the manual way “No GUI”

To upgrade an existing installation of the NVIDIA vGPU driver on Citrix XenServer 7.0 or 7.1, use the rpm -U command:

If you have an NVIDIA Tesla M6 / M10 / M60 / P4 / P6 / P40 / P100 / V100 / T4:

[root@localhost ~]# rpm -Uv NVIDIA-vGPU-xenserver-7.0-410.107.x86_64.rpm   # if you have XenServer 7.0

[root@localhost ~]# rpm -Uv NVIDIA-vGPU-xenserver-7.1-410.107.x86_64.rpm   # if you have XenServer 7.1

Preparing packages for installation…

The recommendation from NVIDIA is to shut down all VMs using a GPU; a scripted way to do this is sketched below. The host does continue to work during the update, but since you need to reboot the XenServer itself, it's better to gracefully shut down the VMs. After your VMs have been shut down and you have upgraded the NVIDIA driver, you can reboot your host.
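If you want to script the graceful shutdown as well, the xe CLI can list the running VMs and shut them down first. A sketch (the <vm-uuid> placeholder comes from the list output; repeat xe vm-shutdown for each VM that uses a vGPU):

[root@localhost ~]# xe vm-list power-state=running params=uuid,name-label

[root@localhost ~]# xe vm-shutdown uuid=<vm-uuid>

Then continue with the host commands below.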

[root@localhost ~]# xe host-disable

[root@localhost ~]# xe host-reboot

Methodology 2 – the “GUI” way

  1. Select Install Update… from the Tools menu.
  2. Click Next after going through the instructions on the Before You Start section.
  3. Click Add on the Select Update section and open NVIDIA's XenServer Supplemental Pack ISO. If you have NVIDIA M6/M10/M60/P4/P6/P40/P100/V100/T4, select the following file:
    • "NVIDIA-vGPU-xenserver-7.0-410.107.x86_64.iso" (if you have XenServer 7.0)
    • "NVIDIA-vGPU-xenserver-7.1-410.107.x86_64.iso" (if you have XenServer 7.1)
  4. Click Next on the Select Update section.
  5. In the Select Servers section, select all the XenServer hosts on which the Supplemental Pack should be installed and click Next.
  6. Click Next on the Upload section once the Supplemental Pack has been uploaded to all the XenServer hosts.
  7. Click Next on the Prechecks section.
  8. Click Install Update on the Update Mode section.
  9. Click Finish on the Install Update section.

After the XenServer platform has rebooted, verify that the vGPU package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of loaded kernel modules. Validate from PuTTY or the XenCenter CLI:

run lsmod | grep nvidia

Verify that the NVIDIA kernel driver can successfully communicate with the physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform.

Check that the driver version is 410.107; if it is, your host is ready for GPU awesomeness and will make your VMs rock.

NVIDIA vGPU Manager 410.107 for Citrix XenServer 7.5 or 7.6

If you have an NVIDIA M6, M10, M60, P4, P6, P40, or P100 vGPU Manager installed in Citrix XenServer, upgrade with one of the methodologies below:

Methodology 1 – the manual way “No GUI”

To upgrade an existing installation of the NVIDIA driver on Citrix XenServer 7.5 or 7.6, use the rpm -U command (shown here for XenServer 7.6):

If you have an NVIDIA Tesla M6 / M10 / M60 / P4 / P6 / P40 / P100 / V100:

[root@localhost ~]# rpm -Uv NVIDIA-vGPU-xenserver-7.6-410.107.x86_64.rpm

Preparing packages for installation…

The recommendation from NVIDIA is to shut down all VMs using a GPU (see the xe vm-shutdown sketch earlier). The host does continue to work during the update, but since you need to reboot the XenServer itself, it's better to gracefully shut down the VMs. After your VMs have been shut down and you have upgraded the NVIDIA driver, you can reboot your host.

[root@localhost ~]# xe host-disable

[root@localhost ~]# xe host-reboot

Methodology 2 – the “GUI” way

  1. Select Install Update… from the Tools menu.
  2. Click Next after going through the instructions on the Before You Start section.
  3. Click Add on the Select Update section and open NVIDIA's XenServer Supplemental Pack ISO. If you have NVIDIA GRID M6/M10/M60/P4/P6/P40/P100/V100/T4, select the following file:
    • "NVIDIA-vGPU-xenserver-7.5-410.107.x86_64.iso" (if you have XenServer 7.5)
    • "NVIDIA-vGPU-xenserver-7.6-410.107.x86_64.iso" (if you have XenServer 7.6)
  4. Click Next on the Select Update section.
  5. In the Select Servers section, select all the XenServer hosts on which the Supplemental Pack should be installed and click Next.
  6. Click Next on the Upload section once the Supplemental Pack has been uploaded to all the XenServer hosts.
  7. Click Next on the Prechecks section.
  8. Click Install Update on the Update Mode section.
  9. Click Finish on the Install Update section.

After the XenServer platform has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of loaded kernel modules. Validate from PuTTY or the XenCenter CLI:

run lsmod | grep nvidia

Verify that the NVIDIA kernel driver can successfully communicate with the physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform.

Check that the driver version is 410.107; if it is, your host is ready for GPU awesomeness and will make your VMs rock.

GRID vGPU Manager 410.107 for VMware vSphere 6.0

To update the NVIDIA GPU VIB, you must uninstall the currently installed VIB and install the new VIB.

To uninstall the currently installed VIB:

  1. Stop all virtual machines using 3D acceleration.
  2. Place the ESXi host into Maintenance mode.
  3. Open a command prompt on the ESXi host.
  4. Stop the xorg service by running the command: /etc/init.d/xorg stop
  5. Remove the NVIDIA VMkernel driver by running the command: vmkload_mod -u nvidia
  6. Identify the NVIDIA VIB name by running the command: esxcli software vib list | grep NVIDIA
  7. Remove the VIB by running the command: esxcli software vib remove -n nameofNVIDIAVIB
    You can now install the new NVIDIA GPU VIB.
  8. Use the esxcli command to install the vGPU Manager package (a consolidated sketch of steps 4–8 follows the install command below):

If you have an NVIDIA GRID Tesla M6 / M10 / M60 / P4 / P6 / P40 / P100 / V100 / T4, select the following file:

[root@lesxi ~] # esxcli software vib install -v /NVIDIA-vGPU-VMware_ESXi_6.0_Host_Driver_410.107-1OEM.600.0.0.2494585.vib
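Putting steps 4–8 together, a minimal sketch of the whole swap on ESXi 6.0; the VIB name and the /tmp path here are examples, so substitute the name reported by the vib list command and the location where you copied the new VIB. The same sequence applies to ESXi 6.5 and 6.7 with the matching host driver file:

/etc/init.d/xorg stop
vmkload_mod -u nvidia
esxcli software vib list | grep NVIDIA
esxcli software vib remove -n NVIDIA-VMware_ESXi_6.0_Host_Driver   # example name; use the one listed above
esxcli software vib install -v /tmp/NVIDIA-vGPU-VMware_ESXi_6.0_Host_Driver_410.107-1OEM.600.0.0.2494585.vib
reboot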

After the ESXi host has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of loaded kernel modules:

[root@lesxi ~] # vmkload_mod -l | grep nvidia

Validate

Verify that the NVIDIA kernel driver can successfully communicate with the physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform.

Check that the driver version is 410.107; if it is, your host is ready for GPU awesomeness and will make your VMs rock.

GRID vGPU Manager 410.107 for VMware vSphere 6.5

To update the NVIDIA GPU VIB, you must uninstall the currently installed VIB and install the new VIB.

To uninstall the currently installed VIB:

  1. Stop all virtual machines using 3D acceleration.
  2. Place the ESXi host into Maintenance mode.
  3. Open a command prompt on the ESXi host.
  4. Stop the xorg service by running the command: /etc/init.d/xorg stop
  5. Remove the NVIDIA VMkernel driver by running the command: vmkload_mod -u nvidia
  6. Identify the NVIDIA VIB name by running the command: esxcli software vib list | grep NVIDIA
  7. Remove the VIB by running the command: esxcli software vib remove -n nameofNVIDIAVIB
    You can now install the new NVIDIA GPU VIB.
  8. Use the esxcli command to install the vGPU Manager package:

If you have an NVIDIA GRID Tesla M6 / M10 / M60 / P4 / P6 / P40 / P100 / V100 / T4, select the following file:

[root@lesxi ~] # esxcli software vib install -v /NVIDIA-vGPU-VMware_ESXi_6.5_Host_Driver_410.107-1OEM.650.0.0.2494585.vib

After the ESXi host has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of loaded kernel modules:

[root@lesxi ~] # vmkload_mod -l | grep nvidia

Validate

Verify that the NVIDIA kernel driver can successfully communicate with the physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform.

Check that the driver version is 410.107; if it is, your host is ready for GPU awesomeness and will make your VMs rock.

GRID vGPU Manager 410.107 for VMware vSphere 6.7

To update the NVIDIA GPU VIB, you must uninstall the currently installed VIB and install the new VIB.

To uninstall the currently installed VIB:

  1. Stop all virtual machines using 3D acceleration.
  2. Place the ESXi host into Maintenance mode.
  3. Open a command prompt on the ESXi host.
  4. Stop the xorg service by running the command: /etc/init.d/xorg stop
  5. Remove the NVIDIA VMkernel driver by running the command: vmkload_mod -u nvidia
  6. Identify the NVIDIA VIB name by running the command: esxcli software vib list | grep NVIDIA
  7. Remove the VIB by running the command: esxcli software vib remove -n nameofNVIDIAVIB
    You can now install the new NVIDIA GPU VIB.
  8. Use the esxcli command to install the vGPU Manager package:

If you have an NVIDIA Tesla M6 / M10 / M60 / P4 / P6 / P40 / P100 / V100 / T4, select the following file:

[root@lesxi ~] # esxcli software vib install -v /NVIDIA-vGPU-VMware_ESXi_6.7_Host_Driver_410.107-1OEM.650.0.0.2494585.vib

After the ESXi host has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of loaded kernel modules:

[root@lesxi ~] # vmkload_mod -l | grep nvidia

Validate

Verify that the NVIDIA kernel driver can successfully communicate with the physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform.

Check that the driver version is 410.107; if it is, your host is ready for GPU awesomeness and will make your VMs rock.

Update Existing NVIDIA vGPU Driver (Virtual Machine)

When the hypervisor's NVIDIA vGPU Manager has been updated, the next step is updating the vGPU driver in the virtual machines.

  • 412.31_grid_win8_win7_32bit_international.exe
  • 412.31_grid_win8_win7_server2012R2_server2008R2_64bit_international.exe
  • 412.31_grid_win10_32bit_international.exe
  • 412.31_grid_win10_server2016_64bit_international.exe
  • NVIDIA-Linux-x86_64-410.107-grid.run

The vGPU driver for Windows 7, 8, 8.1, and 10 is available with the NVIDIA vGPU download. It is available for M6/M10/M60/P4/P6/P40/P100/V100/T4.

Update your golden images and reprovision the new virtual machines with the updated vGPU drivers; if you have persistent (stateful) machines, update the vGPU drivers on each one.

#HINT – Express upgrade of drivers is the recommended option in the setup. If you use the "Custom" option, you get the option to do a "clean" installation. The downside of the clean installation is that it removes all profiles and custom settings; the pro is that it reinstalls the complete driver, meaning no old driver files are left on the system. Most of the time I recommend using a clean installation to keep it vanilla 🙂
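If you prefer to script the guest driver update instead of clicking through setup, NVIDIA's Windows driver packages can be run unattended. As a sketch, the -s (silent) and -clean switches are commonly used together for a silent clean install, but verify that the switches behave as expected on a test VM before touching your golden image:

412.31_grid_win10_server2016_64bit_international.exe -s -clean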

#HINT (Citrix XenDesktop 7.12/7.13/7.14/7.15/7.16/7.17/7.18/7 1808.2/1811/1903 customers)

The NVIDIA vGPU API provides direct access to the frame buffer of the GPU, providing the fastest possible frame rate for a smooth and interactive user experience. If you install NVIDIA drivers before you install a VDA with HDX 3D Pro, NVIDIA vGPU is enabled by default.

To enable NVIDIA vGPU on a VM, disable the Microsoft Basic Display Adapter in Device Manager. Run the following command and then restart the VDA: NVFBCEnable.exe -enable -noreset

If you install NVIDIA drivers after you install a VDA with HDX 3D Pro, NVIDIA vGPU is disabled. Enable NVIDIA vGPU by using the NVFBCEnable tool provided by NVIDIA.

To disable NVIDIA vGPU, run the following command and then restart the VDA: NVFBCEnable.exe -disable -noreset

Source

NVIDIA Virtual GPU Software Documentation

https://docs.nvidia.com/grid/index.html

NVIDIA Virtual GPU Software Supported Products

NVIDIA Virtual GPU Software Quick Start Guide

NVIDIA Tesla M6/M10/M60/P4/P6/P40/P100/V100/T4 – sources

The vGPU Manager + drivers are only available to customers and NVIDIA NPN partners for M6/M10/M60/P4/P6/P40/P100/V100/T4.

Download if you are an NPN partner

Download if you are a GRID M6, M10, M60, P4, P6, P40, P100, V100, T4 customer
