Findings: Video conferencing with Azure Virtual Desktop using Zoom

Introduction
Over the last couple of years, many businesses and institutions have adopted and come to rely on large-scale remote working and remote learning environments to maintain workforce and learning continuity. During this time, it has generally been recognised that this type of remote working/learning has been quite successful, with many businesses and institutions continuing remote working/learning practices or introducing hybrid models that combine remote and office work for their staff.
One of the reasons remote working/learning has been successful is the availability of supporting technologies that deliver a high standard of human communication and engagement across large numbers of workers, students and faculty in remote environments. Video conferencing applications, which combine video calls, screen sharing, instant messaging and more, are among the technologies that have made viable remote working/learning environments possible.
But to use these applications to their fullest potential, a robust IT infrastructure is also a must. Many large enterprises, as well as SMBs and other institutions, have centralised their IT environment into a virtualized desktop infrastructure (VDI), either as an on-premises solution or as a managed service from a cloud service provider (CSP). Centralizing resources, applications and data into a single infrastructure allows for better IT management and security of vital resources and data, which can help improve workforce productivity, data security and IT efficiency.
Investigation overview
This blog details a recent technical investigation in which a popular video conferencing application was deployed on AMD-based Azure instances to determine the performance of the application, the number of deployable users in a multi-session environment, and the user experience each person would receive. The instances include both CPU-only instances and CPU+GPU instances, to understand the impact of GPU-enabled resources on user density and experience.
Next, let's look at the various parameters of the investigation.
The Lab:
For the investigation, we had three areas of consideration:
1) Azure session host | 2) Application | 3) End-point devices
Azure Session Host
For the host, we used Microsoft Azure. The system ran Windows 10 Multi-session 1909 over the Microsoft Remote Desktop Protocol (RDP) in Azure Virtual Desktop. The instances used were NV32as_v4 (32 vCPU/112 GB/1x GPU), D32as_v4 (32 vCPU/112 GB) and D32s_v4 (32 vCPU/112 GB). The host was in the West Europe region (Amsterdam) and the tests were conducted in Hinnerup, Denmark, roughly 500 miles from the datacenter.
Applications
The applications we used as part of this investigation are listed below. These are among the most widely used video conferencing applications for remote working/learning.
1) Zoom
The Zoom VDI client was installed on the session host according to Zoom's documentation, with AVD optimization configured in the best way so that media processing is offloaded to the endpoints that support it*.
The Zoom Media Plugin client was installed on the 11 Windows endpoints.
Note: there is no Zoom Media Plugin for macOS or ChromeOS.
The Zoom meeting client for VDI has similar features and functionality compared to Zoom's other clients, but it also has some key differences. If you would like to learn more, read the article from Zoom below:
https://support.zoom.us/hc/en-us/articles/360031441671-VDI-client-features-comparison
Endpoint devices
The investigation used physical end-point devices rather than virtual ones, giving a truer representation of the environment and experience. As a result, the sample size was limited to 15+1 concurrent users due to lab space.
Within the sample, we used 11x Windows PCs, 5x Chromebooks and 1x MacBook Air laptop, each running the latest OS. For the workload, 15 users were connected as guests plus 1 additional user as host of the video call.
The workload:
For the investigation we looked at three types of workloads typically seen with video conferencing, each run over a 30-minute period:
Workload 1 (length of time: 30mins)
1x host and 14x guest video conference (video and audio sharing) + screen sharing of static (PDF) content
Workload 2 (length of time: 30mins)
1x host and 14x guest video conference (video and audio sharing) + screen sharing of dynamic (video) content
Workload 3 (length of time: 30mins)
1x host and 14x guest video conference (video and audio sharing) + screen sharing of dynamic (video) content + guests multi-tasking, taking notes in the Office 365 suite
These workloads become progressively more demanding, moving from video and audio sharing, to dynamic screen sharing, to multi-tasking. This gives a set of different common use cases seen in remote working/learning environments.
Methodology
There are two areas in which we collected the data for this investigation.
How are we collecting the data?
Session host | End-point device
CPU utilization, Memory utilization, GPU utilization, GPU memory utilization | Input/Output Frames Per Second (FPS), Encode time, Input bandwidth per user, Output bandwidth per user, Latency per user
The data was captured every 3 seconds using the built-in Windows OS counters together with Sepago's Azure Monitor for AVD monitoring tools. This allowed the collected data to be cross-checked.
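As an illustration of the sampling approach described above, here is a minimal sketch of a fixed-interval metric poller. This is not the Sepago tooling itself; `sample_metrics` and the dummy counter are hypothetical names, and a real collector would query Windows performance counters instead of a callback:

```python
import time

def sample_metrics(read_metric, interval_s=3.0, samples=5):
    """Poll a zero-argument metric callback at a fixed interval
    and return the list of readings (oldest first)."""
    readings = []
    for _ in range(samples):
        readings.append(read_metric())  # stand-in for a perf-counter query
        time.sleep(interval_s)
    return readings

# Example with a dummy CPU counter instead of a real performance counter:
fake_cpu = iter([30, 37, 28, 33, 31])
print(sample_metrics(lambda: next(fake_cpu), interval_s=0.0, samples=5))
```

In the investigation the interval was 3 seconds over each 30-minute workload, giving roughly 600 samples per metric per run.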
Considerations
Not all applications and devices are built the same
Before we look at the results of this investigation, we need to reflect on the parameters and the support each application has for the different end-point devices, as well as how they are viewed/installed.
Zoom VDI
Zoom VDI is installed on the Azure session hosts:
https://support.zoom.us/hc/en-us/articles/360031768011-Release-notes-for-Virtual-Desktop-Infrastructure-VDI-

It's important to choose the right endpoint client for Azure Virtual Desktop.
In the table below you can see that only Windows endpoints are currently supported for this investigation, but if you have IGEL OS, Ubuntu or HP ThinPro, a Zoom media plugin is also available for these platforms as of December 2021:
https://support.zoom.us/hc/en-us/articles/4415057249549-VDI-releases-and-downloads

How Zoom VDI is optimized
In the most optimal case, the Zoom media plugin will offload video encoding and decoding and communicate over the network directly with the Zoom cloud, bypassing the AVD infrastructure. Control information such as authentication and window location is always sent over the VDI channel.

Importantly, in this investigation Windows desktop is the only endpoint OS platform that supports Zoom media optimization. This pushes the audio and video processing for Zoom calls and meetings locally to the Zoom plugin client, freeing up host resources for greater numbers of users on the VM. It is enabled by installing a dedicated Zoom plugin client for AVD on the endpoint; the Windows Desktop client must still be installed on the endpoint to access AVD.
For the other devices connected to the host (in this case the 5x Chromebooks and 1x MacBook), which are not supported by the Zoom VDI optimization, the audio and video processing is done on the host, consuming more host resources as a result.
Azure Virtual Desktop redirection support
Another area to consider is Azure Virtual Desktop's support for audio and camera redirection with end-point devices. Redirection helps reduce latency with camera and audio, as it is essentially a pass-through to the host.

Speakers: with AVD, speaker redirection is supported across all platforms, whether Windows, ChromeOS, macOS or HTML5.

Camera: camera redirection for AVD is supported on Windows and macOS devices, which means there is no redirection support for ChromeOS and HTML5 (web client). So we expect to see more latency with Chromebooks and devices connected via a web client.
Investigation findings
In this section we are going to review the findings from the investigation. To reiterate, there were 3x session hosts with 1x application tested against 3x workloads, giving a total of 9 findings. Let's begin…
Microsoft Azure NV32as_v4
Metrics captured every 3 seconds with 15 concurrent users using AVD
Session host | CPU Utilization | RAM memory | GPU Utilization | GPU memory
Workload 1 | 30% | 9.1 GB | 99% | 3.3 GB
Workload 2 | 37% | 9.4 GB | 98% | 2.4 GB
Workload 3 | 28% | 13.5 GB | 98% | 3.3 GB
End-point device | Input FPS | Output FPS | Encode time | Input Bandwidth | Output Bandwidth | Latency
Workload 1 | 28 FPS | 24 FPS | 4 ms | 17 MB/s | 12.5 MB/s | 62 ms
Workload 2 | 23 FPS | 25 FPS | 14 ms | 47 MB/s | 11 MB/s | 88 ms
Workload 3 | 21 FPS | 23 FPS | 2 ms | 22 MB/s | 12 MB/s | 67 ms
Observations:
- All 3 workloads work great across devices, with video and audio in sync.
- Delivered a good user experience.
- With 2 or more endpoints using camera redirection, GPU utilization reaches 100%; 2 endpoints account for 80% of video encode*.
Microsoft Azure D32as_v4
Metrics captured every 3 seconds with 15 concurrent users using AVD
Session host | CPU Utilization | RAM memory
Workload 1 | 51% | 9 GB
Workload 2 | 71% | 9.7 GB
Workload 3 | 44% | 10.9 GB
End-point device | Input FPS | Output FPS | Encode time | Input Bandwidth | Output Bandwidth | Latency
Workload 1 | 17 FPS | 23 FPS | 5 ms | 17 MB/s | 13 MB/s | 75 ms
Workload 2 | 19 FPS | 24 FPS | 7 ms | 51 MB/s | 13 MB/s | 87 ms
Workload 3 | 21 FPS | 28 FPS | 4 ms | 32 MB/s | 13 MB/s | 84 ms
Observations:
- All 3 workloads work great across devices, with video and audio in sync.
- Delivered a good user experience.
Microsoft Azure D32s_v4
Metrics captured every 3 seconds with 15 concurrent users using AVD
Session host | CPU Utilization | RAM memory
Workload 1 | 35% | 7.8 GB
Workload 2 | 43% | 7.9 GB
Workload 3 | 37.7% | 9.3 GB
End-point device | Input FPS | Output FPS | Encode time | Input Bandwidth | Output Bandwidth | Latency
Workload 1 | 19 FPS | 23 FPS | 4 ms | 20 MB/s | 13 MB/s | 89 ms
Workload 2 | 17 FPS | 20 FPS | 13 ms | 43 MB/s | 12 MB/s | 127 ms
Workload 3 | 19 FPS | 29 FPS | 5 ms | 40 MB/s | 10 MB/s | 109 ms
Observations:
- All 3 workloads work great across devices, with video and audio in sync.
- Delivered a good user experience.
Summary, scalability, user experience and recommendations
In this summary I will cover what the data means when it comes to scalability and user experience.
Scalability
Let's look at some raw data. Each instance was benchmarked with 15 concurrent users (CCU), and the raw estimate is produced by dividing 100% by the measured CPU utilization and multiplying by the 15 benchmarked users.
Please keep in mind this is a raw estimate, so treat these numbers with caution.
Microsoft Azure NV32as_v4
- Workload 1 could potentially be scaled up to 49 CCU based on 100% CPU utilization per instance
- Workload 2 could potentially be scaled up to 40 CCU based on 100% CPU utilization per instance
- Workload 3 could potentially be scaled up to 52 CCU based on 100% CPU utilization per instance
Microsoft Azure D32as_v4
- Workload 1 could potentially be scaled up to 28 CCU based on 100% CPU utilization per instance
- Workload 2 could potentially be scaled up to 21 CCU based on 100% CPU utilization per instance
- Workload 3 could potentially be scaled up to 33 CCU based on 100% CPU utilization per instance
Microsoft Azure D32s_v4
- Workload 1 could potentially be scaled up to 42 CCU based on 100% CPU utilization per instance
- Workload 2 could potentially be scaled up to 34 CCU based on 100% CPU utilization per instance
- Workload 3 could potentially be scaled up to 40 CCU based on 100% CPU utilization per instance
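The estimate described above can be sketched as a quick calculation. This is an illustration only: `estimate_ccu` is a hypothetical helper name, and because the published figures were presumably derived from unrounded utilization samples, this sketch can differ from them by a user or two:

```python
def estimate_ccu(cpu_util_pct, benchmarked_users=15):
    """Raw scalability estimate: scale the benchmarked user count
    up to 100% CPU utilization, truncating to whole users."""
    return int(benchmarked_users * 100 / cpu_util_pct)

# Measured average CPU utilization per workload (from the tables above):
for name, utils in [("NV32as_v4", [30, 37, 28]),
                    ("D32as_v4",  [51, 71, 44]),
                    ("D32s_v4",   [35, 43, 37.7])]:
    print(name, [estimate_ccu(u) for u in utils])
```

For example, 71% CPU at 15 users scales to roughly 15 × 100 / 71 ≈ 21 users at full utilization, matching the D32as_v4 Workload 2 estimate.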
Summary scalability
The AMD instance NV32as_v4 achieves the highest user density compared to D32as_v4/D32s_v4.
The NV32as_v4 (GPU) instance supports approx. 25%* more users on Workloads 1, 2 and 3 compared to the non-GPU instances D32as_v4 and D32s_v4.
Zoom utilizes the GPU; when the GPU is maxed out, Zoom reverts to software rasterization on the CPU.
D32s_v4 delivers more users than D32as_v4 with Zoom.
CPU utilization with Zoom is lower on the GPU instance than on the non-GPU instances.
Summary User Experience
The user experience is great across all 3 instance types.
The GPU instance delivers lower latency, which means user input feels faster than on the non-GPU instances.
NV32as_v4 delivers 25% more frames than D32s_v4 and D32as_v4.
NV32as_v4 uses less bandwidth than D32s_v4 and D32as_v4.
Recommendations
It is recommended to use Zoom VDI optimization so that video encoding is offloaded to Windows endpoints, leaving the host's encode capacity available for other devices such as macOS/Android. Zoom VDI also reduces the load on the host CPU/GPU, so these resources can be used for other applications such as Office and the Windows GUI.
Windows endpoints deliver the best experience and best encoding capability with AVD and Zoom; a Windows endpoint with the AVD client plus the Zoom media plugin supports Zoom offload. macOS endpoints require a lot of host GPU encode when even 1-2 devices are using video, because macOS has no Zoom media plugin optimization, so the encoding happens on the host rather than on the endpoint.
ChromeOS works well with AVD, but there is noticeable UX lag with keyboard/mouse compared to macOS and Windows endpoints, although screen/video performance is great. Camera redirection is not yet supported on Android or HTML5, so ChromeOS is limited if users want to use a camera with Zoom in Azure Virtual Desktop. There is also no Zoom media plugin optimization for ChromeOS.