How We Test Meeting Services
The UC Test Lab performs four distinct evaluations to test meeting service quality. This article touches on each of them and explains how each piece fits into the overall puzzle of assessing a meeting service's quality.
The four evaluations we perform are:
- Video Quality Evaluation
- Audio Quality Evaluation
- Resource Allocation Evaluation
- Experience Evaluation
Our audio and video quality evaluations use a no-reference testing methodology to produce accurate results for meeting services. Video quality is scored on an open-ended scale starting at 0, where a lower score is better. Audio quality is scored in typical MOS fashion on a 5-point scale, with 5 being the best score an audio file can achieve. The audio evaluation also assesses noise suppression / cancellation, with specific tests for how a meeting service handles steady background noise as well as sudden, acute noises. For both the audio and video evaluations, we gather dozens of samples for each test and average the scores to account for any anomalies.
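To make the aggregation step concrete, here is a minimal sketch of how per-test samples might be averaged so that a single anomalous sample does not skew the result. The sample values and test names are hypothetical and are not the lab's actual data or pipeline.

```python
# Minimal sketch (hypothetical data, not the lab's actual pipeline):
# averaging dozens of per-test samples dilutes the effect of any anomaly.
from statistics import mean

# Hypothetical MOS samples (1-5 scale, higher is better) for one audio test.
audio_mos_samples = [4.1, 4.3, 3.9, 4.2, 4.0, 4.4, 2.1, 4.2]  # 2.1 is an anomaly

# Hypothetical no-reference video scores (0 and up, lower is better).
video_score_samples = [12.5, 11.8, 13.0, 12.2, 30.4, 12.7]  # 30.4 is an anomaly


def average_score(samples: list[float]) -> float:
    """Average all samples for a test; outliers are smoothed rather than dropped."""
    return mean(samples)


print(f"Audio MOS (avg of {len(audio_mos_samples)} samples): "
      f"{average_score(audio_mos_samples):.2f}")
print(f"Video score (avg of {len(video_score_samples)} samples): "
      f"{average_score(video_score_samples):.2f}")
```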
Resource allocation testing is performed by closely monitoring and documenting how a meeting service uses an endpoint's CPU, GPU, RAM, and bandwidth while it performs typical in-meeting functions.
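As an illustration only, the sketch below shows one way an endpoint's CPU, RAM, and network throughput could be sampled during a meeting using the psutil library; the lab's actual tooling and sampling cadence are not described in this article, and GPU usage typically requires vendor-specific tools (such as nvidia-smi), so it is omitted here.

```python
# Illustrative sketch, assuming the psutil library is available.
# GPU monitoring is intentionally omitted (vendor-specific tooling required).
import psutil


def sample_endpoint_usage(duration_s: int = 10, interval_s: float = 1.0) -> None:
    """Log CPU, RAM, and network throughput at a fixed interval."""
    last_net = psutil.net_io_counters()
    for _ in range(int(duration_s / interval_s)):
        cpu_pct = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        ram_used_mb = psutil.virtual_memory().used / (1024 ** 2)
        net = psutil.net_io_counters()
        sent_kbps = (net.bytes_sent - last_net.bytes_sent) * 8 / 1000 / interval_s
        recv_kbps = (net.bytes_recv - last_net.bytes_recv) * 8 / 1000 / interval_s
        last_net = net
        print(f"CPU {cpu_pct:5.1f}% | RAM {ram_used_mb:8.1f} MB | "
              f"up {sent_kbps:8.1f} kbps | down {recv_kbps:8.1f} kbps")


if __name__ == "__main__":
    sample_endpoint_usage()
```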
The experience evaluation is a deep dive into the speeds and feeds of a meeting service. Here we test every feature of a meeting service and compare it to competitors. We perform several time-based tests in this evaluation, such as how long it takes to schedule a meeting with in-org participants from the client, or how long it takes for remote participants to see a content share, among many others. We also perform a feature analysis: whether your meeting service contains the tools your competitors have, whether those tools are logically laid out and user friendly, and more. The length of this evaluation varies with the needs of the vendor, but it typically includes between 75 and 100 individual tests.
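For the time-based tests, a simple stopwatch-style measurement is all that is conceptually involved. The sketch below is a hypothetical illustration of recording elapsed time for a named test step; the actual client interactions (scheduling a meeting, sharing content) are performed against the real service and are represented here by a placeholder.

```python
# Hypothetical sketch of recording a time-based test result; not the lab's tooling.
import time
from contextlib import contextmanager

results: dict[str, float] = {}


@contextmanager
def timed_test(name: str):
    """Record elapsed wall-clock time for a named test step."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[name] = time.perf_counter() - start


# Usage: the body would wrap the real client workflow being timed.
with timed_test("schedule_meeting_in_org"):
    time.sleep(0.5)  # placeholder for the actual scheduling workflow

for name, seconds in results.items():
    print(f"{name}: {seconds:.2f} s")
```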
Every evaluation can be customized with input from the vendor, and each can be performed on its own or combined with others. Each evaluation is intended to be run simultaneously against competing meeting services. The goal is to determine how one meeting service compares against another, using data gathered by an independent third party. Results are for internal vendor use only and are not intended to be made public.
For more information, reach out to the lab at bryan@uctestlab.com.