Reference vs. No-Reference Video Quality Testing
The UC Test Lab performs only no-reference video quality testing for meeting services. This article discusses reference versus no-reference testing and why we feel the no-reference approach is better suited to gauging video quality, specifically for meeting services like Microsoft Teams, Webex, or Zoom.
Video quality testing for meeting services can be tricky, which is why few companies do it. We need to ensure that every frame of a given distorted video sample is accurately scored, and there are two ways to approach this. The first is to take a reference video and run it through a codec to create a distorted file, then compare the two files against each other to show how much the codec degraded the stream. This is an accurate means of testing, but only if you can guarantee that frame one of the reference file corresponds to frame one of the distorted file, and so on; any mismatch between the two files will yield an incorrect assessment. If a codec simply re-encodes the original to a new format, that alignment holds. In video conferencing, however, there are too many factors that will throw frames out of alignment. This is why we feel a no-reference testing model is the best way to test the quality of the video stream.
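To make the frame-alignment requirement concrete, here is a minimal sketch of the full-reference approach using per-frame PSNR (a standard full-reference metric, chosen here for illustration and not the metric we run in the lab); the frame data is synthetic and the helper names are our own.

```python
import numpy as np

def psnr(ref: np.ndarray, dist: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two 8-bit frames (higher is better)."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(255.0 ** 2 / mse)

def score_sequence(ref_frames, dist_frames, offset=0):
    """Score aligned frame pairs; a nonzero offset models frame mismatch."""
    return [psnr(r, d) for r, d in zip(ref_frames, dist_frames[offset:])]

# Toy demo: an identical stream scores perfectly when aligned, but a
# one-frame slip means every comparison is between the wrong pair of frames.
frames = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(10)]
print(score_sequence(frames, frames))            # inf for every pair
print(score_sequence(frames, frames, offset=1))  # low scores despite identical content
```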
Our no-reference methodology, combined with repeatable testing conditions, creates an accurate assessment of video quality. We are able to capture a region of interest in the incoming stream (typically a virtual camera coming from the transmitting endpoint) and score it without the need to match frames for comparison. This not only maintains the accuracy of the scoring but also increases the speed at which results can be gathered: we do not have to go through the effort of matching frames and possibly degrading the sample further by accident. Our easy-to-use capture and scoring method (NIQE, for those interested) provides scores in seconds, allowing us to take dozens or hundreds of samples for averaging across different meeting services and platforms (client vs. browser, for example).
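A simplified sketch of that capture-and-score loop is below. It assumes OpenCV for capture and scikit-video's skvideo.measure.niqe as the NIQE implementation (which we assume accepts a stack of luminance frames and returns one score per frame); the device index and ROI coordinates are placeholders, not our actual configuration.

```python
import cv2
import numpy as np
import skvideo.measure  # assumed to provide skvideo.measure.niqe

# Region of interest: the remote participant's video tile only, cropped to
# exclude the meeting client's UI chrome (placeholder coordinates).
ROI_Y, ROI_X, ROI_H, ROI_W = 120, 240, 480, 640

cap = cv2.VideoCapture(0)  # placeholder: the device receiving the stream
frames = []
while len(frames) < 30:  # grab roughly one second of frames to score
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[ROI_Y:ROI_Y + ROI_H, ROI_X:ROI_X + ROI_W]
    frames.append(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY))  # NIQE runs on luminance
cap.release()

# NIQE is no-reference: each frame is scored on its own natural-scene
# statistics, so no frame matching against an original is needed.
# Lower NIQE scores indicate better quality.
scores = skvideo.measure.niqe(np.stack(frames))
print(f"mean NIQE over {len(frames)} frames: {np.mean(scores):.2f}")
```

Because each frame is scored independently, runs like this can be repeated back to back across services and platforms, which is what makes large sample counts practical.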
One thing we’re often asked is why we score a region of interest instead of the entire video feed. Each meeting service has its own custom user interface, with icons and graphics drawn by the client. Capturing any of these UI elements would contaminate the score, so restricting the capture to a region of interest is critical to achieving an accurate result.
In addition to the above, a no-reference methodology lets us vary network conditions to see how they affect video quality, which is impossible with a full-reference model. Manipulating the network can introduce latency, buffering, or even freezing of the video stream; in a reference model, the frames of the original video and the captured degraded video are then so far out of alignment that accurate scoring becomes impossible.
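On a Linux test network, impairments like these can be introduced with the standard tc netem queueing discipline. The sketch below wraps it in Python for a scripted test run; the interface name and impairment values are illustrative, this is not our lab tooling, and the commands require root privileges.

```python
import subprocess

IFACE = "eth0"  # placeholder: the interface carrying the meeting traffic

def set_impairment(delay_ms: int, loss_pct: float) -> None:
    """Apply a netem qdisc adding fixed delay and random packet loss."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
        check=True,
    )

def clear_impairment() -> None:
    """Remove the netem qdisc, restoring the unimpaired network."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

# Example: add 150 ms of delay with 2% loss, capture and score the
# stream as in the NIQE sketch above, then restore the network.
set_impairment(150, 2.0)
# ... capture ROI frames and NIQE-score them under impairment ...
clear_impairment()
```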
For more information about video quality testing of meeting services or any of our other services, reach out to us at bryan@uctestlab.com.