Video Multimethod Assessment Fusion


Video Multimethod Assessment Fusion is an objective full-reference video quality metric developed by Netflix in cooperation with the University of Southern California and the Laboratory for Image and Video Engineering at The University of Texas at Austin. It predicts subjective video quality based on a reference and a distorted video sequence. The metric can be used to evaluate the quality of different video codecs, encoders, encoding settings, or transmission variants.

History

The metric is based on initial work from the group of Professor C.-C. Jay Kuo at the University of Southern California, which investigated the fusion of different video quality metrics using support vector machines. This led to an "FVQA Index", which was shown to outperform existing image quality metrics on a subjective video quality database.
The method was further developed in cooperation with Netflix, using different subjective video datasets, including a Netflix-owned dataset. Subsequently renamed "Video Multimethod Assessment Fusion", it was announced on the Netflix TechBlog in June 2016, and version 0.3.1 of the reference implementation was made available under a permissive open-source license.
In 2017, the metric was updated to support a custom model that includes an adaptation for cellular phone screen viewing, generating higher quality scores for the same input material. In 2018, a model that predicts the quality of up to 4K resolution content was released. The datasets on which these models were trained have not been made available to the public.

Components

VMAF uses existing image quality metrics and other features to predict video quality:
- Visual Information Fidelity (VIF): considers information fidelity loss at four different spatial scales
- Detail Loss Metric (DLM): measures loss of details and impairments that distract viewer attention
- Motion: measures the temporal difference between adjacent frames on the luminance component
The above features are fused using an SVM-based regression to provide a single output score in the range of 0–100 per video frame, with 100 indicating quality identical to the reference video. These per-frame scores are then temporally pooled over the entire video sequence using the arithmetic mean to provide an overall differential mean opinion score.
Because the training source code is publicly available, the fusion method can be re-trained and evaluated on different video datasets and features.
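The fusion-and-pooling pipeline described above can be sketched in a few lines. This is only an illustration: the real metric fuses its elementary features with a trained SVM regressor, whereas this dependency-light sketch substitutes ordinary least squares, and all feature values and "subjective" scores are synthetic rather than drawn from Netflix's datasets.

```python
# Sketch of VMAF-style fusion and temporal pooling (least squares stands in
# for the SVM regression; features and targets are synthetic).
import numpy as np

rng = np.random.default_rng(0)

# Training set: per-frame elementary features (columns) with known scores.
X_train = rng.random((200, 3))            # 3 hypothetical per-frame features
y_train = 100 * X_train.mean(axis=1)      # synthetic targets in [0, 100]

# "Fusion": learn a mapping from features to a single quality score.
A = np.hstack([X_train, np.ones((200, 1))])   # append an intercept column
weights, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Score a new "video" of 5 frames: one fused score per frame ...
frames = rng.random((5, 3))
per_frame = np.hstack([frames, np.ones((5, 1))]) @ weights

# ... then pool temporally with the arithmetic mean for the clip-level score.
video_score = float(per_frame.mean())
print(round(video_score, 2))
```

Re-training on a different dataset amounts to refitting the regressor on new (features, subjective score) pairs, which is what the public training code enables.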

Performance

An early version of VMAF has been shown to outperform other image and video quality metrics such as SSIM, PSNR-HVS, and VQM-VFD on three of four datasets in terms of prediction accuracy when compared to subjective ratings. Its performance has also been analyzed in another paper, which found that VMAF did not perform better than SSIM and MS-SSIM on a video dataset. In 2017, engineers from RealNetworks reported good reproducibility of Netflix's performance findings.

Software

A reference implementation written in C and Python is published as free software under the terms of the BSD+Patent license. Its source code and additional material are available on GitHub.
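In practice, one common way to run the metric is through FFmpeg's libvmaf filter. The small helper below builds such a command line; the file paths and log name are placeholders, and executing the command requires an FFmpeg build compiled with libvmaf support.

```python
# Hypothetical helper that constructs an ffmpeg invocation of the libvmaf
# filter. Paths and the log file name are placeholders for illustration.
import subprocess

def vmaf_command(distorted: str, reference: str, log_path: str) -> list[str]:
    """Return an ffmpeg argv comparing `distorted` against `reference`."""
    return [
        "ffmpeg",
        "-i", distorted,          # first input: the encoded/distorted video
        "-i", reference,          # second input: the pristine reference
        "-lavfi", f"libvmaf=log_path={log_path}",
        "-f", "null", "-",        # discard decoded output; only the score matters
    ]

cmd = vmaf_command("distorted.mp4", "reference.mp4", "vmaf.json")
# subprocess.run(cmd, check=True)  # uncomment to execute (needs ffmpeg + libvmaf)
print(" ".join(cmd))
```

The filter writes per-frame and pooled VMAF scores to the given log file, which makes it convenient for comparing encoder settings in batch workflows.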