Welcome to the first column on the ACM SIGMM Records from the Video Quality Experts Group (VQEG).
VQEG is an international and independent organisation of technical experts in perceptual video quality assessment from industry, academia, and government organisations.
This column briefly introduces the mission and main activities of VQEG, establishing the starting point of a series of columns that will provide regular updates on the advances within current projects, as well as reports of the VQEG meetings.
The editors of these columns are Jesús Gutiérrez (upper photo, email@example.com), co-chair of the Immersive Media Group of VQEG and Kjell Brunnström (lower photo, firstname.lastname@example.org), general co-chair of VQEG. Feel free to contact them for any further questions, comments or information, and also to check the VQEG website: www.vqeg.org.
The Video Quality Experts Group (VQEG) was born from a need to bring together experts in subjective video quality assessment and objective quality measurement. The first VQEG meeting, held in Turin in 1997, was attended by a small group of experts drawn from ITU-T and ITU-R Study Groups. VQEG was first grounded in basic subjective methodology and objective tool development/verification for video quality assessment, so that the industry could move forward with standardization and implementation. At the beginning, it focused on measuring perceived video quality, since the distribution paths for video and audio were limited and known.
In the more than 20 years since the formation of VQEG, the ecosystem has changed dramatically, and so must the work. Multimedia is now pervasive on all devices and across all methods of distribution, from broadcast to cellular data networks. This shift has driven the expertise within VQEG to expand from the visual (no-audio) quality of video to Quality of Experience (QoE).
The march of technology means that VQEG needs to react and stay on the leading edge of developing, defining and deploying methods and tools that address these new technologies and move the industry forward. This also means that we need to embrace both qualitative and quantitative ways of defining these new spaces and terms. Taking a holistic approach to QoE will enable VQEG to move forward faster, with unprecedented collaboration and execution.
VQEG is open to all interested parties from industry, academia, government organizations and Standards-Developing Organizations (SDOs). There are no fees, no membership applications, and no invitations are needed to participate in VQEG activities. Subscription to the main VQEG email list (email@example.com) constitutes membership in VQEG.
VQEG conducts work via discussions over email reflectors, regularly scheduled conference calls and, in general, two face-to-face meetings per year. There are currently more than 500 people registered across 11 email reflectors, including a main reflector for general announcements relevant to the entire group, and different project reflectors dedicated to technical discussions of specific projects. A LinkedIn group exists as well.
The main objectives of VQEG are:
- To provide a forum, via email lists and face-to-face meetings, for video quality assessment experts to exchange information and work together on common goals.
- To formulate test plans that clearly and specifically define the procedures for performing subjective assessment tests and objective model validations.
- To produce open source databases of multimedia material and test results, as well as software tools.
- To conduct subjective studies of multimedia and immersive technologies and provide a place for collaborative model development to take place.
Currently, several working groups are active within VQEG, classified under four main topics:
- Subjective Methods: Collaborative efforts to improve subjective video quality test methods.
- Audiovisual HD (AVHD), project “Advanced Subjective Methods” (AVHD-SUB): This group investigates improved audiovisual subjective quality testing methods, an effort that may lead to a revision of ITU-T Rec. P.911. For example, the group has investigated alternative experiment designs for subjective tests, to validate subjective testing of long video sequences that are viewed only once by each subject. In addition, it conducted a joint investigation into the impact of the test environment on mean opinion scores (MOS).
- Psycho-Physiological Quality Assessment (PsyPhyQA): The aim of this project is to establish novel psychophysiology-based techniques and methodologies for video quality assessment and for real-time interaction of humans with advanced video communication environments. Specifically, the project looks at: video quality assessment based on human psychophysiology (including eye gaze, EEG, EKG, EMG, GSR, etc.), computational video quality models based on psychophysiological measurements, signal processing and machine learning techniques for psychophysiology-based video quality assessment, experimental design and methodologies for psychophysiological assessment, and correlates of psychophysics and psychophysiology. PsyPhyQA has published a dataset and a test plan for a common framework for the evaluation of psychophysiological visual quality assessment.
- Statistical Analysis Methods (SAM): This group addresses how to better analyze and improve the quality of data coming from subjective experiments, and how to account for uncertainty when developing objective media quality predictors/models. Its main goals are: to improve the methods used to draw conclusions from subjective experiments, to understand the process of expressing an opinion in a subjective experiment, to improve subjective experiment design to facilitate analysis and applications, to improve the analysis of objective model performance, and to revisit standardised methods for assessing the performance of objective models.
- Objective Metrics: Working towards developing and validating objective video quality metrics.
- Audiovisual HD (AVHD), project “AVHD-AS / P.NATS phase 2”: A joint project of VQEG and ITU-T Study Group 12 Question 14. The main goal is to develop a set of objective models, varying in complexity, type of input and use cases, for the assessment of video quality in HTTP/TCP-based adaptive bitrate streaming services (e.g., YouTube, Vimeo, Amazon Video, Netflix, etc.). For these services, the quality experienced by the end user is affected by video coding degradations and by delivery degradations due to initial buffering, re-buffering and media adaptations caused by changes in bitrate, resolution, and frame rate.
- Computer Generated Imagery (CGI): Focuses on computer-generated content, for both image and video material. The main goals are: creating a large database of computer-generated content, analyzing the content (feature extraction before and after rendering), analyzing the performance of objective quality metrics, evaluating/developing existing/new quality metrics/models for CGI material, and studying rendering adaptation techniques (depending on network constraints). This activity is in line with the ITU-T work item P.BBQCG (Parametric Bitstream-based Quality Assessment of Cloud Gaming Services).
- No Reference Metrics (NORM): This group is an open collaboration developing no-reference metrics and methods for monitoring use-case-specific visual service quality. NORM provides a complementary, industry-driven perspective on QoE, measuring visual quality automatically by means of perceptual indicators. Its main activities are to maintain a list of real-world use cases for visual quality monitoring, a list of potential algorithms and methods for no-reference MOS and/or key indicators (visual artifact detection) for each use case, a list of methods (including datasets) to train and validate the algorithms for each use case, and a list of methods to provide root-cause indication for each use case. In addition, the group encourages open discussion and knowledge sharing on all aspects of no-reference metric research and development.
- Joint Effort Group (JEG) – Hybrid: This group is an open collaboration working to develop a robust Hybrid Perceptual/Bit-Stream model. It has developed and made available routines to create and capture bit-stream data and to parse bit-streams into HMIX files. Efforts are underway to develop subjectively rated video quality datasets with bit-stream data that can be used by all JEG researchers. The goal is to produce one model that combines metrics developed separately by a variety of researchers.
- Quality Assessment for Computer Vision Applications (QACoViA): The goal of this group is to study the visual quality requirements of computer vision methods, focusing especially on: testing methodologies and frameworks to identify the limits of computer vision methods with respect to the visual quality of the ingested content; minimum quality requirements and objective visual quality measures to estimate whether a visual content is within the operating region of computer vision; and delivering implementable algorithms that demonstrate the proposed concept of objective video quality assessment methods for recognition tasks.
- Industry and Applications: Focused on improving the understanding of new video technologies and applications.
- 5G Key Performance Indicators (5GKPI): Studies the relationship between the Key Performance Indicators (KPIs) of new communication networks (namely 5G, but extensible to others) and the QoE of the video services running on top of them. With this aim, the group addresses: the definition of relevant use cases (e.g., video for industrial applications, or mobility scenarios), the study of global QoE aspects for video in mobility and industrial scenarios, the identification of the relevant network KPIs (e.g., bitrate, latency, etc.) and application-level video KPIs (e.g., picture quality, A/V sync, etc.), and the generation of open datasets for algorithm testing and training.
- Immersive Media Group (IMG): This group researches quality assessment of immersive media, with the main goals of generating datasets of immersive media content, validating subjective test methods, and performing baseline quality assessment of immersive systems, providing guidelines for QoE evaluation. The technologies covered by this group include 360-degree content, virtual/augmented/mixed reality, stereoscopic 3D content, Free Viewpoint Video, multiview technologies, light field content, etc.
- Support and Outreach: Responsible for supporting VQEG’s activities.
- eLetter: The goal of the VQEG eLetter is to provide up-to-date technical advances on video quality related topics. Each issue of the VQEG eLetter features a collection of papers authored by well-known researchers, contributed by invited authors or by authors responding to a call for papers. They can be technical papers, summaries/reviews of other publications, best-practice anthologies, reprints of difficult-to-obtain articles, or responses to other articles. VQEG wants the eLetter to be interactive in nature.
- Human Factors for Visual Experiences (HFVE): The objective of this group is to uphold the liaison relationship between VQEG and the IEEE standardization group P3333.1. Examples of ongoing activities in this group are the standard on (deep-learning-based) human-factors assessment of visual experiences with virtual/augmented/mixed reality, and the standards on human factors for the quality assessment of light field imaging (IEEE P3333.1.4) and on quality assessment of high dynamic range technologies.
- Independent Lab Group (ILG): The ILG act as independent arbitrators, whose generous contributions make possible the VQEG validation tests. Their goal is to ensure that all VQEG validation testing is unbiased and done to high quality standards.
- Joint Effort Group (JEG): An activity within VQEG that promotes collaborative efforts to: validate metrics through both subjective dataset completion and metric design, extend subjective datasets in order to better identify the limitations of quality metrics, improve subjective methodologies to address new scenarios and use cases that involve QoE issues, and increase knowledge of both subjective and objective video quality assessment.
- Joint Qualinet-VQEG team on Immersive Media: The objectives of this joint team from Qualinet and VQEG are: to uphold the liaison relationship between both bodies, to inform both Qualinet and VQEG of the activities in the respective organizations (especially on the topic of immersive media), to promote collaborations on other topics (i.e., form new joint teams), and to uphold the liaison relationship with ITU-T SG12, in particular on topics around interactive, augmented and virtual reality QoE.
- Tools and Subjective Labs Setup: The objective of this project is to provide the video quality research community with a wide variety of software tools and guidance in order to facilitate research. Tools are available in the following categories: quality analysis (software to run quality analyses), encoding (video encoding tools), streaming (streaming and extracting information from video streams), subjective test software (tools for running and analyzing subjective tests), and helper tools (miscellaneous helper tools).
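As a small illustration of the statistical analysis that the SAM group works on, the sketch below (a minimal, hypothetical example, not an official VQEG tool) computes the mean opinion score (MOS) and a 95% confidence interval for a single stimulus rated by ten subjects on a 1–5 Absolute Category Rating scale; the names and the sample data are assumptions made for this illustration:

```python
import math
import statistics

def mos_with_ci(scores, t_value=2.262):
    """Mean opinion score and 95% CI half-width for one stimulus.

    `t_value` is the Student's t quantile matching the sample size
    (2.262 corresponds to 10 subjects, i.e. 9 degrees of freedom).
    """
    mos = statistics.mean(scores)
    # CI half-width: t * sample standard deviation / sqrt(n)
    ci = t_value * statistics.stdev(scores) / math.sqrt(len(scores))
    return mos, ci

# Ten hypothetical subjects rating one processed video sequence (ACR 1-5).
scores = [4, 5, 4, 3, 4, 4, 5, 4, 3, 4]
mos, ci = mos_with_ci(scores)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```

Reporting MOS values together with their confidence intervals, rather than alone, is one of the practices the SAM group promotes for drawing sound conclusions from subjective experiments.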
In addition, the Intersector Rapporteur Group on Audiovisual Quality Assessment (IRG-AVQA) studies topics related to video and audiovisual quality assessment (both subjective and objective) across ITU-R Study Group 6 and ITU-T Study Group 12. VQEG co-locates its meetings with the IRG-AVQA to encourage a wider range of experts to contribute to Recommendations.
For more details, and for previously closed projects, please check: https://www.its.bldrdoc.gov/vqeg/projects-home.aspx
VQEG activities are documented in reports and submitted to relevant ITU Study Groups (e.g., ITU-T SG9, ITU-T SG12, ITU-R WP6C), and other SDOs as appropriate. Several VQEG studies have resulted in ITU Recommendations.
| VQEG Project | Description | ITU Recommendations |
| --- | --- | --- |
| Full Reference Television (FRTV) Phase I | Examined the performance of FR and NR models on standard definition video. The test materials used in this test plan and the subjective test data are freely available to researchers. | ITU-T J.143 (2000), ITU-T J.144 (2001), ITU-T J.149 (2004) |
| Full Reference Television (FRTV) Phase II | Examined the performance of FR and NR models on standard definition video, using the DSCQS methodology. | ITU-T J.144 (2004), ITU-R BT.1683 (2004) |
| Multimedia (MM) Phase I | Examined the performance of FR, RR and NR models for VGA, CIF and QCIF video (no audio). | ITU-T J.148 (2003), ITU-T P.910 (2008), ITU-T J.246 (2008), ITU-T J.247 (2008), ITU-T J.340 (2010), ITU-R BT.1683 (2004) |
| Reduced Reference / No Reference Television (RRNR-TV) | Examined the performance of RR and NR models on standard definition video. | ITU-T J.244 (2008), ITU-T J.249 (2010), ITU-R BT.1885 (2011) |
| High Definition Television (HDTV) | Examined the performance of FR, RR and NR models for HDTV. Some of the video sequences used in this test are publicly available in the Consumer Digital Video Library. | ITU-T J.341 (2011), ITU-T J.342 (2011) |
| QART | Studied the subjective quality evaluation of video used for recognition tasks and task-based multimedia applications. | ITU-T P.912 (2008) |
| Hybrid Perceptual Bitstream | Examined the performance of hybrid models for VGA/WVGA and HDTV. | ITU-T J.343 (2014), ITU-T J.343.1-6 (2014) |
| 3DTV | Investigated how to assess 3DTV subjective video quality, covering methodologies, display requirements and evaluation of visual discomfort and fatigue. | ITU-T P.914 (2016), ITU-T P.915 (2016), ITU-T P.916 (2016) |
| Audiovisual HD (AVHD) | On one side, addressed the subjective evaluation of audio-video quality metrics; on the other, developed model standards for video quality assessment of streaming services over reliable transport for resolutions up to 4K/UHD, in collaboration with ITU-T SG12. | ITU-T P.913 (2014), ITU-T P.1204 (2020), ITU-T P.1204.3 (2020), ITU-T P.1204.4 (2020), ITU-T P.1204.5 (2020) |
The contribution to current ITU standardization efforts is still ongoing. For example, updated texts have been contributed by VQEG on statistical analysis in ITU-T Rec. P.1401, and on subjective quality assessment of 360-degree video in ITU-T P.360-VR.
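The model-performance analysis standardised in documents such as ITU-T Rec. P.1401 rests on a few familiar statistical measures. The sketch below (an illustrative example with made-up numbers, not VQEG code; `pearson`, `rmse` and the sample values are assumptions for this illustration) computes two of them, the Pearson linear correlation coefficient and the root-mean-square error between a model's predictions and subjective MOS values:

```python
import math

def pearson(xs, ys):
    # Pearson linear correlation coefficient between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(xs, ys):
    # Root-mean-square error between predictions and subjective scores.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

# Hypothetical per-stimulus values: subjective MOS vs. model predictions.
mos = [1.2, 2.4, 3.1, 3.9, 4.6]
pred = [1.5, 2.2, 3.3, 3.7, 4.4]
print(f"PCC = {pearson(mos, pred):.3f}, RMSE = {rmse(mos, pred):.3f}")
```

In practice, a higher correlation and a lower RMSE (computed, per P.1401, with the confidence intervals of the subjective scores taken into account) indicate a better-performing objective model.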
Apart from this, VQEG supports research on QoE by providing tools and datasets to the research community. For instance, it is worth noting the wide variety of software tools and guidance provided via GitHub by the VQEG Tools and Subjective Labs Setup group to facilitate research. Another example is the VQEG Image Quality Evaluation Tool (VIQET), an objective no-reference photo quality evaluation tool. Finally, several datasets have been published, which can be found on the websites of the corresponding projects, in the Consumer Digital Video Library, or in other repositories.
General articles for the interested reader about the work of VQEG, especially covering its earlier activities, are [1, 2].
[1] Q. Huynh-Thu, A. Webster, K. Brunnström, and M. Pinson, “VQEG: Shaping Standards on Video Quality”, in 1st International Conference on Advanced Imaging, Tokyo, Japan, 2015.
[2] K. Brunnström, D. Hands, F. Speranza, and A. Webster, “VQEG Validation and ITU Standardisation of Objective Perceptual Video Quality Metrics”, IEEE Signal Processing Magazine, vol. 26, no. 3, pp. 96-101, May 2009.