VQEG Column: VQEG Meeting Jun. 2021 (virtual/online)

Introduction

Welcome to the fifth column on the ACM SIGMM Records from the Video Quality Experts Group (VQEG).
The last VQEG plenary meeting took place online from 7 to 11 June 2021. Like the previous meeting held in December 2020, it was organized online (this time by Kingston University), with multiple sessions spread over five days to allow remote participation of people from 22 different countries across the Americas, Asia, and Europe. More than 100 participants registered for the meeting and could attend the 40 presentations and several discussions that took place across all working groups.
This column provides an overview of the recently completed VQEG plenary meeting, while all the information, minutes, and files (including the presented slides) from the meeting are available online on the VQEG meeting website.

Group picture of the VQEG Meeting 7-11 June 2021.

Several presentations of state-of-the-art works may be of interest to the SIGMM community, in addition to the contributions from various VQEG groups to several ITU work items. Also noteworthy are the progress on the new activities launched at the last VQEG plenary meeting (related to Live QoE assessment, SI/TI clarification, an implementers guide for video quality metrics for coding applications, and the inclusion of video quality metrics as metadata in compressed streams), as well as the proposal within the Immersive Media Group for new joint work on the evaluation of immersive communication systems from a task-based or interactive perspective.

We encourage those readers interested in any of the activities going on in the working groups to check their websites and subscribe to the corresponding reflectors, to follow them and get involved.

Overview of VQEG Projects

Audiovisual HD (AVHD)

The AVHD group works on improved subjective and objective methods for video-only and audiovisual quality of commonly available systems. Currently, after the project AVHD/P.NATS2 (a joint collaboration between VQEG and ITU SG12) finished in 2020 [1], two projects are ongoing within the AVHD group: QoE Metrics for Live Video Streaming Applications (Live QoE), which was launched at the last plenary meeting, and Advanced Subjective Methods (AVHD-SUB).
The main discussion during the AVHD sessions was related to the Live QoE project, which is led by Shahid Satti (Opticom) and Rohit Puri (Twitch). In addition to the presentation of the project proposal, the main decisions reached so far were presented (e.g., the use of 20-30 second videos at 1080p resolution with framerates up to 60 fps, the use of ACR as the subjective test methodology, the generation of test conditions, etc.), and open questions were brought up for discussion, especially regarding how to acquire premium content and network traces.
In addition to this discussion, Steve Göring (TU Ilmenau) presented an open-source platform (AVrate Voyager) for crowdsourcing/online subjective tests [2], and Shahid Satti (Opticom) presented the performance results of the Opticom models in the AVHD/P.NATS Phase 2 project. Finally, Ioannis Katsavounidis (Facebook) presented the subjective testing validation of AV1 performance from the Alliance for Open Media (AOM), in order to gather feedback on the test plan and to identify possibly interested testing labs within VQEG. It is also worth noting that this session was recorded to be used as raw multimedia data for the Live QoE project.

Quality Assessment for Health applications (QAH)

The session related to the QAH group included three presentations in addition to the project summary provided by Lucie Lévêque (Polytech Nantes). In particular, Meriem Outtas (INSA Rennes) provided a review on objective quality assessment of medical images and videos. This is one of the topics jointly addressed by the group, which is working on an overview paper in line with the recent review on subjective medical image quality assessment [3]. Moreover, Zohaib Amjad Khan (Université Sorbonne Paris Nord) presented a work on video quality assessment of laparoscopic videos, while Aditja Raj and Maria Martini (Kingston University) presented their work on a multivariate regression-based convolutional neural network model for fundus image quality assessment.

Statistical Analysis Methods (SAM)

The SAM session consisted of three presentations followed by discussions on the topics. One of these was related to describing subjective experiment consistency by a p-value p-p plot [4], and was presented by Jakub Nawała (AGH University of Science and Technology). In addition, Zhi Li (Netflix) and Rafał Figlus (AGH University of Science and Technology) presented the progress on the contribution from SAM to ITU-T to modify Recommendation P.913 to include the MLE model for subject behavior in subjective experiments [5], together with the recently available implementation of this model in Excel. Finally, Pablo Pérez (Nokia Bell Labs) and Lucjan Janowski (AGH University of Science and Technology) presented their work on the possibility of performing subjective experiments with four subjects [6].
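
To make the p-value p-p plot idea concrete, the minimal sketch below shows the generic plotting step: given per-stimulus p-values obtained from a fitted subject model, their empirical distribution is compared against the uniform distribution expected for a consistent experiment. This is our own illustration of a p-p plot rather than the exact procedure of [4], and the p-values used here are synthetic.

```python
# A minimal p-value p-p plot sketch (generic illustration; not the exact method of [4]).
import numpy as np
import matplotlib.pyplot as plt

def pp_plot(p_values):
    """Plot the empirical CDF of p-values against the uniform (theoretical) CDF."""
    p = np.sort(np.asarray(p_values))
    empirical = np.arange(1, len(p) + 1) / len(p)        # empirical CDF at each sorted p-value
    plt.plot(p, empirical, "o", label="experiment")
    plt.plot([0, 1], [0, 1], "k--", label="uniform (consistent experiment)")
    plt.xlabel("theoretical probability")
    plt.ylabel("empirical probability")
    plt.legend()
    plt.show()

pp_plot(np.random.uniform(size=200))                     # synthetic p-values for illustration
```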

Computer Generated Imagery (CGI)

Nabajeet Barman (Kingston University) presented a report on the current activities of the CGI group. The main current working topics are related to gaming quality assessment methodologies, quality prediction, and codec comparison for CG content. This group is closely collaborating with ITU-T SG12, as reflected by its support in the completion of three work items: ITU-T Rec. G.1032 on influence factors on gaming quality of experience, ITU-T Rec. P.809 on subjective evaluation methods for gaming quality, and ITU-T Rec. G.1072 on an opinion model for gaming applications. Furthermore, CGI is contributing to three new work items: ITU-T work item P.BBQCG on parametric bitstream-based quality assessment of cloud gaming services, ITU-T work item G.OMMOG on opinion models for mobile online gaming applications, and ITU-T work item P.CROWDG on subjective evaluation of gaming quality with a crowdsourcing approach.
In addition, four presentations were scheduled during the CGI slots. The first one was delivered by Joel Jung (Tencent Media Lab) and David Lindero (Ericsson), who presented the details of the ITU-T work item P.BBQCG. Another one was related to the evaluation of MPEG-5 Part 2 (LCEVC) for gaming video streaming applications, presented by Nabajeet Barman (Kingston University) and Saman Zadtootaghaj (Dolby Laboratories). In addition, Nabajeet Barman and Maria Martini (Kingston University) presented a dataset, a codec comparison, and the challenges related to user-generated HDR gaming video streaming [7]. Finally, JP Tauscher (Technische Universität Braunschweig) presented his work on EEG-based detection of deep fake images.

No Reference Metrics (NORM)

The session of the NORM group included a presentation on the impact of Spatial and Temporal Information (SI and TI) on video quality and compressibility [8], delivered by Werner Robitza (AVEQ GmbH), which was followed by a fruitful discussion on compression complexity and on the SI/TI clarification activity launched at the last VQEG plenary meeting. In addition, there was another presentation from Mikołaj Leszczuk (AGH University of Science and Technology) on content type indicators for technologies supporting video sequence summarization. Finally, Ioannis Katsavounidis (Facebook) led a discussion on the inclusion of video quality metrics as metadata in compressed streams, with a report on the progress of this activity that was started at the last meeting.
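
For readers unfamiliar with these indicators, SI and TI are defined in ITU-T Rec. P.910 as the maximum over time of the spatial standard deviation of the Sobel-filtered luminance frame and of the luminance frame difference, respectively. The sketch below is a minimal illustration of these definitions; it assumes frames are provided as 2-D numpy luminance arrays and does not reproduce the border handling and implementation details that the SI/TI clarification activity is discussing.

```python
# Minimal SI/TI sketch following the ITU-T P.910 definitions (illustration only).
import numpy as np
from scipy import ndimage

def si_ti(frames):
    si_values, ti_values, previous = [], [], None
    for frame in frames:
        frame = frame.astype(np.float64)                  # luminance plane of one frame
        gx = ndimage.sobel(frame, axis=1)
        gy = ndimage.sobel(frame, axis=0)
        si_values.append(np.std(np.hypot(gx, gy)))        # spatial information per frame
        if previous is not None:
            ti_values.append(np.std(frame - previous))    # temporal information per frame
        previous = frame
    ti = max(ti_values) if ti_values else 0.0
    return max(si_values), ti

frames = [np.random.randint(0, 256, (240, 320)) for _ in range(5)]   # synthetic frames
print(si_ti(frames))
```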

Joint Effort Group (JEG) – Hybrid

The JEG-Hybrid group is currently working on the development of a generally applicable no-reference hybrid perceptual/bitstream model. In this sense, Enrico Masala and Lohic Fotio Tiotsop (Politecnico di Torino) presented the progress on designing a neural-network approach to model single observers using existing subjectively-annotated image and video datasets [9] (the design of subjective tests tailored to the training of this approach is envisioned as future work). In addition to this activity, the group is collaborating with the Sky Group on the “Hodor Project”, which aims to develop a measure that can automatically identify video sequences for which quality metrics are likely to deliver inaccurate Mean Opinion Score (MOS) estimations.
Apart from these joint activities, Yendo Hu (Carnation Communications Inc. and Jimei University) delivered a presentation proposing work on a benchmarking standard to bring quality, bandwidth, and latency into a common measurement domain.

Quality Assessment for Computer Vision Applications (QACoViA)

In addition to a progress report, the QACoViA group scheduled two interesting presentations: one on enhancing artificial intelligence resilience to image coding artifacts through expert training (by Alban Marie from INSA Rennes), and one on providing datasets to train no-reference metrics for computer vision applications (by Carolina Whitaker from NTIA/ITS).

5G Key Performance Indicators (5GKPI)

The 5GKPI session consisted of a presentation by Pablo Pérez (Nokia Bell Labs) on the progress achieved by the group since the last plenary meeting in the following efforts: 1) the contribution to ITU-T Study Group 12 Question 13 through the Technical Report on QoE in 5G video services (GSTR-5GQoE), which addresses QoE requirements and factors for use cases such as Tele-operated Driving (ToD), wireless content production, mixed reality offloading, and first responder networks; 2) the contribution to the 5G Automotive Association (5GAA) through a high-level input on general QoE requirements for remote driving, considering for the near future the execution of subjective tests on ToD video quality; and 3) the long-term plan to work on a methodology for creating simple opinion models that estimate average QoE for a given network and use case.

Immersive Media Group (IMG)

Several presentations were delivered during the IMG session that were divided into two blocks: one covering technologies and studies related to the evaluation of immersive communication systems from a task-based or interactive perspective, and another one covering other topics related to the assessment of QoE of immersive media. 
The first set of presentations was related to a new proposal for joint work within IMG connected to the ITU-T work item P.QXM on QoE assessment of eXtended Reality meetings. Irene Viola (CWI) presented an overview of this work item. In addition, Carlos Cortés (Universidad Politécnica de Madrid) presented his work on evaluating the impact of delay on QoE in immersive interactive environments, Irene Viola (CWI) presented a dataset of dynamic human point clouds for immersive telecommunications, Pablo César (CWI) presented their pipeline for social virtual reality [10], and Narciso García (Universidad Politécnica de Madrid) presented their real-time free-viewpoint video system (FVV Live) [11]. After these presentations, Jesús Gutiérrez (Universidad Politécnica de Madrid) led the discussion on joint next steps within IMG, which, in addition to identifying parties interested in joining the effort to study the evaluation of immersive communication systems, also covered the further analyses to be carried out on the subjective tests with short 360-degree videos [12] and the studies assessing quality and other factors (e.g., presence) with long omnidirectional sequences. In this sense, Marta Orduna (Universidad Politécnica de Madrid) presented her subjective study to validate a methodology to assess quality, presence, empathy, attitude, and attention in Social VR [13]. Future progress on these joint activities will be discussed in the group audio calls.
Within the other block of presentations on immersive media topics, Maria Martini (Kingston University), Chulhee Lee (Yonsei University), and Patrick Le Callet (Université de Nantes) presented the status of IEEE standardization on QoE for immersive experiences (IEEE P3333.1.4 on light field and IEEE P3333.1.3 on deep learning-based quality assessment), Kjell Brunnström (RISE) presented their work on legibility and readability in augmented reality [14], Abdallah El Ali (CWI) presented his work on investigating the relationship between momentary emotion self-reports and head and eye movements in HMD-based 360° videos [15], Elijs Dima (Mid Sweden University) presented his study on quality of experience in augmented telepresence considering the effects of viewing positions and depth-aiding augmentation [16], Silvia Rossi (UCL) presented her work towards the behavioural analysis of 6-DoF users when consuming immersive media [17], and Yana Nehme (INSA Lyon) presented a study exploring crowdsourcing for the subjective quality assessment of 3D graphics.

Intersector Rapporteur Group on Audiovisual Quality Assessment (IRG-AVQA) and Q19 Interim Meeting

During the IRG-AVQA session, an overview of the progress and recent works within ITU-R SG6 and ITU-T SG12 was provided. In particular, Chulhee Lee (Yonsei University), in collaboration with other ITU rapporteurs, presented the progress of ITU-R WP6C on recommendations for HDR content, as well as the work items within ITU-T SG12: Question 9 on audio-related topics, Question 13 on gaming and immersive technologies (e.g., augmented/extended reality) among others, Question 14 on recommendations and work items related to the development of video quality models, and Question 19 on work items related to television and multimedia. In addition, the progress of the group “Implementers Guide for Video Quality Metrics (IGVQM)”, launched at the last plenary meeting by Ioannis Katsavounidis (Facebook), was discussed, addressing specific points to push forward the collection of video quality models and datasets to be used to develop an implementers guide for objective video quality metrics for coding applications.

Other updates

The next VQEG plenary meeting will take place online in December 2021.

In addition, VQEG is investigating the possibility of disseminating the videos of all the talks from these plenary meetings via platforms such as YouTube and Facebook.

Finally, given that some modifications are being made to the public FTP of VQEG, if the links to the presentations included in this column do not open in the browser, the reader can download all the presentations in a single compressed file.

References

[1] A. Raake, S. Borer, S. Satti, J. Gustafsson, R.R.R. Rao, S. Medagli, P. List, S. Göring, D. Lindero, W. Robitza, G. Heikkilä, S. Broom, C. Schmidmer, B. Feiten, U. Wüstenhagen, T. Wittmann, M. Obermann, and R. Bitto, “Multi-model standard for bitstream-, pixel-based and hybrid video quality assessment of UHD/4K: ITU-T P.1204”, IEEE Access, vol. 8, pp. 193020-193049, Oct. 2020.
[2] R.R.R. Rao, S. Göring, and A. Raake, “Towards High Resolution Video Quality Assessment in the Crowd”, IEEE Int. Conference on Quality of Multimedia Experience (QoMEX), Jun. 2021.
[3] L. Lévêque, M. Outtas, H. Liu, and L. Zhang, “Comparative study of the methodologies used for subjective medical image quality assessment”, Physics in Medicine & Biology, Jul. 2021 (Accepted).
[4] J. Nawala, L. Janowski, B. Cmiel, and K. Rusek, “Describing Subjective Experiment Consistency by p-Value P–P Plot”, ACM International Conference on Multimedia (ACM MM), Oct. 2020.
[5] Z. Li, C. G. Bampis, L. Krasula, L. Janowski, and I. Katsavounidis, “A Simple Model for Subject Behavior in Subjective Experiments”, arXiv:2004.02067v3, May 2021.
[6] P. Perez, L. Janowski, N. Garcia, M. Pinson, “Subjective Assessment Experiments That Recruit Few Observers With Repetitions (FOWR)”, arXiv:2104.02618, Apr. 2021.
[7] N. Barman, and M. G. Martini, “User Generated HDR Gaming Video Streaming: Dataset, Codec Comparison and Challenges”, IEEE Transactions on Circuits and Systems for Video Technology, May 2021.
[8] W. Robitza, R.R.R. Rao, S. Göring, and A. Raake, “Impact of Spatial and Temporal Information on Video Quality and Compressibility”, IEEE Int. Conference on Quality of Multimedia Experience (QoMEX), Jun. 2021.
[9] L. Fotio Tiotsop, T. Mizdos, M. Uhrina, M. Barkowsky, P. Pocta, and E. Masala, “Modeling and estimating the subjects’ diversity of opinions in video quality assessment: a neural network based approach”, Multimedia Tools and Applications, vol. 80, pp. 3469–3487, Sep. 2020.
[10] J. Jansen, S. Subramanyam, R. Bouqueau, G. Cernigliaro, M. Martos Cabré, F. Pérez, and P. Cesar, “A Pipeline for Multiparty Volumetric Video Conferencing: Transmission of Point Clouds over Low Latency DASH”, ACM Multimedia Systems Conference (MMSys), May 2020.
[11] P. Carballeira, C. Carmona, C. Díaz, D. Berjón, D. Corregidor, J. Cabrera, F. Morán, C. Doblado, S. Arnaldo, M.M. Martín, and N. García, “FVV Live: A real-time free-viewpoint video system with consumer electronics hardware”, IEEE Transactions on Multimedia, May 2021.
[12] J. Gutiérrez, P. Pérez, M. Orduna, A. Singla, C. Cortés, P. Mazumdar, I. Viola, K. Brunnström, F. Battisti, N. Cieplińska, D. Juszka, L. Janowski, M. Leszczuk, A. Adeyemi-Ejeye, Y. Hu, Z. Chen, G. Van Wallendael, P. Lambert, C. Díaz, J. Hedlund, O. Hamsis, S. Fremerey, F. Hofmeyer, A. Raake, P. César, M. Carli, N. García, “Subjective evaluation of visual quality and simulator sickness of short 360° videos: ITU-T Rec. P.919”, IEEE Transactions on Multimedia, Jul. 2021 (Early Access).
[13] M. Orduna, P. Pérez, J. Gutiérrez, and N. García, “Methodology to Assess Quality, Presence, Empathy, Attitude, and Attention in Social VR: International Experiences Use Case”, arXiv:2103.02550, 2021.
[14] J. Falk, S. Eksvärd, B. Schenkman, B. Andrén, and K. Brunnström “Legibility and readability in Augmented Reality”, IEEE Int. Conference on Quality of Multimedia Experience (QoMEX), Jun. 2021.
[15] T. Xue,  A. El Ali,  G. Ding,  and P. Cesar, “Investigating the Relationship between Momentary Emotion Self-reports and Head and Eye Movements in HMD-based 360° VR Video Watching”, Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, May 2021.
[16] E. Dima, K. Brunnström, M. Sjöström, M. Andersson, J. Edlund, M. Johanson, and T. Qureshi, “Joint effects of depth-aiding augmentations and viewing positions on the quality of experience in augmented telepresence”, Quality and User Experience, vol. 5, Feb. 2020.
[17] S. Rossi, I. Viola, J. Jansen, S. Subramanyam, L. Toni, and P. Cesar, “Influence of Narrative Elements on User Behaviour in Photorealistic Social VR”, International Workshop on Immersive Mixed and Virtual Environment Systems (MMVE), Sep. 28, 2021.

JPEG Column: 91st JPEG Meeting

JPEG Committee issues a Call for Proposals on Holography coding

The 91st JPEG meeting was held online from 19 to 23 April 2021. This meeting saw several activities relating to holographic coding, notably the release of the JPEG Pleno Holography Call for Proposals, consolidated with the definition of the use cases and requirements for holographic coding and the common test conditions that will support the evaluation of the future proposals.

Reconstructed hologram from B-com database (http://plenodb.jpeg.org/).

The 91st meeting was also marked by the start of a new exploration initiative on Non-Fungible Tokens (NFTs), due to the recent interest in this technology in a large number of applications and in particular in digital art. Since NFTs rely on decentralized networks and JPEG has been analysing the implications of Blockchains and distributed ledger technologies in imaging, it is a natural next step to explore how JPEG standardization can facilitate interoperability between applications that make use of NFTs.

The following presents an overview of the major achievements carried out during the 91st JPEG meeting.

The 91st JPEG meeting had the following highlights:

  • JPEG launches call for proposals for the first standard in holographic coding,
  • JPEG NFT,
  • JPEG Fake Media,
  • JPEG AI,
  • JPEG Systems,
  • JPEG XS,
  • JPEG XL,
  • JPEG DNA,
  • JPEG Reference Software.

JPEG launches call for proposals for the first standard in holographic coding

JPEG Pleno aims to provide a standard framework for representing new imaging modalities, such as light field, point cloud, and holographic content. JPEG Pleno Holography is the first standardization effort for a versatile solution to efficiently compress holograms for a wide range of applications ranging from holographic microscopy to tomography, interferometry, and printing and display, as well as their associated hologram types. Key functionalities include support for both lossy and lossless coding, scalability, random access, and integration within the JPEG Pleno system architecture, with the goal of supporting a royalty free baseline.

The final Call for Proposals (CfP) on JPEG Pleno Holography – a milestone in the roll-out of the JPEG Pleno framework – has been issued as the main result of the 91st JPEG meeting, Online, 19-23 April 2021. The deadline for expressions of interest and registration is 1 August 2021. Submissions to the Call for Proposals are due on 1 September 2021.

A second milestone reached at this meeting was the promotion to International Standard of JPEG Pleno Part 2: Light Field Coding (ISO/IEC 21794-2). This standard provides light field coding tools originating from either microlens cameras or camera arrays. Part 1 of this standard, which was promoted to International Standard earlier, provides the overall file format syntax supporting light field, holography and point cloud modalities.

During the 91st JPEG meeting, the JPEG Committee officially began an exciting phase of JPEG Pleno Point Cloud coding standardisation with a focus on learning-based point cloud coding.

The scope of the JPEG Pleno Point Cloud activity is the creation of a learning-based coding standard for point clouds and associated attributes, offering a single-stream, compact compressed-domain representation and supporting advanced, flexible data access functionalities. The JPEG Pleno Point Cloud standard targets interactive human visualization, with significant compression efficiency over state-of-the-art point cloud coding solutions commonly used at equivalent subjective quality, while also enabling effective performance for 3D processing and computer vision tasks. The JPEG Committee expects the standard to support a royalty-free baseline.

The standard is envisioned to provide a number of unique benefits, including an efficient single point cloud representation for both humans and machines. The intent is to provide humans with the ability to visualise and interact with the point cloud geometry and attributes while providing machines the ability to perform 3D processing and computer vision tasks in the compressed domain, enabling lower complexity and higher accuracy through the use of compressed domain features extracted from the original instead of the lossily decoded point cloud.

JPEG NFT

Non-Fungible Tokens have been the focus of much attention in recent months. Several digital assets that NFTs point to are either in existing JPEG formats or can be represented in current and emerging formats under development by the JPEG Committee. Furthermore, several trust and security issues have been raised regarding NFTs and the digital assets they rely on. Here again, the JPEG Committee has a significant track record in security and trust in imaging applications. Building on this background, the JPEG Committee has launched a new exploration initiative around NFTs to better understand the needs in terms of imaging requirements and how existing as well as potential JPEG standards can help bring security and trust to NFTs in a wide range of applications, notably those that rely on content represented in JPEG formats as still and animated pictures and 3D content. The first steps in this initiative involve outreach to stakeholders in NFTs and their applications, and the organization of a workshop to discuss challenges and current solutions for NFTs, notably in the context of applications relevant to the scope of the JPEG Standardization Committee. The JPEG Committee invites interested parties to subscribe to the mailing list of the JPEG NFT exploration via http://listregistration.jpeg.org.

JPEG Fake Media

The JPEG Fake Media exploration activity continues its work to assess standardization needs to facilitate secure and reliable annotation of media asset creation and modifications in good faith usage scenarios as well as in those with malicious intent. At the 91st meeting, the JPEG Committee released an updated version of the “JPEG Fake Media Context, Use Cases and Requirements” document. This new version includes several refinements including an improved and coherent set of definitions covering key terminology. The requirements have been extended and reorganized into three main identified categories: media creation and modification descriptions, metadata embedding framework and authenticity verification framework. The presentations and video recordings of the 2nd Workshop on JPEG Fake Media are now available on the JPEG website. JPEG invites interested parties to regularly visit https://jpeg.org/jpegfakemedia for the latest information and subscribe to the mailing list via http://listregistration.jpeg.org.

JPEG AI

At the 91st meeting, the results of the JPEG AI exploration experiments for the image processing and computer vision tasks defined at the previous 90th meeting were presented and discussed. Based on the analysis of the results, the description of the exploration experiments was improved. This activity will allow the definition of a performance assessment framework for using the latent representation of learning-based image codecs in several visual analysis tasks, such as compressed-domain image classification and compressed-domain material and texture recognition. Moreover, the impact of these experiments on the current version of the Common Test Conditions (CTC) was discussed.

Furthermore, the draft of the Call for Proposals was analysed, notably regarding the training dataset and training procedures as well as the submission requirements. The timeline of the JPEG AI work item was discussed, and it was agreed that the final Call for Proposals (CfP) will be issued as an outcome of the 93rd JPEG meeting. The deadline for expressions of interest and registration is 5 November 2021, and the submission of bitstreams and decoded images for the test dataset is due on 30 January 2022.

JPEG Systems

During the 91st meeting, the Draft International Standard (DIS) text of JLINK (ISO/IEC 19566-7) and Committee Draft (CD) text of JPEG Snack (ISO/IEC 19566-8) were completed and will be submitted for ballot. Amendments for JUMBF (ISO/IEC 19566-5 AMD1) and JPEG 360 (ISO/IEC 19566-6 AMD1) received a final review and are being released for publication. In addition, new extensions to JUMBF (ISO/IEC 19566-5) are under consideration to support rapidly emerging use cases related to content authenticity and integrity; updated use cases and requirements are being drafted. Finally, discussions have started to create awareness on how to interact with JUMBF boxes and the information they contain, without breaking integrity or interoperability. Interested parties are invited to subscribe to the mailing list of the JPEG Systems AHG in order to contribute to the above activities via http://listregistration.jpeg.org.

JPEG XS

The second editions of JPEG XS Part 1 (Core coding system) and Part 3 (Transport and container formats) were prepared for Final Draft International Standard (FDIS) balloting, with the intention of having both standards published by October 2021. The second editions integrate new coding and signalling capabilities to support RAW Bayer colour filter array (CFA) images, 4:2:0 sampled images and mathematically lossless coding of up to 12-bits per component. The associated profiles and buffer models are handled in Part 2, which is currently under DIS ballot. The focus now has shifted to work on the second editions of Part 4 (Conformance testing) and Part 5 (Reference software). Finally, the JPEG Committee defined a study to investigate future improvements to high dynamic range (HDR) and mathematically lossless compression capabilities, while still honouring the low-complexity and low-latency requirements. In particular, for RAW Bayer CFA content, the JPEG Committee will work on extensions of JPEG XS supporting lossless compression of CFA patterns at sample bit depths above 12 bits.

JPEG XL

The JPEG Committee has finalized JPEG XL Part 2 (File format), which is now at the FDIS stage. A Main profile has been specified in draft Amendment 1 to Part 1, which entered the draft amendment (DAM) stage of the approval process at the current meeting. The draft Main profile has two levels: Level 5 for end-user image delivery and Level 10 for generic use cases, including image authoring workflows. Now that the criteria for conformance have been determined, the JPEG Committee has defined new core experiments to define a set of test codestreams that provides full coverage of the coding tools. Part 4 (Reference software) is now at the DIS stage. With the first edition FDIS texts of both Part 1 and Part 2 now complete, JPEG XL is ready for wide adoption.

JPEG DNA

The JPEG Committee has continued its exploration of the coding of images in quaternary representation, which is particularly suitable for DNA storage. After a successful third workshop with presentations by stakeholders, two new use cases were identified along with a large number of new requirements, and a new version of the JPEG DNA overview document was issued and is now publicly available. It was decided to continue this exploration by organizing a fourth workshop and conducting further outreach to stakeholders, as well as by continuing to improve the JPEG DNA overview document.

Interested parties are invited to refer to the following URL and to consider joining the effort by registering to the mailing list of JPEG DNA here: https://jpeg.org/jpegdna/index.html.

JPEG Reference Software

The JPEG Committee is pleased to announce that its standard on the JPEG reference software, 2nd edition, reached the state of International Standard and will be publicly available from both ITU and ISO/IEC.

This standard, to appear as ITU-T T.873 | ISO/IEC 10918-7 (2nd edition), provides reference implementations of the first JPEG standard, used daily throughout the world. The software included in this document guides vendors on how JPEG (ISO/IEC 10918-1) can be implemented and may serve as a baseline and starting point for JPEG encoders or decoders.

This second edition updates the two reference implementations to their latest versions, fixing minor defects in the software.

Final Quote

“JPEG standards continue to be a motor of innovation and an enabler of new applications in imaging as witnessed by the release of the first standard for coding of holographic content.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

Future JPEG meetings are planned as follows:

  • No. 92 will be held online from 7 to 13 July 2021.
  • No. 93 is planned to be held in Berlin, Germany, from 16 to 22 October 2021.

MPEG Column: 134th MPEG Meeting (virtual/online)

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The 134th MPEG meeting was once again held as an online meeting, and the official press release can be found here and comprises the following items:

  • First International Standard on Neural Network Compression for Multimedia Applications
  • Completion of the carriage of VVC and EVC
  • Completion of the carriage of V3C in ISOBMFF
  • Call for Proposals: (a) New Advanced Genomics Features and Technologies, (b) MPEG-I Immersive Audio, and (c) Coded Representation of Haptics
  • MPEG evaluated Responses on Incremental Compression of Neural Networks
  • Progression of MPEG 3D Audio Standards
  • The first milestone of development of Open Font Format (2nd amendment)
  • Verification tests: (a) Low Complexity Enhancement Video Coding (LCEVC) verification test and (b) more application cases of Versatile Video Coding (VVC)
  • Standardization work on Version 2 of VVC and VSEI started

In this column, the focus is on streaming-related aspects including a brief update about MPEG-DASH.

First International Standard on Neural Network Compression for Multimedia Applications

Artificial neural networks have been adopted for a broad range of tasks in multimedia analysis and processing, such as visual and acoustic classification, extraction of multimedia descriptors, or image and video coding. The trained neural networks for these applications contain many parameters (i.e., weights), resulting in a considerable size. Thus, transferring them to several clients (e.g., mobile phones, smart cameras) benefits from a compressed representation of neural networks.

At the 134th MPEG meeting, MPEG Video ratified the first international standard on Neural Network Compression for Multimedia Applications (ISO/IEC 15938-17), designed as a toolbox of compression technologies. The specification contains different methods for

  • parameter reduction (e.g., pruning, sparsification, matrix decomposition),
  • parameter transformation (e.g., quantization), and
  • entropy coding 

methods, which can be assembled into encoding pipelines combining one or more methods from each group (more than one being possible in the case of reduction). A toy illustration of such a pipeline is sketched below.
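
The sketch below is a toy illustration of this toolbox idea only: it chains a magnitude-pruning step (parameter reduction), a uniform scalar quantization step (parameter transformation), and an empirical-entropy estimate standing in for the entropy-coding stage. It is not the ISO/IEC 15938-17 codec, whose actual tools are considerably more elaborate, and the sparsity, bit depth, and weight tensor used here are made up.

```python
# Toy neural-network compression pipeline (illustration of the toolbox idea; not ISO/IEC 15938-17).
import numpy as np

def prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (a simple parameter-reduction step)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize(weights, bits=8):
    """Uniform scalar quantization to integer levels (a parameter-transformation step)."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    scale = scale if scale > 0 else 1.0
    return np.round(weights / scale).astype(np.int32), scale

def entropy_bits_per_symbol(symbols):
    """Empirical entropy in bits/symbol, a rough proxy for the entropy-coding stage."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

w = np.random.randn(10000).astype(np.float32)              # stand-in for one weight tensor
q, scale = quantize(prune(w, sparsity=0.8), bits=8)
compressed_kib = entropy_bits_per_symbol(q) * q.size / 8 / 1024
print(f"approx. {compressed_kib:.1f} KiB vs {w.nbytes / 1024:.1f} KiB raw")
```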

The results show that trained neural networks for many common multimedia problems such as image or audio classification or image compression can be compressed by a factor of 10-20 with no performance loss, and even by a factor of more than 30 with a performance trade-off. The specification is not limited to a particular neural network architecture and is independent of the choice of neural network exchange format. The interoperability with common neural network exchange formats is described in the annexes of the standard.

As neural networks are becoming increasingly important, their communication over heterogeneous networks to a plethora of devices raises various challenges, including the efficient compression that is inevitable and is addressed in this standard. ISO/IEC 15938 is commonly referred to as MPEG-7 (or the “multimedia content description interface”), and this standard now becomes part 17 of MPEG-7.

Research aspects: Like for all compression-related standards, research aspects are related to compression efficiency (lossy/lossless), computational complexity (runtime, memory), and quality-related aspects. Furthermore, the compression of neural networks for multimedia applications probably enables new types of applications and services to be deployed in the (near) future. Finally, simultaneous delivery and consumption (i.e., streaming) of neural networks including incremental updates thereof will become a requirement for networked media applications and services.

Carriage of Media Assets

At the 134th MPEG meeting, MPEG Systems completed the carriage of various media assets in MPEG-2 Systems (Transport Stream) and the ISO Base Media File Format (ISOBMFF).

In particular, the standards for the carriage of Versatile Video Coding (VVC) and Essential Video Coding (EVC) over both MPEG-2 Transport Stream (M2TS) and the ISO Base Media File Format (ISOBMFF) reached their final stages of standardization:

  • For M2TS, the standard defines constraints on elementary streams of VVC and EVC to carry them in packetized elementary stream (PES) packets. Additionally, buffer management mechanisms and a transport system target decoder (T-STD) model extension are also defined.
  • For ISOBMFF, the carriage of codec initialization information for VVC and EVC is defined in the standard. Additionally, it also defines samples and sub-samples reflecting the high-level bitstream structure and independently decodable units of both video codecs. For VVC, signaling and extraction of a certain operating point are also supported.

Finally, MPEG Systems completed the standard for the carriage of Visual Volumetric Video-based Coding (V3C) data using ISOBMFF. The standard supports media comprising multiple independent component bitstreams and considers that only some portions of immersive media assets need to be rendered according to the user's position and viewport. Thus, the metadata indicating the relationship between the regions of the 3D spatial data to be rendered and their location in the bitstream is defined. In addition, the delivery of an ISOBMFF file containing V3C content over DASH and MMT is also specified in this standard.
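
As background for readers less familiar with ISOBMFF, the container is a sequence of boxes, each starting with a 32-bit size and a four-character type, and the carriage standards above define new box and sample-entry types on top of this structure. The following minimal sketch (our illustration, not part of any of the standards above) simply walks the top-level boxes of a file; the file name is hypothetical.

```python
# Minimal sketch: list the top-level ISOBMFF boxes (size + fourcc) of a file.
import struct

def list_boxes(path):
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            consumed = 8
            if size == 1:                                    # 64-bit "largesize" variant
                size = struct.unpack(">Q", f.read(8))[0]
                consumed = 16
            if size == 0:                                    # box extends to the end of the file
                break
            print(box_type.decode("ascii", "replace"), size)
            f.seek(size - consumed, 1)                       # skip payload to the next box

# list_boxes("example_vvc.mp4")                              # hypothetical file name
```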

Research aspects: Carriage of VVC, EVC, and V3C using M2TS or ISOBMFF provides an essential building block within the so-called multimedia systems layer resulting in a plethora of research challenges as it typically offers an interoperable interface to the actual media assets. Thus, these standards enable efficient and flexible provisioning or/and use of these media assets that are deliberately not defined in these standards and subject to competition.

Call for Proposals and Verification Tests

At the 134th MPEG meeting, MPEG issued three Call for Proposals (CfPs) that are briefly highlighted in the following:

  • Coded Representation of Haptics: Haptics provide an additional layer of entertainment and sensory immersion beyond audio and visual media. This CfP aims to specify a coded representation of haptics data, e.g., to be carried using ISO Base Media File Format (ISOBMFF) files in the context of MPEG-DASH or other MPEG-I standards.
  • MPEG-I Immersive Audio: Immersive Audio will complement other parts of MPEG-I (i.e., Part 3, “Immersive Video” and Part 2, “Systems Support”) in order to provide a suite of standards that will support a Virtual Reality (VR) or an Augmented Reality (AR) presentation in which the user can navigate and interact with the environment using 6 degrees of freedom (6 DoF), that being spatial navigation (x, y, z) and user head orientation (yaw, pitch, roll).
  • New Advanced Genomics Features and Technologies: This CfP aims to collect submissions of new technologies that can (i) provide improvements to the current compression, transport, and indexing capabilities of the ISO/IEC 23092 standards suite, particularly applied to data consisting of very long reads generated by 3rd generation sequencing devices, (ii) provide the support for representation and usage of graph genome references, (iii) include coding modes relying on machine learning processes, satisfying data access modalities required by machine learning and providing higher compression, and (iv) support of interfaces with existing standards for the interchange of clinical data.

Detailed information, including instructions on how to respond to the calls for proposals, the requirements that must be considered, the test data to be used, and the submission and evaluation procedures for proponents, is available at www.mpeg.org.

Calls for proposals typically mark the beginning of the formal standardization work, whereas verification tests are conducted once a standard has been completed. At the 134th MPEG meeting, and despite the difficulties caused by the pandemic situation, MPEG completed verification tests for Versatile Video Coding (VVC) and Low Complexity Enhancement Video Coding (LCEVC).

For LCEVC, verification tests measured the benefits of enhancing four existing codecs of different generations (i.e., AVC, HEVC, EVC, VVC) using tools as defined in LCEVC within two sets of tests:

  • The first set of tests compared LCEVC-enhanced encoding with full-resolution single-layer anchors. The average bit rate savings produced by LCEVC when enhancing AVC were determined to be approximately 46% for UHD and 28% for HD; when enhancing HEVC, approximately 31% for UHD and 24% for HD. Test results tend to indicate an overall benefit also when using LCEVC to enhance EVC and VVC.
  • The second set of tests confirmed that LCEVC provided a more efficient means of resolution enhancement of half-resolution anchors than unguided up-sampling. Comparing LCEVC full-resolution encoding with the up-sampled half-resolution anchors, the average bit-rate savings when using LCEVC with AVC, HEVC, EVC and VVC were calculated to be approximately 28%, 34%, 38%, and 32% for UHD and 27%, 26%, 21%, and 21% for HD, respectively.
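
As a side note for readers, average bit rate savings between a codec and an anchor are commonly summarized with the Bjøntegaard delta rate (BD-rate) when objective quality scores are available; the MPEG verification tests above rely on formal subjective assessment, and their exact methodology is not reproduced here. The sketch below is a generic BD-rate computation with made-up rate/PSNR points.

```python
# Generic BD-rate sketch (not the MPEG verification-test methodology).
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bit rate difference (%) of the test codec vs. the anchor (negative = savings)."""
    p_anchor = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)   # log-rate as a cubic in quality
    p_test = np.polyfit(psnr_test, np.log10(rate_test), 3)
    low = max(min(psnr_anchor), min(psnr_test))                    # overlapping quality interval
    high = min(max(psnr_anchor), max(psnr_test))
    int_anchor = np.polyval(np.polyint(p_anchor), high) - np.polyval(np.polyint(p_anchor), low)
    int_test = np.polyval(np.polyint(p_test), high) - np.polyval(np.polyint(p_test), low)
    avg_log_diff = (int_test - int_anchor) / (high - low)
    return (10 ** avg_log_diff - 1) * 100

# Hypothetical rate (kbps) / PSNR (dB) points for an anchor and an enhanced codec:
print(bd_rate([1000, 2000, 4000, 8000], [34.0, 37.0, 40.0, 43.0],
              [700, 1400, 2800, 5600], [34.2, 37.1, 40.2, 43.1]))
```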

For VVC, this was already the second round of verification testing, which included the following aspects:

  • 360-degree video for equirectangular and cubemap formats, where VVC shows on average more than 50% bit rate reduction compared to the previous major generation of MPEG video coding standard known as High Efficiency Video Coding (HEVC), developed in 2013.
  • Low-delay applications such as compression of conversational (teleconferencing) and gaming content, where the compression benefit is about 40% on average.
  • HD video streaming, with an average bit rate reduction of close to 50%.

A previous set of tests for 4K UHD content completed in October 2020 had shown similar gains. These verification tests used formal subjective visual quality assessment testing with “naïve” human viewers. The tests were performed under a strict hygienic regime in two test laboratories to ensure safe conditions for the viewers and test managers.

Research aspects: CfPs offer a unique possibility for researchers to propose research results for adoption into future standards. Verification tests provide objective or/and subjective evaluations of standardized tools which typically conclude the life cycle of a standard. The results of the verification tests are usually publicly available and can be used as a baseline for future improvements of the respective standards including the evaluation thereof.

DASH Update!

Finally, I’d like to provide a brief update on MPEG-DASH! At the 134th MPEG meeting, MPEG Systems recommended the approval of ISO/IEC FDIS 23009-1 5th edition. That is, the MPEG-DASH core specification will be available as a 5th edition sometime this year. Additionally, MPEG requests that this specification be made freely available, which also marks an important milestone in the development of the MPEG-DASH standard. Most importantly, the 5th edition of this standard incorporates CMAF support as well as other enhancements defined in the amendment of the previous edition. Additionally, the MPEG-DASH subgroup of MPEG Systems is already working on the first amendment to the 5th edition, entitled “preroll, nonlinear playback, and other extensions”. It is expected that the 5th edition will also impact related specifications within MPEG as well as in other Standards Developing Organizations (SDOs) such as DASH-IF, i.e., defining interoperability points (IOPs) for various codecs and others, or CTA WAVE (Web Application Video Ecosystem), i.e., defining device playback capabilities such as the Common Media Client Data (CMCD). Both DASH-IF and CTA WAVE provide (conformance) test infrastructure for DASH and CMAF.

An updated overview of DASH standards/features can be found in the Figure below.

MPEG-DASH status as of April 2021.

Research aspects: MPEG-DASH was ratified almost ten years ago, which has resulted in a plethora of research articles, mostly related to adaptive bitrate (ABR) algorithms and their impact on streaming performance, including the Quality of Experience (QoE). An overview of bitrate adaptation schemes, including a list of open challenges and issues, is provided here.
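
As a toy illustration of what such bitrate adaptation schemes do, the sketch below implements a simple throughput-based rule: estimate the available throughput with an exponentially weighted moving average of per-segment measurements and pick the highest representation below a safety margin. The representation bitrates, safety margin, and smoothing weight are illustrative choices, not values taken from MPEG-DASH or from any particular ABR algorithm.

```python
# Toy throughput-based ABR rule (illustration only; not a standardized algorithm).
def ewma(previous, sample, weight=0.8):
    """Exponentially weighted moving average of measured segment throughput."""
    return sample if previous is None else weight * previous + (1 - weight) * sample

def select_representation(bitrates_bps, estimated_throughput_bps, safety=0.8):
    """Pick the highest representation bitrate below a safety fraction of the throughput estimate."""
    affordable = [b for b in sorted(bitrates_bps) if b <= safety * estimated_throughput_bps]
    return affordable[-1] if affordable else min(bitrates_bps)

throughput = None
for measured in (3.2e6, 4.0e6, 2.5e6):                       # per-segment measurements (made up)
    throughput = ewma(throughput, measured)
    print(select_representation([1e6, 2.5e6, 5e6, 8e6], throughput))
```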

The 135th MPEG meeting will be again an online meeting in July 2021. Click here for more information about MPEG meetings and their developments.

Encouraging more Diverse Scientific Collaborations with the ConfFlow application

Introduction

ConfFlow is an application to encourage people with similar or complementary research interests to find each other at conferences. How scientific collaborations are initiated, how people meet, and how the intention to work together develops is an open question. The aim of this follow-up initiative to ConfLab: Meet the Chairs! held at ACM MM 2019 (conflab.ewi.tudelft.nl) is to help people in the multimedia community connect with potential collaborators.

As a community, Multimedia is so diverse that it is easy for community members to miss out on very useful expertise and potentially fruitful collaborations. There is a lot of latent knowledge, and potential synergies could be realised if we were to offer conference attendees an alternative perspective on their similarities to other attendees. As researchers, we typically find connections by talking to people at the conference, whether through scientific presentations, personal introductions, or by chance.

The aim of ConfFlow is to allow attendees to browse their similarity to other attendees by harvesting publicly available information about them related to their research interests. Depending on the richness of experience that users are looking for, ConfFlow aims to offer an alternative way for researchers to make new research connections within a shared similarity space. At the basic level, we define the similarity of attendees with an approach similar to paper-reviewer assignment tools, such as the Toronto Paper Matching System (TPMS). Usually, TPMS is used to match reviewers to papers. In an analogous way, ConfFlow creates a visualised similarity space using the publications of the conference attendees. This allows attendees to interactively explore and find new connections with researchers with complementary (or similar) research interests. More details about ConfFlow can be found in the associated demo paper [1]. An example snapshot of the application is shown in Figure 1 below.
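
The following sketch illustrates this underlying idea in its simplest form; it is not ConfFlow's actual model, and the attendee names and publication texts are made up. Each attendee is represented by the concatenated text of their publications, embedded with TF-IDF, and compared with cosine similarity, whereas ConfFlow and TPMS additionally use more sophisticated matching and a 2-D embedding for visualisation.

```python
# Toy attendee-similarity sketch (TF-IDF + cosine similarity); not ConfFlow's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

attendee_docs = {                                   # hypothetical attendees and publication text
    "alice": "point cloud compression quality of experience immersive video",
    "bob": "adaptive bitrate streaming DASH QoE network throughput",
    "carol": "subjective quality assessment immersive telepresence point cloud",
}
names = list(attendee_docs)
tfidf = TfidfVectorizer().fit_transform(attendee_docs.values())
similarity = cosine_similarity(tfidf)               # attendee-by-attendee similarity matrix

for i, name in enumerate(names):
    ranked = sorted(zip(similarity[i], names), reverse=True)
    best_match = next(n for s, n in ranked if n != name)
    print(name, "->", best_match)
```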

ConfFlow was funded by the SIGMM special initiatives fund which supports initiatives related to boosting excellence and strength of SIGMM, addressing opportunities for growth in the community and SIGMM related activities, as well as nurturing new talent. The aim of ConfFlow is to target building on excellence, strengths, and community. 

Figure 1: Visualisation of ConfFlow

This report records our experience and practical issues related to running ConfFlow at ACM Multimedia last year.

Method

Privacy and Ethical Practices

The aim of ConfFlow was to adhere to the highest levels of ethical practice. One of the debates online relates to what is considered private data: one could consider that deriving novel information from publicly available data can still be an invasion of privacy [2]. ConfFlow was therefore proposed and designed to be opt-in only. This means that, unlike the visualisation seen in Figure 1, all the identities of anyone visiting the ConfFlow application appeared as just an icon unless the person had activated their account and given permission for others to see it. While this might seem quite strict, there can be unforeseen privacy-related questions when social information is extracted from publicly available data, as those who do not choose to opt in can still become exposed.

Due to this strict opt-in procedure, we needed to find an active way to engage conference attendees by advertising the application throughout the conference and by getting access to the conference attendee list so that we could target and encourage those people to activate their accounts. This required close coordination with the General Chairs of ACM Multimedia 2020.

Application Realization

ConfFlow was rolled out at ACM Multimedia 2020 for conference attendees. Shortly after the building of this application was approved, the coronavirus pandemic hit and ACM Multimedia became a virtual conference. Since the embedding space of ConfFlow needs to be built a priori, we needed to have access to the conference attendee list. The workload for the conference organisers increased significantly as a result of the pandemic, so we did not manage to get the logistical support to optimise the impact of the application. Since we could not get this list, we defaulted to visualising the much larger list of accepted authors. Each identity in ConfFlow needs to be manually verified, which also takes considerable effort.

However, there remained the issue that the application was opt-in. Those who tested the application were disappointed because many people were not visible. Many of the authors did not attend the conference anyway, which exacerbated the sparsity issue. Advertising ConfFlow and encouraging participants to activate their accounts was extremely hard due to the virtual format of the conference, which made it difficult to reach the actual conference attendees.

The demo paper for the application was presented at ACM Multimedia 2020 and was positively received.

Discussion and Recommendations

The instantiation of the app was well received by community members and the SIGMM board. There were some teething problems that we aim to resolve in a follow-up edition in 2021, where we will revise the opt-in policy to something that allows for a better user experience whilst remaining careful with individual privacy. We also want to make it possible for users to connect directly in the app with the people they see in the embedding space, so that the use of ConfFlow as a social connector tool becomes more explicit. We also plan to focus on different ways to advertise and communicate the application to a wider user base. Finally, due to the considerable effort required to verify the identities of all individuals in the visualisations, we would like to build a more efficient procedure to make the visualisations in future years less manually intensive. To this end, the SIGMM board has funded a second edition of ConfFlow so that these improvements can be made and we can realise the full potential of the idea, while minimising the additional logistical support required from conference general chairs. We look forward to seeing its impact on future research collaborations.

Acknowledgements

ConfFlow was supported in part by the SIGMM New Initiatives Fund and the Dutch NWO funded MINGLE project number 639.022.606. We thank users who gave feedback on the application during prototyping and implementation and the General Chairs of ACM Multimedia 2020 for their support.

References

[1] Ekin Gedik and Hayley Hung. 2020. ConfFlow: A Tool to Encourage New Diverse Collaborations. In Proceedings of the 28th ACM International Conference on Multimedia (MM ’20). Association for Computing Machinery, New York, NY, USA, 4562–4564. DOI:https://doi.org/10.1145/3394171.3414459.
[2] Townsend, L., & Wallace, C, 2016. Social Media Research: A Guide to Ethics.

An interview with Irene Viola

Irene at the beginning of her research career.

Describe your journey into research from your youth up to the present. What foundational lessons did you learn from this journey? Why were you initially attracted to multimedia?

My passion for multimedia stems from graphic design, actually. As a teenager, I taught myself Photoshop and I was playing around with coding websites. I chose Cinema and Media Engineering as my bachelor to combine the programming aspects with a media-based sensibility, and there I discovered that all the filters I had used in Photoshop had clear mathematical bases. I was hooked! I think the fact that I was coming from a more graphics background led me to always keep in mind the users who would see the end product. Applying filters and changing the appearance of an image or video needs to consider how the final user will engage with the content, how they will experience it. I think it has been very helpful in my research in the quality of experience for multimedia content.

Tell us more about your vision and objectives behind your current roles? What do you hope to accomplish and how will you bring this about?

I am currently working on immersive multimedia systems, and in particular on real-time communication systems. The vision is to make remote communication more lifelike, and interaction more natural. I think we’re all aware of how different a video call feels from a face-to-face meeting. Immersive multimedia can help users feel more present and connected, even when displaced in different corners of the globe. What I aim to accomplish is to bring this technology to everyday users, overcoming the current limitations.

Can you profile your current research, its challenges, opportunities, and implications?

My research is currently focused on the quality of experience for immersive media systems. There are several aspects to it: one aspect is to improve media delivery systems, be it by creating new compression solutions, or by improving the transmission efficiency through user-adaptive solutions, for example. The core idea is that we need to optimize transmission by keeping in mind how the users will interact with the content. Then there’s the aspect of quantifying the reaction of the users to the contents they’re visualizing, identifying the influencing factors and building models that can predict them. It’s quite challenging because we don’t fully understand yet how, and why, humans react the way they do to certain stimuli. But that’s also what makes it fascinating.

How would you describe your top innovative achievements in terms of the problems you were trying to solve, your solutions, and the impact it has today and into the future?

In terms of impact, I would say my top achievements would be the contributions to standardization bodies. My subjective methodologies were adopted to conduct the evaluation of the JPEG Pleno Call for Proposals for Light Field Compression, and along with my colleagues in VQEG, I have contributed to ITU recommendations. It’s quite gratifying to know that your research can serve the scientific community this way.

Over your distinguished career, what are the top lessons you want to share with the audience?

I think my message would be: don’t be afraid to switch up. Throughout my studies, I changed focus many times: in my bachelor, the focus was on sociological aspects of media, as well as technological ones; in my master, I dived deeper into the engineering side of it; in my PhD, I tried to understand the user reaction to media. Switching up allows you to see the same problem from different sides, which can be extremely useful in order to do successful research.

What is the best joke you know?

Since an image is worth a thousand words, I will leave you with my favourite comic strip, by artist Lee Gatlin:

A comic strip by Lee Gatlin (Original post)

If you were conducting this interview, what questions would you ask, and then what would be your answers?

My question would be: how do you best balance work and life? Which is a question I don’t have an answer for, and I’d like to read what other people do about it. I think research is pretty tough in this sense because you always have the feeling that there’s more that you could do, and if you just spend half an hour more, you can reach greater results. So, it’s hard to step back, and your work becomes your life. I try to be mindful of it and remind myself to disconnect, which also helps to get a fresh perspective.

A recent photo of Irene.

Bio: Irene Viola is a tenure-track researcher at the Centrum Wiskunde & Informatica. Her research interests include multimedia compression, transmission, and quality evaluation (https://www.ireneviola.com).