Although the Covid-19 pandemic has forced international researchers and practitioners to share their research at virtual conferences, ACM Interactive Media Experiences (IMX) 2021 clearly invested significant time and effort to provide all attendees with an accessible, interactive, and vibrant online academic feast. Serving on the Organizing Committee of IMX 2021 as the Student Volunteer Chair as well as a Doctoral Consortium student, I was happy and honoured to take part in the conference, to help support it, and to see how attendees enjoyed and benefited from it.
I was also delighted to receive the ACM SIGMM Best Social Media Reporter Award which offered me the opportunity to write this report as a summary of my experiences with IMX 2021 (and of course a free ACM SIGMM conference registration!!).
OhYay Platform
IMX 2020 was the first edition of the conference to be held entirely online. In its second year as a fully virtual conference, IMX 2021 collaborated with OhYay to create a very realistic and immersive experience for attendees. On OhYay, attendees felt like they were in a real conference venue in New York City: there was a reception, lobbies, a main hall, showcase rooms, a rooftop, a pool, and so forth. In addition to the high-fidelity environment, IMX 2021 and the OhYay development team added many interaction features to the platform to give attendees a more human-centred and engaging experience: for example, attendees were able to “whisper” to each other without others being able to hear; they could send reactions, such as applause emoji with sound effects; and they could join social events together, such as lip-sync and jigsaw puzzles.
Reception
Workshop Entry
East Lobby
Showcase Room Entry
Student Lounge
Panel
Informative Conference
IMX 2021 featured many inspiring talks, insightful discussions, and high-quality communication. On Day 1, IMX hosted a series of workshops: XR in Games, Life Improvement in Quality by Ubiquitous Experiences (LIQUE), DataTV and SensoryX. I also had a three-hour doctoral consortium (DC) on the morning of Day 1. Eight PhD students presented ongoing dissertation research and had two one-on-one sessions with distinguished researchers as mentors! I was so excited to meet people in a ‘real’ virtual space, and the OhYay platform also enabled DC attendees to take group pictures in the photo booth. I could not help but tweet my first-day experience with lots of photos.
My Tweet of DC in IMX
On Day 2 and Day 3, with artist Sougwen Chung’s amazing keynote “Where does ‘AI’ end and ‘we’ begin?” kicking off the main conference, a set of paper sessions and panel discussions on mixed reality (AR/VR), AI, gaming, and inclusive design brought inspiration, new ideas, and state-of-the-art research topics to attendees. AR/VR and AI technologies sit at the centre of current scientific and technological development and are driving much of today’s progress. IMX helped us see the trend towards balance and integration of AI, AR, VR, and MR in the future: downstream applications of hyper-reality products reach into many fields, including games, consumer applications, enterprise applications, health care, and education, and as application scenarios multiply, the market is expected to expand further. This opens up a broader world for researchers, designers, and practitioners, including IMXers, to explore how we can bring warmth to products built on these developing technologies, which come with many unknowns and create a need to establish best practices, standards, and design patterns that serve as many people as reasonably possible.
My Tweet of the IMX main conference: Enjoyed a great deal of quality discussion and amazing interactive social events.
Every time I tweeted, I picked representative screenshots, combined them into a pretty collage, and wrote the accompanying text with infectious enthusiasm. That may have been my secret to winning the Social Media Award for helping disseminate IMX information.
Novelties
Social Events
In addition to the world-leading interactive media research sessions, panels, speakers, and showcases presented, IMX 2021 also offered interactive fun for networking and relaxing. A virtual elevator served as an events hub where attendees could select which event they wanted to join. Various social events were provided to enrich the breaks between research sessions: Mukbang, Yoga, Lip Sync, Jigsaw, etc. For example, attendees sometimes needed to collaborate on the jigsaw puzzle, which spontaneously enhanced mutual understanding through interactive, collaborative engagement even though IMX was a virtual conference.
The Elevator
Rooftop
Mukbang
Yoga
Lip Sync/Karaoke
Jigsaw
In this sense, IMX 2021 succeeded in its aim to allow attendees to have an “in-person” and immersive experience as much as possible because there were many opportunities for attendees to communicate more deeply, network, and socialize.
Doctoral Consortium
IMX 2021 DC provided an opportunity for 8 PhD students to present, explore and develop our research interests under the mentorship of a panel of 14 distinguished researchers, including 2 one-on-one sessions. The virtual conference enabled mentors from all over the world to exchange views with students without geographical limitations. We were also able to communicate in depth and obtain valuable guidance on our dissertation research in such an immersive environment. Moreover, each student not only gave a presentation at the DC before the main conference but also presented a poster at the conference, enabling wider visibility of our work.
Doctoral Consortium Reception Room
Accessibility
It is noteworthy that IMX 2021 made accessibility design an integral part of the conference. In addition to closed captions for pre-recorded videos, IMX 2021 had a captioner providing accurate real-time captions for live discussions. In addition, some attendees were excited to find out that an ASL option was also offered!
Optional ASL and live caption
IMX also made efforts to make the platform friendlier to screen-reader users.
Conclusion
In conclusion, IMX 2021 was an excellent example of an engaging, interactive, fun, informative, and welcoming virtual conference. The organizing team not only made significant efforts to represent the diverse ways in which interactive media are used in our lives, but also put on an amazing show of how interactive media can benefit even our online communication. I look forward to IMX 2022!
Welcome to the fifth column on the ACM SIGMM Records from the Video Quality Experts Group (VQEG). The last VQEG plenary meeting took place online from 7 to 11 June 2021. Like the previous meeting held in December 2020, it was organized online (this time by Kingston University) with multiple sessions spread over five days, allowing remote participation of people from 22 different countries across the Americas, Asia, and Europe. More than 100 participants registered for the meeting and could attend the 40 presentations and several discussions that took place in all working groups. This column provides an overview of the recently completed VQEG plenary meeting, while all the information, minutes, and files (including the presented slides) from the meeting are available online on the VQEG meeting website.
Group picture of the VQEG Meeting 7-11 June 2021.
Several presentations of state-of-the-art work may be of interest to the SIGMM community, in addition to the contributions from various VQEG groups to several ITU work items. Progress was also reported on the new activities launched at the last VQEG plenary meeting (in relation to Live QoE assessment, SI/TI clarification, an implementers guide for video quality metrics for coding applications, and the inclusion of video quality metrics as metadata in compressed streams), as well as on a proposal for new joint work on the evaluation of immersive communication systems from a task-based or interactive perspective within the Immersive Media Group.
We encourage those readers interested in any of the activities going on in the working groups to check their websites and subscribe to the corresponding reflectors, to follow them and get involved.
Overview of VQEG Projects
Audiovisual HD (AVHD)
The AVHD group works on improved subjective and objective methods for video-only and audiovisual quality of commonly available systems. Currently, after the project AVHD/P.NATS2 (a joint collaboration between VQEG and ITU SG12) finished in 2020 [1], two projects are ongoing within the AVHD group: QoE Metrics for Live Video Streaming Applications (Live QoE), which was launched in the last plenary meeting, and Advanced Subjective Methods (AVHD-SUB). The main discussion during the AVHD sessions was related to the Live QoE project, led by Shahid Satti (Opticom) and Rohit Puri (Twitch). In addition to the presentation of the project proposal, the main decisions reached so far were presented (e.g., use of videos of 20-30 seconds with resolution 1080p and framerates up to 60fps, use of ACR as the subjective test methodology, generation of test conditions, etc.), and open questions were brought up for discussion, especially in relation to how to acquire premium content and network traces. In addition to this discussion, Steve Göring (TU Ilmenau) presented an open-source platform (AVrate Voyager) for crowdsourcing/online subjective tests [2], and Shahid Satti (Opticom) presented the performance results of the Opticom models on the project AVHD/P.NATS Phase 2. Finally, Ioannis Katsavounidis (Facebook) presented the subjective testing validation of the AV1 performance from the Alliance for Open Media (AOM) to gather feedback on the test plan and possible interested testing labs from VQEG. It is also worth noting that this session was recorded to be used as raw multimedia data for the Live QoE project.
Quality Assessment for Health applications (QAH)
The session related to the QAH group allocated three presentations apart from the project summary provided by Lucie Lévêque (Polytech Nantes). In particular, Meriem Outtas (INSA Rennes) provided a review on objective quality assessment of medical images and videos. This is one of the topics jointly addressed by the group, which is working on an overview paper in line with the recent review on subjective medical image quality assessment [3]. Moreover, Zohaib Amjad Khan (Université Sorbonne Paris Nord) presented a work on video quality assessment of laparoscopic videos, while Aditja Raj and Maria Martini (Kingston University) presented their work on a multivariate regression-based convolutional neural network model for fundus image quality assessment.
Statistical Analysis Methods (SAM)
The SAM session consisted of three presentations followed by discussions on the topics. One of these was related to the description of subjective experiment consistency by p-value p-p plot [4], which was presented by Jakub Nawała (AGH University of Science and Technology). In addition, Zhi Li (Netflix) and Rafał Figlus (AGH University of Science and Technology) presented the progress on the contribution from SAM to the ITU-T to modify Recommendation P.913 to include the MLE model for subject behaviour in subjective experiments [5], and the recently available implementation of this model in Excel. Finally, Pablo Pérez (Nokia Bell Labs) and Lucjan Janowski (AGH University of Science and Technology) presented their work on the possibility of performing subjective experiments with four subjects [6].
Computer Generated Imagery (CGI)
Nabajeet Barman (Kingston University) presented a report on the current activities of the CGI group. The main current working topics are related to gaming quality assessment methodologies and quality prediction, and codec comparison for CG content. This group is closely collaborating with ITU-T SG12, as reflected by its support for the completion of three work items: ITU-T Rec. G.1032 on influence factors on gaming quality of experience, ITU-T Rec. P.809 on subjective evaluation methods for gaming quality, and ITU-T Rec. G.1072 on an opinion model for gaming applications. Furthermore, CGI is contributing to three new work items: ITU-T work item P.BBQCG on parametric bitstream-based quality assessment of cloud gaming services, ITU-T work item G.OMMOG on opinion models for mobile online gaming applications, and ITU-T work item P.CROWDG on subjective evaluation of gaming quality with a crowdsourcing approach. In addition, four presentations were scheduled during the CGI slots. The first one was delivered by Joel Jung (Tencent Media Lab) and David Lindero (Ericsson), who presented the details of the ITU-T work item P.BBQCG. Another one was related to the evaluation of MPEG-5 Part 2 (LCEVC) for gaming video streaming applications, presented by Nabajeet Barman (Kingston University) and Saman Zadtootaghaj (Dolby Laboratories). Also, Nabajeet, together with Maria Martini (Kingston University), presented a dataset, codec comparison, and challenges related to user-generated HDR gaming video streaming [7]. Finally, JP Tauscher (Technische Universität Braunschweig) presented his work on EEG-based detection of deep fake images.
Joint Effort Group (JEG) – Hybrid
The JEG-Hybrid group is currently working on the development of a generally applicable no-reference hybrid perceptual/bitstream model. In this sense, Enrico Masala and Lohic Fotio Tiotsop (Politecnico di Torino) presented the progress on designing a neural-network approach to model single observers using existing subjectively-annotated image and video datasets [9] (the design of subjective tests tailored for the training of this approach is envisioned for future work). In addition to this activity, the group is working in collaboration with the Sky Group on the “Hodor Project”, which is based on developing a measure that could automatically identify video sequences for which quality metrics are likely to deliver inaccurate Mean Opinion Score (MOS) estimations. Apart from these joint activities, Dr. Yendo Hu (Carnation Communications Inc. and Jimei University) delivered a presentation proposing work on a benchmarking standard to bring quality, bandwidth, and latency into a common measurement domain.
Quality Assessment for Computer Vision Applications (QACoViA)
5G Key Performance Indicators (5GKPI)
The 5GKPI session consisted of a presentation by Pablo Pérez (Nokia Bell Labs) on the progress achieved by the group since the last plenary meeting in the following efforts: 1) the contribution to ITU-T Study Group 12 Question 13 through the Technical Report on QoE in 5G video services (GSTR-5GQoE), which addresses QoE requirements and factors for use cases such as Tele-operated Driving (ToD), wireless content production, mixed-reality offloading, and first responder networks; 2) the contribution to the 5G Automotive Association (5GAA) through a high-level contribution on general QoE requirements for remote driving, considering for the near future the execution of subjective tests for ToD video quality; and 3) the long-term plan of working on a methodology to create simple opinion models to estimate average QoE for a network and use case.
Immersive Media Group (IMG)
Several presentations were delivered during the IMG session, divided into two blocks: one covering technologies and studies related to the evaluation of immersive communication systems from a task-based or interactive perspective, and another covering other topics related to the assessment of QoE of immersive media. The first set of presentations was related to a new proposal for joint work within IMG connected to the ITU-T work item P.QXM on QoE assessment of eXtended Reality meetings. Thus, Irene Viola (CWI) presented an overview of this work item. In addition, Carlos Cortés (Universidad Politécnica de Madrid) presented his work on evaluating the impact of delay on QoE in immersive interactive environments, Irene Viola (CWI) presented a dataset of point cloud dynamic humans for immersive telecommunications, Pablo César (CWI) presented their pipeline for social virtual reality [10], and Narciso García (Universidad Politécnica de Madrid) presented their real-time free-viewpoint video system (FVVLive) [11]. After these presentations, Jesús Gutiérrez (Universidad Politécnica de Madrid) led the discussion on joint next steps within IMG, which, in addition to identifying parties interested in joining the effort to study the evaluation of immersive communication systems, also covered the further analyses to be done on the subjective tests carried out with short 360-degree videos [12] and the studies carried out to assess quality and other factors (e.g., presence) with long omnidirectional sequences. In this sense, Marta Orduna (Universidad Politécnica de Madrid) presented her subjective study to validate a methodology to assess quality, presence, empathy, attitude, and attention in Social VR [13]. Future progress on these joint activities will be discussed in the group audio-calls.

Within the other block of presentations related to immersive media topics, Maria Martini (Kingston University), Chulhee Lee (Yonsei University), and Patrick Le Callet (Université de Nantes) presented the status of IEEE standardization on QoE for immersive experiences (IEEE P3333.1.4 – Light Field, and IEEE P3333.1.3, deep learning-based quality assessment), Kjell Brunnström (RISE) presented their work on legibility and readability in augmented reality [14], Abdallah El Ali (CWI) presented his work on investigating the relationship between momentary emotion self-reports and head and eye movements in HMD-based 360° videos [15], Elijs Dima (Mid Sweden University) presented his study on quality of experience in augmented telepresence considering the effects of viewing positions and depth-aiding augmentation [16], Silvia Rossi (UCL) presented her work towards behavioural analysis of 6-DoF users when consuming immersive media [17], and Yana Nehme (INSA Lyon) presented a study on exploring crowdsourcing for subjective quality assessment of 3D graphics.
Intersector Rapporteur Group on Audiovisual Quality Assessment (IRG-AVQA) and Q19 Interim Meeting
During the IRG-AVQA session, an overview of the progress and recent work within ITU-R SG6 and ITU-T SG12 was provided. In particular, Chulhee Lee (Yonsei University), in collaboration with other ITU rapporteurs, presented the progress of ITU-R WP6C on recommendations for HDR content, as well as the work items within ITU-T SG12: Question 9 on audio-related work items, Question 13 on gaming and immersive technologies (e.g., augmented/extended reality) among others, Question 14 on recommendations and work items related to the development of video quality models, and Question 19 on work items related to television and multimedia. In addition, the progress of the group “Implementers Guide for Video Quality Metrics (IGVQM)”, launched at the last plenary meeting by Ioannis Katsavounidis (Facebook), was discussed, addressing specific points to push the collection of video quality models and datasets to be used to develop an implementers guide for objective video quality metrics for coding applications.
Other updates
The next VQEG plenary meeting will take place online in December 2021.
In addition, VQEG is investigating the possibility of disseminating the videos of all the talks from these plenary meetings via platforms such as YouTube and Facebook.
Finally, given that some modifications are being made to the public FTP of VQEG, if the links to the presentations included in this column are not opened by the browser, the reader can download all the presentations in one compressed file.
JPEG Committee issues a Call for Proposals on Holography coding
The 91st JPEG meeting was held online from 19 to 23 April 2021. This meeting saw several activities relating to holographic coding, notably the release of the JPEG Pleno Holography Call for Proposals, together with the definition of use cases and requirements for holographic coding and the common test conditions that will ensure the evaluation of future proposals.
The 91st meeting was also marked by the start of a new exploration initiative on Non-Fungible Tokens (NFTs), due to the recent interest in this technology in a large number of applications and in particular in digital art. Since NFTs rely on decentralized networks and JPEG has been analysing the implications of Blockchains and distributed ledger technologies in imaging, it is a natural next step to explore how JPEG standardization can facilitate interoperability between applications that make use of NFTs.
The following presents an overview of the major achievements carried out during the 91st JPEG meeting.
The 91st JPEG meeting had the following highlights:
JPEG launches call for proposals for the first standard in holographic coding,
JPEG NFT,
JPEG Fake Media,
JPEG AI,
JPEG Systems,
JPEG XS,
JPEG XL,
JPEG DNA,
JPEG Reference Software.
JPEG launches call for proposals for the first standard in holographic coding
JPEG Pleno aims to provide a standard framework for representing new imaging modalities, such as light field, point cloud, and holographic content. JPEG Pleno Holography is the first standardization effort for a versatile solution to efficiently compress holograms for a wide range of applications ranging from holographic microscopy to tomography, interferometry, and printing and display, as well as their associated hologram types. Key functionalities include support for both lossy and lossless coding, scalability, random access, and integration within the JPEG Pleno system architecture, with the goal of supporting a royalty-free baseline.
The final Call for Proposals (CfP) on JPEG Pleno Holography – a milestone in the roll-out of the JPEG Pleno framework – has been issued as the main result of the 91st JPEG meeting, held online from 19 to 23 April 2021. The deadline for expressions of interest and registration is 1 August 2021. Submissions to the Call for Proposals are due on 1 September 2021.
A second milestone reached at this meeting was the promotion to International Standard of JPEG Pleno Part 2: Light Field Coding (ISO/IEC 21794-2). This standard provides light field coding tools originating from either microlens cameras or camera arrays. Part 1 of this standard, which was promoted to International Standard earlier, provides the overall file format syntax supporting light field, holography and point cloud modalities.
During the 91st JPEG meeting, the JPEG Committee officially began an exciting phase of JPEG Pleno Point Cloud coding standardisation with a focus on learning-based point cloud coding.

The scope of the JPEG Pleno Point Cloud activity is the creation of a learning-based coding standard for point clouds and associated attributes, offering a single-stream, compact compressed-domain representation and supporting advanced flexible data access functionalities. The JPEG Pleno Point Cloud standard targets both interactive human visualization, with significant compression efficiency over state-of-the-art point cloud coding solutions commonly used at equivalent subjective quality, and effective performance for 3D processing and computer vision tasks. The JPEG Committee expects the standard to support a royalty-free baseline.
The standard is envisioned to provide a number of unique benefits, including an efficient single point cloud representation for both humans and machines. The intent is to provide humans with the ability to visualise and interact with the point cloud geometry and attributes while providing machines the ability to perform 3D processing and computer vision tasks in the compressed domain, enabling lower complexity and higher accuracy through the use of compressed-domain features extracted from the original instead of the lossily decoded point cloud.
JPEG NFT
Non-Fungible Tokens have been the focus of much attention in recent months. Several digital assets that NFTs point to are either in existing JPEG formats or can be represented in current and emerging formats under development by the JPEG Committee. Furthermore, several trust and security issues have been raised regarding NFTs and the digital assets they rely on. Here again, the JPEG Committee has a significant track record in security and trust in imaging applications. Building on this background, the JPEG Committee has launched a new exploration initiative around NFTs to better understand the needs in terms of imaging requirements and how existing as well as potential JPEG standards can help bring security and trust to NFTs in a wide range of applications, notably those that rely on content represented in JPEG formats as still and animated pictures and 3D content. The first steps in this initiative involve outreach to stakeholders in NFTs and their applications, and the organization of a workshop to discuss challenges and current solutions for NFTs, notably in the context of applications relevant to the scope of the JPEG Standardization Committee. The JPEG Committee invites interested parties to subscribe to the mailing list of the JPEG NFT exploration via http://listregistration.jpeg.org.
JPEG Fake Media
The JPEG Fake Media exploration activity continues its work to assess standardization needs to facilitate secure and reliable annotation of media asset creation and modifications in good faith usage scenarios as well as in those with malicious intent. At the 91st meeting, the JPEG Committee released an updated version of the “JPEG Fake Media Context, Use Cases and Requirements” document. This new version includes several refinements including an improved and coherent set of definitions covering key terminology. The requirements have been extended and reorganized into three main identified categories: media creation and modification descriptions, metadata embedding framework and authenticity verification framework. The presentations and video recordings of the 2nd Workshop on JPEG Fake Media are now available on the JPEG website. JPEG invites interested parties to regularly visit https://jpeg.org/jpegfakemedia for the latest information and subscribe to the mailing list via http://listregistration.jpeg.org.
JPEG AI
At the 91st meeting, the results of the JPEG AI exploration experiments for the image processing and computer vision tasks defined at the previous 90th meeting were presented and discussed. Based on the analysis of the results, the description of the exploration experiments was improved. This activity will allow the definition of a performance assessment framework for using the latent representation of learning-based image codecs in several visual analysis tasks, such as compressed-domain image classification and compressed-domain material and texture recognition. Moreover, the impact of such experiments on the current version of the Common Test Conditions (CTC) was discussed.
Moreover, the draft of the Call for Proposals was analysed, notably regarding the training dataset and training procedures as well as the submission requirements. The timeline of the JPEG AI work item was discussed, and it was agreed that the final Call for Proposals (CfP) will be issued as an outcome of the 93rd JPEG meeting. The deadline for expressions of interest and registration is 5 November 2021, and the submission of bitstreams and decoded images for the test dataset is due on 30 January 2022.
JPEG Systems
During the 91st meeting, the Draft International Standard (DIS) text of JLINK (ISO/IEC 19566-7) and the Committee Draft (CD) text of JPEG Snack (ISO/IEC 19566-8) were completed and will be submitted for ballot. Amendments for JUMBF (ISO/IEC 19566-5 AMD1) and JPEG 360 (ISO/IEC 19566-6 AMD1) received a final review and are being released for publication. In addition, new extensions to JUMBF (ISO/IEC 19566-5) are under consideration to support rapidly emerging use cases related to content authenticity and integrity; updated use cases and requirements are being drafted. Finally, discussions have started to create awareness on how to interact with JUMBF boxes and the information they contain, without breaking integrity or interoperability. Interested parties are invited to subscribe to the mailing list of the JPEG Systems AHG via http://listregistration.jpeg.org in order to contribute to the above activities.
JPEG XS
The second editions of JPEG XS Part 1 (Core coding system) and Part 3 (Transport and container formats) were prepared for Final Draft International Standard (FDIS) balloting, with the intention of having both standards published by October 2021. The second editions integrate new coding and signalling capabilities to support RAW Bayer colour filter array (CFA) images, 4:2:0 sampled images and mathematically lossless coding of up to 12 bits per component. The associated profiles and buffer models are handled in Part 2, which is currently under DIS ballot. The focus has now shifted to work on the second editions of Part 4 (Conformance testing) and Part 5 (Reference software). Finally, the JPEG Committee defined a study to investigate future improvements to high dynamic range (HDR) and mathematically lossless compression capabilities, while still honouring the low-complexity and low-latency requirements. In particular, for RAW Bayer CFA content, the JPEG Committee will work on extensions of JPEG XS supporting lossless compression of CFA patterns at sample bit depths above 12 bits.
JPEG XL
The JPEG Committee has finalized JPEG XL Part 2 (File format), which is now at the FDIS stage. A Main profile has been specified in draft Amendment 1 to Part 1, which entered the draft amendment (DAM) stage of the approval process at the current meeting. The draft Main profile has two levels: Level 5 for end-user image delivery and Level 10 for generic use cases, including image authoring workflows. Now that the criteria for conformance have been determined, the JPEG Committee has defined new core experiments to define a set of test codestreams that provides full coverage of the coding tools. Part 4 (Reference software) is now at the DIS stage. With the first edition FDIS texts of both Part 1 and Part 2 now complete, JPEG XL is ready for wide adoption.
JPEG DNA
The JPEG Committee has continued its exploration of the coding of images in quaternary representation, which is particularly suitable for DNA storage. After a successful third workshop with presentations by stakeholders, two new use cases were identified along with a large number of new requirements, and a new version of the JPEG DNA overview document was issued and is now publicly available. It was decided to continue this exploration by organizing a fourth workshop and conducting further outreach to stakeholders, as well as continuing to improve the JPEG DNA overview document.
Interested parties are invited to refer to the following URL and to consider joining the effort by registering to the mailing list of JPEG DNA here: https://jpeg.org/jpegdna/index.html.
JPEG Reference Software
The JPEG Committee is pleased to announce that its standard on the JPEG reference software, 2nd edition, reached the state of International Standard and will be publicly available from both ITU and ISO/IEC. This standard, to appear as ITU-T T.873 | ISO/IEC 10918-7 (2nd edition), provides reference implementations of the first JPEG standard, used daily throughout the world. The software included in this document guides vendors on how JPEG (ISO/IEC 10918-1) can be implemented and may serve as a baseline and starting point for JPEG encoders or decoders.
This second edition updates the two reference implementations to their latest versions, fixing minor defects in the software.
Final Quote
“JPEG standards continue to be a motor of innovation and an enabler of new applications in imaging as witnessed by the release of the first standard for coding of holographic content.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.
Future JPEG meetings are planned as follows:
No. 92 will be held online from 7 to 13 July 2021.
No. 93 is planned to be held in Berlin, Germany, from 16 to 22 October 2021.
The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.
The 134th MPEG meeting was once again held as an online meeting, and the official press release can be found here and comprises the following items:
First International Standard on Neural Network Compression for Multimedia Applications
Completion of the carriage of VVC and EVC
Completion of the carriage of V3C in ISOBMFF
Call for Proposals: (a) New Advanced Genomics Features and Technologies, (b) MPEG-I Immersive Audio, and (c) Coded Representation of Haptics
MPEG evaluated Responses on Incremental Compression of Neural Networks
Progression of MPEG 3D Audio Standards
The first milestone of development of Open Font Format (2nd amendment)
Verification tests: (a) Low Complexity Enhancement Video Coding (LCEVC) verification test and (b) more application cases of Versatile Video Coding (VVC)
Standardization work on Version 2 of VVC and VSEI started
In this column, the focus is on streaming-related aspects including a brief update about MPEG-DASH.
First International Standard on Neural Network Compression for Multimedia Applications
Artificial neural networks have been adopted for a broad range of tasks in multimedia analysis and processing, such as visual and acoustic classification, extraction of multimedia descriptors, or image and video coding. The trained neural networks for these applications contain many parameters (i.e., weights), resulting in a considerable size. Thus, transferring them to several clients (e.g., mobile phones, smart cameras) benefits from a compressed representation of neural networks.
At the 134th MPEG meeting, MPEG Video ratified the first international standard on Neural Network Compression for Multimedia Applications (ISO/IEC 15938-17), designed as a toolbox of compression technologies. The specification contains different methods for
parameter reduction (e.g., pruning, sparsification),
parameter transformation (e.g., quantization), and
entropy coding
methods that can be assembled into encoding pipelines combining one or more (in the case of reduction) methods from each group.
The results show that trained neural networks for many common multimedia problems, such as image or audio classification or image compression, can be compressed by a factor of 10-20 with no performance loss, and even by more than 30 with a performance trade-off. The specification is not limited to a particular neural network architecture and is independent of the choice of neural network exchange format. Interoperability with common neural network exchange formats is described in the annexes of the standard.
As neural networks are becoming increasingly important, their communication over heterogeneous networks to a plethora of devices raises various challenges, including efficient compression, which is inevitable and is addressed in this standard. ISO/IEC 15938 is commonly referred to as MPEG-7 (or the “multimedia content description interface”), and this standard now becomes Part 17 of MPEG-7.
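To make the idea more concrete, the following is a minimal, purely illustrative sketch of the two basic ingredient groups named above (parameter quantization followed by entropy coding), using uniform 8-bit quantization and zlib only as a generic stand-in for an entropy coder; it is not an implementation of the actual ISO/IEC 15938-17 tools.

```python
# Purely illustrative sketch (NOT the ISO/IEC 15938-17 tools): uniform 8-bit
# quantization of a weight tensor followed by a generic entropy coder
# (zlib used here only as a stand-in).
import numpy as np
import zlib

def compress_weights(weights: np.ndarray, bits: int = 8):
    """Quantize float32 weights to at most 2**bits levels (bits <= 8 here)
    and entropy-code the resulting indices."""
    w_min, w_max = float(weights.min()), float(weights.max())
    step = (w_max - w_min) / (2 ** bits - 1) or 1.0
    indices = np.round((weights - w_min) / step).astype(np.uint8)  # quantization
    payload = zlib.compress(indices.tobytes(), 9)                  # entropy coding
    return payload, (w_min, step)

def decompress_weights(payload: bytes, params, shape):
    """Reconstruct the (lossily) quantized weights from the coded payload."""
    w_min, step = params
    indices = np.frombuffer(zlib.decompress(payload), dtype=np.uint8)
    return (indices.astype(np.float32) * step + w_min).reshape(shape)

if __name__ == "__main__":
    w = np.random.randn(256, 256).astype(np.float32)   # toy "layer"
    blob, params = compress_weights(w)
    w_hat = decompress_weights(blob, params, w.shape)
    print(f"compression factor: {w.nbytes / len(blob):.1f}x, "
          f"max abs error: {np.abs(w - w_hat).max():.4f}")
```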
Research aspects: As with all compression-related standards, research aspects are related to compression efficiency (lossy/lossless), computational complexity (runtime, memory), and quality-related aspects. Furthermore, the compression of neural networks for multimedia applications will probably enable new types of applications and services to be deployed in the (near) future. Finally, simultaneous delivery and consumption (i.e., streaming) of neural networks, including incremental updates thereof, will become a requirement for networked media applications and services.
Carriage of Media Assets
At the 134th MPEG meeting, MPEG Systems completed the carriage of various media assets in MPEG-2 Systems (Transport Stream) and the ISO Base Media File Format (ISOBMFF), respectively.
In particular, the standards for the carriage of Versatile Video Coding (VVC) and Essential Video Coding (EVC) over both MPEG-2 Transport Stream (M2TS) and the ISO Base Media File Format (ISOBMFF) reached their final stages of standardization:
For M2TS, the standard defines constraints to elementary streams of VVC and EVC to carry them in the packetized elementary stream (PES) packets. Additionally, buffer management mechanisms and transport system target decoder (T-STD) model extension are also defined.
For ISOBMFF, the carriage of codec initialization information for VVC and EVC is defined in the standard. Additionally, it also defines samples and sub-samples reflecting the high-level bitstream structure and independently decodable units of both video codecs. For VVC, signaling and extraction of a certain operating point are also supported.
Finally, MPEG Systems completed the standard for the carriage of Visual Volumetric Video-based Coding (V3C) data using ISOBMFF. The standard supports media comprising multiple independent component bitstreams and considers that only some portions of immersive media assets need to be rendered according to the user’s position and viewport. Thus, metadata indicating the relationship between a region of the 3D spatial data to be rendered and its location in the bitstream is defined. In addition, the delivery of an ISOBMFF file containing V3C content over DASH and MMT is also specified in this standard.
Research aspects: Carriage of VVC, EVC, and V3C using M2TS or ISOBMFF provides an essential building block within the so-called multimedia systems layer, resulting in a plethora of research challenges, as it typically offers an interoperable interface to the actual media assets. Thus, these standards enable efficient and flexible provisioning and/or use of these media assets, which are deliberately not defined in these standards and are subject to competition.
Call for Proposals and Verification Tests
At the 134th MPEG meeting, MPEG issued three Call for Proposals (CfPs) that are briefly highlighted in the following:
Coded Representation of Haptics: Haptics provide an additional layer of entertainment and sensory immersion beyond audio and visual media. This CfP aims to specify a coded representation of haptics data, e.g., to be carried using ISO Base Media File Format (ISOBMFF) files in the context of MPEG-DASH or other MPEG-I standards.
MPEG-I Immersive Audio: Immersive Audio will complement other parts of MPEG-I (i.e., Part 3, “Immersive Video” and Part 2, “Systems Support”) in order to provide a suite of standards that will support a Virtual Reality (VR) or an Augmented Reality (AR) presentation in which the user can navigate and interact with the environment using 6 degrees of freedom (6 DoF), that being spatial navigation (x, y, z) and user head orientation (yaw, pitch, roll).
New Advanced Genomics Features and Technologies: This CfP aims to collect submissions of new technologies that can (i) provide improvements to the current compression, transport, and indexing capabilities of the ISO/IEC 23092 standards suite, particularly applied to data consisting of very long reads generated by 3rd generation sequencing devices, (ii) provide the support for representation and usage of graph genome references, (iii) include coding modes relying on machine learning processes, satisfying data access modalities required by machine learning and providing higher compression, and (iv) support of interfaces with existing standards for the interchange of clinical data.
Detailed information, including instructions on how to respond to the call for proposals, the requirements that must be considered, the test data to be used, and the submission and evaluation procedures for proponents are available at www.mpeg.org.
Calls for proposals typically mark the beginning of the formal standardization work, whereas verification tests are conducted once a standard has been completed. At the 134th MPEG meeting, and despite the difficulties caused by the pandemic situation, MPEG completed verification tests for Versatile Video Coding (VVC) and Low Complexity Enhancement Video Coding (LCEVC).
For LCEVC, verification tests measured the benefits of enhancing four existing codecs of different generations (i.e., AVC, HEVC, EVC, VVC) using tools as defined in LCEVC within two sets of tests:
The first set of tests compared LCEVC-enhanced encoding with full-resolution single-layer anchors. The average bit rate savings produced by LCEVC when enhancing AVC were determined to be approximately 46% for UHD and 28% for HD; when enhancing HEVC, they were approximately 31% for UHD and 24% for HD. Test results tend to indicate an overall benefit also when using LCEVC to enhance EVC and VVC.
The second set of tests confirmed that LCEVC provided a more efficient means of resolution enhancement of half-resolution anchors than unguided up-sampling. Comparing LCEVC full-resolution encoding with the up-sampled half-resolution anchors, the average bit-rate savings when using LCEVC with AVC, HEVC, EVC and VVC were calculated to be approximately 28%, 34%, 38%, and 32% for UHD and 27%, 26%, 21%, and 21% for HD, respectively.
For VVC, it was already the second round of verification testing including the following aspects:
360-degree video for equirectangular and cubemap formats, where VVC shows on average more than 50% bit rate reduction compared to the previous major generation of MPEG video coding standard known as High Efficiency Video Coding (HEVC), developed in 2013.
Low-delay applications such as compression of conversational (teleconferencing) and gaming content, where the compression benefit is about 40% on average,
HD video streaming, with an average bit rate reduction of close to 50%.
A previous set of tests for 4K UHD content completed in October 2020 had shown similar gains. These verification tests used formal subjective visual quality assessment testing with “naïve” human viewers. The tests were performed under a strict hygienic regime in two test laboratories to ensure safe conditions for the viewers and test managers.
Research aspects: CfPs offer a unique possibility for researchers to propose research results for adoption into future standards. Verification tests provide objective and/or subjective evaluations of standardized tools, which typically conclude the life cycle of a standard. The results of the verification tests are usually publicly available and can be used as a baseline for future improvements of the respective standards, including the evaluation thereof.
DASH Update!
Finally, I’d like to provide a brief update on MPEG-DASH! At the 134th MPEG meeting, MPEG Systems recommended the approval of ISO/IEC FDIS 23009-1 5th edition. That is, the MPEG-DASH core specification will be available as a 5th edition sometime this year. Additionally, MPEG requests that this specification become freely available, which also marks an important milestone in the development of the MPEG-DASH standard. Most importantly, the 5th edition of this standard incorporates CMAF support as well as other enhancements defined in the amendment of the previous edition. Additionally, the MPEG-DASH subgroup of MPEG Systems is already working on the first amendment to the 5th edition, entitled “Preroll, nonlinear playback, and other extensions”. It is expected that the 5th edition will impact related specifications not only within MPEG but also in other Standards Developing Organizations (SDOs) such as DASH-IF, i.e., defining interoperability points (IOPs) for various codecs and others, or CTA WAVE (Web Application Video Ecosystem), i.e., defining device playback capabilities such as the Common Media Client Data (CMCD). Both DASH-IF and CTA WAVE provide means for (conformance) test infrastructure for DASH and CMAF.
An updated overview of DASH standards/features can be found in the Figure below.
MPEG-DASH status as of April 2021.
Research aspects: MPEG-DASH was ratified almost ten years ago, which has resulted in a plethora of research articles, mostly related to adaptive bitrate (ABR) algorithms and their impact on streaming performance, including the Quality of Experience (QoE). An overview of bitrate adaptation schemes is provided here, including a list of open challenges and issues.
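As a pointer to what such ABR research looks like in its simplest form, here is a minimal, hypothetical throughput-based rate selection rule (not a standardized or DASH-IF-defined algorithm; the bitrate ladder and margin values are made up for illustration):

```python
# Hypothetical, minimal throughput-based ABR rule: request the highest
# representation whose bitrate fits within a safety margin of the measured
# throughput. Real ABR schemes also consider buffer level, latency, etc.
from typing import List

def select_representation(bitrates_bps: List[int],
                          measured_throughput_bps: float,
                          safety_margin: float = 0.8) -> int:
    """Return the bitrate (in bps) of the representation to request next."""
    budget = measured_throughput_bps * safety_margin
    affordable = [b for b in sorted(bitrates_bps) if b <= budget]
    return affordable[-1] if affordable else min(bitrates_bps)

# Example: a made-up ladder of 1, 2.5, 5 and 8 Mbps and 4.2 Mbps measured throughput.
ladder = [1_000_000, 2_500_000, 5_000_000, 8_000_000]
print(select_representation(ladder, 4_200_000))  # -> 2500000
```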
The 135th MPEG meeting will be again an online meeting in July 2021. Click here for more information about MPEG meetings and their developments.
Welcome to the fourth column on the ACM SIGMM Records from the Video Quality Experts Group (VQEG). During the last VQEG plenary meeting (14-18 Dec. 2020) various interesting discussions arose regarding new topics not addressed up to then by VQEG groups, which led to launching three new sub-projects and a new project related to: 1) clarifying the computation of spatial and temporal information (SI and TI), 2) including video quality metrics as metadata in compressed bitstreams, 3) Quality of Experience (QoE) metrics for live video streaming applications, and 4) providing guidelines on implementing objective video quality metrics to the video compression community. The following sections provide more details about these new activities and try to encourage interested readers to follow and get involved in any of them by subscribing to the corresponding reflectors.
SI and TI Clarification
The VQEG No-Reference Metrics (NORM) group has recently focused on the topic of spatio-temporal complexity, revisiting the Spatial Information and Temporal Information (SI/TI) indicators, which are described in ITU-T Rec. P.910 [1] and were originally developed for the T1A1 dataset in 1994 [2]. The metrics have found good use over the last 25 years, mostly for checking the complexity of video sources in datasets. However, the SI/TI definitions contain ambiguities, so the goal of this sub-project is to provide revised definitions that eliminate implementation inconsistencies.
Three main topics are discussed by VQEG in a series of online meetings:
Comparison of existing publicly available implementations for SI/TI: a comparison was made between several public open-source implementations of SI/TI, based on initial feedback from members of Facebook. Bugs and inconsistencies were identified in the handling of video frame borders, the treatment of limited- vs. full-range content, as well as the reporting of TI values for the first frame. Also, the lack of standardized test vectors was brought up as an issue. As a consequence, a new reference library was developed in Python by members of TU Ilmenau, incorporating all bug fixes that were previously identified and introducing a new test suite, to which the public is invited to contribute material. VQEG is now actively looking for specific test sequences that will be useful both for validating existing SI/TI implementations and for extending the scope of the metrics, which relates to the next issue described below.
Study on how to apply SI/TI on different content formats: the description of SI/TI was found to be not suitable for extended applications such as video with a higher bit depth (> 8 Bit), HDR content, or spherical/3D video. Also, the question was raised on how to deal with the presence of scene changes in content. The community concluded that for content with higher bit depth, SI/TI functions should be calculated as specified, but that the output values could be mapped back to the original 8-Bit range to simplify comparisons. As for HDR, no conclusion was reached, given the inherent complexity of the subject. It was also preliminarily concluded that the treatment of scene changes should not be part of an SI/TI recommendation, instead focusing on calculating SI/TI for short sequences without scene changes, since the way scene changes would be dealt with may depend on the final application of the metrics.
Discussion on other relevant uses of SI/TI: SI/TI have been widely used for checking video datasets in terms of diversity and for classifying content. Also, SI/TI have been used in some no-reference metrics as content features. The question was raised whether SI/TI could be used for predicting how well content could be encoded. The group noted that different encoders would deal with sources differently, e.g., in relation to noise in the video. It was stated that it would be nice to find a metric that is purely related to content and not affected by encoding or representation.
As a first step, this revision of the topic of SI/TI has resulted in a harmonized implementation and in the identification of future application areas. Discussions on these topics will continue in the coming months through audio-calls that are open to interested readers.
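For readers unfamiliar with the indicators, the following is a minimal sketch of SI/TI as defined in ITU-T Rec. P.910 (luma plane only); it deliberately ignores the border-handling and value-range questions discussed above, for which the reference library mentioned earlier should be consulted:

```python
# Minimal sketch of the SI/TI indicators of ITU-T Rec. P.910 (luma plane only):
# SI is the maximum over frames of the spatial standard deviation of the
# Sobel-filtered frame; TI is the maximum over frames of the standard deviation
# of the frame difference. Border handling and limited/full range (two of the
# ambiguities discussed above) are deliberately ignored here.
import numpy as np
from scipy import ndimage

def si_ti(frames):
    """frames: iterable of 2-D numpy arrays holding the luma plane of each frame."""
    si_values, ti_values = [], []
    prev = None
    for frame in frames:
        f = frame.astype(np.float64)
        sobel = np.hypot(ndimage.sobel(f, axis=0), ndimage.sobel(f, axis=1))
        si_values.append(sobel.std())
        if prev is not None:                      # TI starts at the second frame
            ti_values.append((f - prev).std())
        prev = f
    return max(si_values), (max(ti_values) if ti_values else 0.0)

# Toy example with random frames; real use would read decoded luma planes.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(72, 128)) for _ in range(5)]
print(si_ti(frames))
```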
Video Quality Metadata Standard
Also within NORM group, another topic was launched related to the inclusion of video quality metadata in compressed streams [3].
Almost all modern transcoding pipelines use full-reference video quality metrics to decide on the most appropriate encoding settings. The computation of these quality metrics is demanding in terms of time and computational resources. In addition, estimation errors propagate and accumulate when quality metrics are recomputed several times along the transcoding pipeline. Thus, retaining the results of these metrics with the video can alleviate these constraints, requiring very little space and providing a “greener” way of estimating video quality. With this goal, the new sub-project has started working towards the definition of a standard format to include video quality metrics metadata both at video bitstream level and system layer [4].
In this sense, the experts involved in the new sub-project are working on the following items:
Identification of existing proposals and working groups within other standardisation bodies and organisations that address similar topics, and proposal of amendments including new requirements. For example, MPEG has already worked on adding video quality metrics metadata (e.g., PSNR, SSIM, MS-SSIM, VQM, PEVQ, MOS, FISG) at the system level (e.g., in MPEG-2 streams [5], HTTP [6], etc. [7]).
Identification of quality metrics to be considered in the standard. In principle, validated and standardized metrics are of interest, although other metrics can also be considered after a validation process on a standard set of subjective data (e.g., using existing datasets). Metrics newer than those used in previous approaches are of special interest (e.g., VMAF [8], FB-MOS [9]).
Consideration of the computation of multiple generations of full-reference metrics at different steps of the transcoding chain, of the use of metrics at different resolutions, different spatio-temporal aggregation methods, etc.
Definition of a standard video quality metadata payload, including relevant fields such as metric name (e.g., “SSIM”), version (e.g., “v0.6.1”), raw score (e.g., “0.9256”), mapped-to-MOS score (e.g., “3.89”), scaling method (e.g., “Lanczos-5”), temporal reference (e.g., “0-3” frames), aggregation method (e.g., “arithmetic mean”), etc [4].
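As a purely hypothetical illustration of the last item, a payload instance carrying the example fields listed above might look as follows; the actual format and field names are still to be defined by the sub-project:

```python
# Hypothetical payload instance mirroring the example fields above; the actual
# standard video quality metadata format is still to be defined by the group.
import json

quality_metadata = {
    "metric": "SSIM",
    "version": "v0.6.1",
    "raw_score": 0.9256,
    "mos_mapped_score": 3.89,
    "scaling_method": "Lanczos-5",
    "temporal_reference_frames": [0, 3],
    "aggregation_method": "arithmetic mean",
}

print(json.dumps(quality_metadata, indent=2))
```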
More details and information on how to join this activity can be found in the NORM webpage.
QoE metrics for live video streaming applications
The VQEG Audiovisual HD Quality (AVHD) group launched a new sub-project on QoE metrics for live media streaming applications (Live QoE) in the last VQEG meeting [10].
The success of a live multimedia streaming session is defined by the experience of a participating audience. Both the content communicated by the media and the quality at which it is delivered matter – for the same content, the quality delivered to the viewer is a differentiating factor. Live media streaming systems undertake a lot of investment and operate under very tight service availability and latency constraints to support multimedia sessions for their audience. Both to measure the return on investment and to make sound investment decisions, it is paramount that we be able to measure the media quality offered by these systems. In this sense, given the large scale and complexity of media streaming systems, objective metrics are needed to measure QoE.
Therefore, the following topics have been identified and are studied [11]:
Creation of a high-quality dataset, including media clips and subjective scores, which will be used to tune, train and develop objective QoE metrics. This dataset should represent the conditions that take place in typical live media streaming situations, so conditions and impairments affecting audio and video tracks (independently and jointly) will be considered. In addition, this dataset should cover a diverse set of content categories, including premium content (e.g., sports, movies, concerts, etc.) and user-generated content (e.g., music, gaming, real-life content, etc.).
Development of QoE objective metrics, especially focusing on no-reference or near-no-reference metrics, given the lack of access to the original video at various points in the live media streaming chain. Different types of models will be considered including signal-based (operate on the decoded signal), metadata-based (operate on available metadata, e.g. codecs, resolution, framerate, bitrate, etc.), bitstream-based (operate on the parsed bitstream), and hybrid models (combining signal and metadata) [12]. Also, machine-learning based models will be explored.
Certain challenges are envisioned when dealing with these two topics, such as separating “content” from “quality” (taking into account that content plays a big role in engagement and acceptability), spectrum expectations, the role of network impairments, and the collection of enough data to develop robust models [11]. Readers interested in joining this effort are encouraged to visit the AVHD webpage for more details.
Implementer’s Guide to Video Quality Metrics
At the last meeting, a new dedicated group on the Implementer’s Guide to Video Quality Metrics (IGVQM) was set up to introduce and provide guidelines on implementing objective video quality metrics to the video compression community.
During the development of new video coding standards, peak signal-to-noise ratio (PSNR) has traditionally been used as the main objective metric to determine which new coding tools should be adopted. It has furthermore been used to establish the bitrate savings that a new coding standard offers over its predecessor through the so-called “BD-rate” metric [13], which still relies on PSNR for measuring quality.
Although this choice was fully justified for the first image/video coding standards – JPEG (1992), MPEG1 (1994), MPEG2 (1996), JPEG2000 and even H.264/AVC (2004) – since there was simply no other alternative at that time, its continuing use for the development of H.265/HEVC (2013), VP9 (2013), AV1 (2018) and most recently EVC and VVC (2020) is questionable, given the rapid and continuous evolution of more perceptual image/video objective quality metrics, such as SSIM (2004) [14], MS-SSIM (2004) [15], and VMAF (2015) [8].
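For reference, the classical BD-rate computation mentioned above can be sketched as follows: fit a cubic of log-rate as a function of quality for both anchor and test, integrate over the overlapping quality range, and convert the average log-rate difference into a percentage. The rate-distortion points below are made up for illustration, and real comparisons should rely on a well-tested reference implementation.

```python
# Sketch of the classical Bjøntegaard delta-rate (BD-rate) computation.
import numpy as np

def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
    """Average bitrate difference of the test codec vs. the anchor, in percent
    (negative values mean bitrate savings)."""
    fit_a = np.polyfit(psnr_anchor, np.log(rates_anchor), 3)   # log-rate vs PSNR (anchor)
    fit_t = np.polyfit(psnr_test, np.log(rates_test), 3)       # log-rate vs PSNR (test)
    lo = max(min(psnr_anchor), min(psnr_test))                 # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(fit_a), hi) - np.polyval(np.polyint(fit_a), lo)
    int_t = np.polyval(np.polyint(fit_t), hi) - np.polyval(np.polyint(fit_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100

# Made-up rate-distortion points (kbps, dB PSNR) for illustration only.
anchor_rates, anchor_psnr = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
test_rates, test_psnr = [800, 1600, 3200, 6400], [34.2, 36.8, 39.3, 41.8]
print(f"BD-rate: {bd_rate(anchor_rates, anchor_psnr, test_rates, test_psnr):.1f}%")
```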
This project attempts to offer some guidance to the video coding community, including standards setting organisations, on how to better utilise existing objective video quality metrics to better capture the improvements offered by video coding tools. For this, the following goals have been envisioned:
Address video compression and scaling impairments only.
Explore and use “state-of-the-art” full-reference (pixel) objective metrics, examine applicability of no-reference objective metrics, and obtain reference implementations of them.
Offer temporal aggregation methods of image quality metrics into video quality metrics.
Present statistical analysis of existing subjective datasets, constraining them to compression and scaling artifacts.
Highlight differences among objective metrics and use-cases. For example, in case of very small differences, which metric is more sensitive? Which quality range is better served by what metric?
Offer standard logistic mappings of objective metrics to a normalised linear scale.
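As an illustration of the last goal, the sketch below fits a monotonic four-parameter logistic function mapping a raw objective score onto a MOS-like scale, in the spirit of the mappings commonly used in VQEG/ITU-T evaluations; the data points and initial parameter values are made up for illustration and do not correspond to any particular metric.

```python
# Sketch of fitting a monotonic four-parameter logistic mapping from a raw
# objective score to a MOS-like scale; data and initial parameters are made up.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, b1, b2, b3, b4):
    """Maps objective scores x onto a predicted-MOS scale."""
    return b2 + (b1 - b2) / (1.0 + np.exp(-(x - b3) / abs(b4)))

objective_scores = np.array([0.60, 0.70, 0.80, 0.88, 0.93, 0.97])
subjective_mos = np.array([1.40, 2.10, 2.90, 3.60, 4.10, 4.50])

params, _ = curve_fit(logistic, objective_scores, subjective_mos,
                      p0=[5.0, 1.0, 0.8, 0.1], maxfev=10000)
print("fitted parameters:", params)
print("mapped score for a raw value of 0.85:", logistic(0.85, *params))
```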
More details can be found in the working document that has been set up to launch the project [16] and on the VQEG website.
The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.
The 133rd MPEG meeting was once again held as an online meeting, and this time it kicked off with great news: MPEG is one of the organizations honored as a 72nd Annual Technology & Engineering Emmy® Awards recipient, specifically the MPEG Systems File Format Subgroup and its ISO Base Media File Format (ISOBMFF) et al.
The official press release can be found here and comprises the following items:
6th Emmy® Award for MPEG Technology: MPEG Systems File Format Subgroup wins Technology & Engineering Emmy® Award
Essential Video Coding (EVC) verification test finalized
MPEG issues a Call for Evidence on Video Coding for Machines
Neural Network Compression for Multimedia Applications – MPEG calls for technologies for incremental coding of neural networks
MPEG Systems reaches the first milestone for supporting Versatile Video Coding (VVC) and Essential Video Coding (EVC) in the Common Media Application Format (CMAF)
MPEG Systems continuously enhances Dynamic Adaptive Streaming over HTTP (DASH)
MPEG Systems reached the first milestone to carry event messages in tracks of the ISO Base Media File Format
In this report, I’d like to focus on ISOBMFF, EVC, CMAF, and DASH.
MPEG Systems File Format Subgroup wins Technology & Engineering Emmy® Award
MPEG is pleased to report that the File Format subgroup of MPEG Systems is being recognized this year by the National Academy of Television Arts & Sciences (NATAS) with a Technology & Engineering Emmy® for their 20 years of work on the ISO Base Media File Format (ISOBMFF). This format was first standardized in 1999 as part of the MPEG-4 Systems specification and is now in its 6th edition as ISO/IEC 14496-12. It has been used and adopted by many other specifications, e.g.:
MP4 and 3GP file formats;
Carriage of NAL unit structured video in the ISO Base Media File Format which provides support for AVC, HEVC, VVC, EVC, and probably soon LCEVC;
MPEG-21 file format;
Dynamic Adaptive Streaming over HTTP (DASH) and Common Media Application Format (CMAF);
High-Efficiency Image Format (HEIF);
Timed text and other visual overlays in ISOBMFF;
Common encryption format;
Carriage of timed metadata metrics of media;
Derived visual tracks;
Event message track format;
Carriage of uncompressed video;
Omnidirectional Media Format (OMAF);
Carriage of visual volumetric video-based coding data;
Carriage of geometry-based point cloud compression data;
… to be continued!
This is MPEG’s fourth Technology & Engineering Emmy® Award (after MPEG-1 and MPEG-2 together with JPEG in 1996, Advanced Video Coding (AVC) in 2008, and MPEG-2 Transport Stream in 2013) and sixth overall Emmy® Award including the Primetime Engineering Emmy® Awards for Advanced Video Coding (AVC) High Profile in 2008 and High-Efficiency Video Coding (HEVC) in 2017, respectively.
Essential Video Coding (EVC) verification test finalized
At the 133rd MPEG meeting, a verification testing assessment of the Essential Video Coding (EVC) standard was completed. The first part of the EVC verification test, using high dynamic range (HDR) and wide color gamut (WCG) content, had been completed at the 132nd MPEG meeting. A subjective quality evaluation was conducted comparing the EVC Main profile to the HEVC Main 10 profile and the EVC Baseline profile to the AVC High 10 profile, respectively:
Analysis of the subjective test results showed that the average bitrate savings for EVC Main profile are approximately 40% compared to HEVC Main 10 profile, using UHD and HD SDR content encoded in both random access and low delay configurations.
The average bitrate savings for the EVC Baseline profile compared to the AVC High 10 profile are approximately 40% using UHD SDR content encoded in the random-access configuration and approximately 35% using HD SDR content encoded in the low delay configuration.
Verification test results using HDR content had shown average bitrate savings for EVC Main profile of approximately 35% compared to HEVC Main 10 profile.
By providing significantly improved compression efficiency compared to HEVC and earlier video coding standards while encouraging the timely publication of licensing terms, the MPEG-5 EVC standard is expected to meet the market needs of emerging delivery protocols and networks, such as 5G, enabling the delivery of high-quality video services to an ever-growing audience.
In addition to the verification tests, EVC, along with VVC and CMAF, was subject to further improvements in its systems support (see the CMAF item below).
Research aspects: as for every new video codec, its compression efficiency and computational complexity are important performance metrics. Additionally, the availability of (efficient) open-source implementations (e.g., x264, x265, soon x266, VVenC, aomenc) is vital for its adoption in the (academic) research community.
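As a simple illustration of measuring both aspects, the hypothetical sketch below times an x264 encode via ffmpeg and reads back the average PSNR reported by ffmpeg’s psnr filter; the file names and CRF value are placeholders, and a proper evaluation would use standard test sequences and several operating points.

```python
import re
import subprocess
import time

def encode_and_measure(reference: str, crf: int = 28) -> dict:
    """Encode a reference clip with libx264 and report encoding time and PSNR.
    Assumes an ffmpeg build with libx264 and the psnr filter is available."""
    out = f"out_crf{crf}.mp4"

    t0 = time.time()
    subprocess.run(["ffmpeg", "-y", "-i", reference,
                    "-c:v", "libx264", "-crf", str(crf), out], check=True)
    encode_time = time.time() - t0

    # Compare the encoded output against the reference with the psnr filter.
    result = subprocess.run(["ffmpeg", "-i", out, "-i", reference,
                             "-lavfi", "psnr", "-f", "null", "-"],
                            capture_output=True, text=True)
    match = re.search(r"average:(\d+\.?\d*)", result.stderr)
    return {"encode_time_s": encode_time,
            "psnr_db": float(match.group(1)) if match else None}

# print(encode_and_measure("reference.y4m"))
```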
MPEG Systems reaches the first milestone for supporting Versatile Video Coding (VVC) and Essential Video Coding (EVC) in the Common Media Application Format (CMAF)
At the 133rd MPEG meeting, MPEG Systems promoted Amendment 2 of the Common Media Application Format (CMAF) to Committee Draft Amendment (CDAM) status, the first major milestone in the ISO/IEC approval process. This amendment defines:
constraints to (i) Versatile Video Coding (VVC) and (ii) Essential Video Coding (EVC) video elementary streams when carried in a CMAF video track;
codec parameters to be used for CMAF switching sets with VVC and EVC tracks; and
support of the newly introduced MPEG-H 3D Audio profile.
It is expected to reach its final milestone in early 2022. For research aspects related to CMAF, the reader is referred to the next section about DASH.
MPEG Systems continuously enhances Dynamic Adaptive Streaming over HTTP (DASH)
At the 133rd MPEG meeting, MPEG Systems promoted Part 8 of Dynamic Adaptive Streaming over HTTP (DASH), also referred to as “Session-based DASH”, to its final stage of standardization (i.e., Final Draft International Standard (FDIS)).
Historically, in DASH, every client uses the same Media Presentation Description (MPD), as it best serves the scalability of the service. However, there have been increasing requests from the industry to enable customized manifests for enabling personalized services. MPEG Systems has standardized a solution to this problem without sacrificing scalability. Session-based DASH adds a mechanism to the MPD to refer to another document, called Session-based Description (SBD), which allows per-session information. The DASH client can use this information (i.e., variables and their values) provided in the SBD to derive the URLs for HTTP GET requests.
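A simplified, hedged view of the client-side behaviour: per-session key/value pairs (as an SBD would provide them) are substituted into a segment URL template before issuing HTTP GET requests. The variable names, URL, and template syntax below are illustrative only; the normative syntax is defined by the Session-based DASH specification itself.

```python
from string import Template

# Hypothetical per-session key/value pairs, as they could be provided by an SBD.
sbd_values = {"sessionId": "abc123", "cdnToken": "tok-42"}

# Illustrative segment URL template carrying per-session variables.
url_template = Template(
    "https://cdn.example.com/video/seg_$Number.m4s?sid=$sessionId&token=$cdnToken")

def segment_url(number: int) -> str:
    # The DASH client resolves the template with session-specific values.
    return url_template.substitute(Number=number, **sbd_values)

print(segment_url(17))
```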
An updated overview of DASH standards/features can be found in the Figure below.
MPEG DASH Status as of January 2021.
Research aspects: CMAF is most likely becoming the main segment format used in the context of HTTP adaptive streaming (HAS) and, thus, also in DASH (hence the name common media application format). Supporting a plethora of media coding formats will inevitably result in a multi-codec dilemma to be addressed in the near future, as there will be no flag day on which everyone switches to a new coding format. Thus, designing efficient bitrate ladders for multi-codec delivery will be an interesting research aspect, which needs to include device/player support (i.e., some devices/players will support only a subset of the available codecs), storage capacity/costs within the cloud as well as within the delivery network, and network distribution capacity/costs (i.e., CDN costs).
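As a toy illustration of these multi-codec trade-offs, the sketch below decides which codec ladders to package, given made-up device support shares and storage costs; every number and the selection policy itself are illustrative assumptions, not recommendations.

```python
# Hypothetical per-codec bitrate ladders (kbps) and device support shares.
ladders = {
    "h264": [235, 750, 1750, 4300, 8100],
    "hevc": [200, 600, 1400, 3400, 6000],
    "av1": [180, 520, 1200, 2900, 5200],
}
device_support = {"h264": 1.00, "hevc": 0.70, "av1": 0.35}
storage_cost_per_gb = 0.02  # illustrative only

def ladder_storage_gb(bitrates_kbps, duration_s=3600):
    # Storage needed for one hour of content across all renditions of a ladder.
    return sum(b * 1000 * duration_s / 8 / 1e9 for b in bitrates_kbps)

# Naive policy: keep H.264 as a universal fallback and add a newer codec only
# if the audience it reaches justifies the extra storage/packaging cost.
selected = ["h264"]
for codec in ("hevc", "av1"):
    extra_cost = ladder_storage_gb(ladders[codec]) * storage_cost_per_gb
    if device_support[codec] > 0.5 and extra_cost < 1.0:
        selected.append(codec)

print(selected, {c: round(ladder_storage_gb(ladders[c]), 2) for c in selected})
```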
The 134th MPEG meeting will again be an online meeting in April 2021. Click here for more information about MPEG meetings and their developments.
The 90th JPEG meeting was held online from 18 to 22 January 2021. This meeting was distinguished by very relevant activities, notably the planning of the new JPEG AI standardization project and the analysis of the response to the Call for Evidence on JPEG Pleno Point Cloud Coding.
The new JPEG AI Learning-based Image Coding System has become an official new work item registered under ISO/IEC 6048 and aims at providing compression efficiency in addition to supporting image processing and computer vision tasks without the need for decompression.
The response to the Call for Evidence on JPEG Pleno Point Cloud Coding was a learning-based method that was found to offer state-of-the-art compression efficiency. Considering this response, the JPEG Pleno Point Cloud activity will analyse the possibility of preparing a future call for proposals on learning-based coding solutions that will also consider new functionalities, building on the relevant use cases already identified that require machine learning tasks processed in the compressed domain.
Meanwhile, the new JPEG XL coding system has reached the FDIS stage and is ready for adoption. JPEG XL offers compression efficiency similar to the best state of the art in image coding, the best lossless compression performance, affordable low complexity, and integration with the legacy JPEG image coding standard, allowing a friendly transition between the two standards.
The new JPEG AI logo.
The 90th JPEG meeting had the following highlights:
JPEG AI,
JPEG Pleno Point Cloud response to the Call for Evidence,
JPEG XL Core Coding System reaches FDIS stage,
JPEG Fake Media exploration,
JPEG DNA continues the exploration on image coding suitable for DNA storage,
JPEG systems,
JPEG XS 2nd edition of Profiles reaches DIS stage.
JPEG AI
The scope of JPEG AI is the creation of a learning-based image coding standard offering a single-stream, compact compressed-domain representation, targeting both human visualization, with significant compression efficiency improvement over image coding standards in common use at equivalent subjective quality, and effective performance for image processing and computer vision tasks, with the goal of supporting a royalty-free baseline.
JPEG AI made several advances during the 90th technical meeting. During this meeting, the JPEG AI Use Cases and Requirements were discussed and collaboratively defined. Moreover, the JPEG AI vision and the overall system framework of an image compression solution with an efficient compressed-domain representation were defined. Following this approach, a set of exploration experiments was defined to assess the capabilities of the compressed representation generated by learning-based image codecs, considering some specific computer vision and image processing tasks.
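One way to picture these exploration experiments is a task head operating directly on the codec’s latent representation instead of on decoded pixels. The PyTorch sketch below is purely illustrative: the latent shape, channel count, and classifier head are assumptions and not part of JPEG AI.

```python
import torch
import torch.nn as nn

class LatentClassifier(nn.Module):
    """Hypothetical task head running on a learned codec's latent tensor."""

    def __init__(self, latent_channels: int = 192, num_classes: int = 1000):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(latent_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, num_classes),
        )

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        # Classification is performed in the compressed domain, i.e., without
        # reconstructing pixels first.
        return self.head(latent)

latent = torch.randn(1, 192, 16, 16)  # stand-in for an entropy-decoded latent
print(LatentClassifier()(latent).shape)  # torch.Size([1, 1000])
```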
Moreover, the performance assessment of the most popular objective quality metrics, using subjective scores obtained during the Call for Evidence, was discussed, as well as anchors and some techniques to perform spatial prediction and entropy coding.
JPEG Pleno Point Cloud response to the Call for Evidence
JPEG Pleno is working towards the integration of various modalities of plenoptic content under a single and seamless framework. Efficient and powerful point cloud representation is a key feature within this vision. Point cloud data supports a wide range of applications including computer-aided manufacturing, entertainment, cultural heritage preservation, scientific research and advanced sensing and analysis. During the 90th JPEG meeting, the JPEG Committee reached an exciting major milestone and reviewed the results of its Final Call for Evidence on JPEG Pleno Point Cloud Coding. With an innovative Deep Learning based point cloud codec supporting scalability and random access submitted, the Call for Evidence results highlighted the emerging role of Deep Learning in point cloud representation and processing. Between the 90th and 91st meetings, the JPEG Committee will be refining the scope and direction of this activity in light of the results of the Call for Evidence.
JPEG XL Core Coding System reaches FDIS stage
The JPEG Committee has finalized JPEG XL Part 1 (Core Coding System), which is now at FDIS stage. The committee has defined new core experiments to determine appropriate profiles and levels for the codec, as well as appropriate criteria for defining conformance. With Part 1 complete, and Part 2 close to completion, JPEG XL is ready for evaluation and adoption by the market.
JPEG Fake Media exploration
The JPEG Committee initiated the JPEG Fake Media exploration study with the objective to create a standard that can facilitate the secure and reliable annotation of media asset generation and modifications. The initiative aims to support usage scenarios that are in good faith as well as those with malicious intent. During the 90th JPEG meeting, the committee released a new version of the document entitled “JPEG Fake Media: Context, Use Cases and Requirements”, which is available on the JPEG website. A first workshop on the topic was organized on the 15th of December 2020. The program, presentations and a video recording of this workshop are available on the JPEG website. A second workshop will be organized around March 2021; more details will be made available soon on JPEG.org.
JPEG invites interested parties to regularly visit https://jpeg.org/jpegfakemedia for the latest information and to subscribe to the mailing list via http://listregistration.jpeg.org.
JPEG DNA continues the exploration on image coding suitable for DNA storage
The JPEG Committee continued its exploration of coding of images in quaternary representation, particularly suitable for DNA storage. After a second successful workshop with presentations by stakeholders, additional requirements were identified, and a new version of the JPEG DNA overview document was issued and made publicly available. It was decided to continue this exploration by organising a third workshop and further outreach to stakeholders, as well as by preparing an updated version of the JPEG DNA overview document. Interested parties are invited to refer to the following URL and to consider joining the effort by registering to the mailing list of JPEG DNA here: https://jpeg.org/jpegdna/index.html.
JPEG Systems
The JUMBF (ISO/IEC 19566-5) Amendment 1 draft review is complete, and it is proceeding to international standard and subsequent publication; additional features to support new applications are under consideration. Likewise, the JPEG 360 (ISO/IEC 19566-6) Amendment 1 draft review is complete, and it is proceeding to international standard and subsequent publication. The JLINK (ISO/IEC 19566-7) standard completed the committee draft review and is preparing a DIS study text ahead of the 91st meeting. JPEG Snack (ISO/IEC 19566-8) will proceed with a second working draft. Interested parties can subscribe to the mailing list of the JPEG Systems AHG in order to contribute to the above activities.
JPEG XS 2nd edition of Profiles reaches DIS stage
The 2nd edition of Part 2 (Profiles) is now at the DIS stage and defines the required new profiles and levels to support the compression of raw Bayer content, mathematically lossless coding of up to 12-bit per component images, and 4:2:0 sampled image content. With the second editions of Parts 1, 2, and 3 completed, and the scheduled second editions of Part 4 (Conformance) and 5 (Reference Software), JPEG XS will soon have received a complete backwards-compatible revision of its entire suite of standards. Moreover, the committee defined a new exploration study to create new coding tools for improving the HDR and mathematically lossless compression capabilities, while still honoring the low-complexity and low-latency requirements.
Final Quote
“The official approval of JPEG AI by JPEG Parent Bodies ISO and IEC is a strong signal of support of this activity and its importance in the creation of AI-based imaging applications” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.
Future JPEG meetings are planned as follows:
No 91, will be held online from April 19 to 23, 2021.
No 92, will be held online from July 7 to 13, 2021.
Welcome to the third column on the ACM SIGMM Records from the Video Quality Experts Group (VQEG). The last VQEG plenary meeting took place online from 14 to 18 December. Given the current circumstances, it was organized entirely online for the second time, with multiple sessions distributed over five to six hours each day, allowing remote participation of people from different time zones. About 130 participants from 24 different countries registered for the meeting and could attend the several presentations and discussions that took place in all working groups. This column provides an overview of this meeting, while all the information, minutes, files (including the presented slides), and video recordings from the meeting are available online on the VQEG meeting website.
As highlights of interest for the SIGMM community, apart from several interesting presentations of state-of-the-art works, relevant contributions to ITU recommendations related to multimedia quality assessment were reported from various groups (e.g., on adaptive bitrate streaming services, on subjective quality assessment of 360-degree videos, on statistical analysis of quality assessments, on gaming applications, etc.), the new group on quality assessment for health applications was launched, an interesting session on 5G use cases took place, and a workshop was dedicated to user testing during Covid-19. In addition, new efforts have been launched related to research on quality metrics for live media streaming applications and to providing guidelines on implementing objective video quality metrics (beyond PSNR) to the video compression community. We encourage those readers interested in any of the activities going on in the working groups to check their websites and subscribe to the corresponding reflectors in order to follow them and get involved.
Overview of VQEG Projects
Audiovisual HD (AVHD)
AVHD/P.NATS2 was a joint project between VQEG and ITU SG12, whose goal was to develop a multitude of objective models, varying in terms of complexity, type of input, and use cases, for the assessment of video quality in adaptive bitrate streaming services over reliable transport up to 4K. The report of this project, which finished in January 2020, was approved in this meeting. In summary, it resulted in 10 model categories with models trained and validated on 26 subjective datasets. This activity resulted in 4 ITU standards (ITU-T Rec. P.1204 [1], P.1204.3 [2], P.1204.4 [3], and P.1204.5 [4]), a dataset created during this effort, and a journal publication reporting details on the validation tests [5]. In this sense, one presentation by Alexander Raake (TU Ilmenau) provided details on the P.NATS Phase 2 project and the resulting ITU recommendations, while details of the processing chain used in the project were presented by Werner Robitza (AVEQ GmbH) and David Lindero (Ericsson). In addition to this activity, there were various presentations covering topics related to this group. For instance, Cindy Chen, Deepa Palamadai Sundar, and Visala Vaduganathan (Facebook) presented their work on hardware acceleration of video quality metrics. Also from Facebook, Haixiong Wang presented their work on efficient measurement of quality at scale in their video ecosystem [6]. Lucjan Janowski (AGH University) proposed a discussion on more ecologically valid subjective experiments, Alan Bovik (University of Texas at Austin) presented a hitchhiker’s guide to SSIM, and Ali Ak (Université de Nantes) presented a comprehensive analysis of crowdsourcing for subjective evaluation of tone mapping operators. Finally, Rohit Puri (Twitch) opened a discussion on research on QoE metrics for live media streaming applications, which led to the agreement to start a new sub-project on this topic within the AVHD group.
Psycho-Physiological Quality Assessment (PsyPhyQA)
The chairs of the PsyPhyQA group provided an update on the activities carried out. A test plan for psychophysiological video quality assessment was established, and the group is currently aiming to develop ideas for carrying out quality assessment tests with psychophysiological measures in times of a pandemic, and to collect and discuss ideas about possible joint works. In addition, the project is trying to learn about physiological correlates of simulator sickness, and in this sense, a presentation was delivered by J.P. Tauscher (Technische Universität Braunschweig) on exploring neural and peripheral physiological correlates of simulator sickness. Finally, Waqas Ellahi (Université de Nantes) gave a presentation on visual fidelity of tone mapping operators from gaze data using HMM [7].
Computer Generated Imagery (CGI)
The report from the chairs of the CGI group covered the progress on the research on assessment methodologies for quality assessment of gaming services (e.g., ITU-T P.809 [10]), on crowdsourcing quality assessment for gaming applications (ITU-T P.808 [11]), on quality prediction and opinion models for cloud gaming (e.g., ITU-T G.1072 [12]), and on models (signal-, bitstream-, and parametric-based models) for video quality assessment of CGI content (e.g., nofu, NDNetGaming, GamingPara, DEMI, NR-GVQM, etc.). In terms of planned activities, the group is targeting the generation of new gaming datasets and tools for metrics to assess gaming QoE, and it is also aiming at identifying other topics of interest in CGI beyond gaming content. In addition, there was a presentation on updates on gaming standardization activities and deep learning models for gaming quality prediction by Saman Zadtootaghaj (TU Berlin), another one on subjective assessment of multi-dimensional aesthetic assessment for mobile game images by Suiyi Ling (Université de Nantes), and one addressing quality assessment of gaming videos compressed via AV1 by Maria Martini (Kingston University London), leading to interesting discussions on those topics.
Quality Assessment for Computer Vision Applications (QACoViA)
The QACoViA group announced Lu Zhang (INSA Rennes) as new third co-chair, who will also work in the near future in a project related to image compression for optimized recognition by distributed neural networks. In addition, Mikołaj Leszczuk (AGH University) presented a report on a recently finished project related to objective video quality assessment method for recognition tasks, in collaboration with Huawei through its Innovation Research Programme.
5G Key Performance Indicators (5GKPI)
The 5GKPI session was oriented to identify possible interested partners and joint works (e.g., contribution to ITU-T SG12 recommendation G.QoE-5G [14], generation of open/reference datasets, etc.). In this sense, it included four presentations of use cases of interest: tele-operated driving by Yungpeng Zang (5G Automotive Association), content production related to the European project 5G-Records by Paola Sunna (EBU), Augmented/Virtual Reality by Bill Krogfoss (Bell Labs Consulting), and QoE for remote controlled use cases by Kjell Brunnström (RISE).
Immersive Media Group (IMG)
A report on the updates within the IMG group was initially presented, especially covering the current joint work investigating the subjective quality assessment of 360-degree video. In particular, a cross-lab test, involving 10 different labs, was carried out at the beginning of 2020, resulting in relevant outcomes including various contributions to ITU SG12/Q13 and the MPEG AhG on Quality of Immersive Media. It is worth noting that the new ITU-T recommendation P.919 [15], related to subjective quality assessment of 360-degree videos (in line with ITU-R BT.500 [8] or ITU-T P.910 [13]), was approved in mid-October and was supported by the results of these cross-lab tests. Furthermore, since these tests have already finished, there was a presentation by Pablo Pérez (Nokia Bell-Labs) on possible future joint activities within IMG, which led to an open discussion that will continue in future audio calls. In addition, a total of four talks covered topics related to immersive media technologies, including an update from the Audiovisual Technology Group of the TU Ilmenau on immersive media topics, and a presentation of a no-reference quality metric for light field content based on a structural representation of the epipolar plane image by Ali Ak and Patrick Le Callet (Université de Nantes) [16]. Also, there were two presentations related to 3D graphical contents, one addressing the perceptual characterization of 3D graphical contents based on visual attention patterns by Mona Abid (Université de Nantes), and another one comparing subjective methods for quality assessment of 3D graphics in virtual reality by Yana Nehmé (INSA Lyon).
Intersector Rapporteur Group on Audiovisual Quality Assessment (IRG-AVQA) and Q19 Interim Meeting
Chulhee Lee (Yonsei University) chaired the IRG-AVQA session, providing an overview of the progress and recent works within ITU-R WP6C on HDR-related topics and ITU-T SG12 Questions 9, 13, 14, 19 (e.g., P.NATS Phase 2 and follow-ups, subjective assessment of 360-degree video, QoE factors for AR applications, etc.). In addition, a new work item was announced within ITU-T SG9: End-to-end network characteristics requirements for video services (J.pcnp-char [17]). From the discussions raised during this session, a new dedicated group was set up to introduce objective video quality metrics, beyond PSNR, to the video compression community and to provide guidelines on implementing them. The group was named “Implementers Guide for Video Quality Metrics (IGVQM)” and will be chaired by Ioannis Katsavounidis (Facebook), counting on the involvement of several people from VQEG. After the IRG-AVQA session, the Q19 interim meeting took place, with a report by Chulhee Lee and a presentation by Zhi Li (Netflix) on improvements to the subjective experiment data analysis process.
Other updates
Apart from the aforementioned groups, the Human Factors for Visual Experiences (HFVE) group is still active, coordinating VQEG activities in liaison with the IEEE Standards Association Working Groups on HFVE, especially on perceptual quality assessment of 3D, UHD and HD contents, quality of experience assessment for VR and MR, quality assessment of light-field imaging contents, and deep-learning-based assessment of visual experience based on human factors. In this sense, there are ongoing contributions from VQEG members to IEEE Standards. In addition, there was a workshop dedicated to user testing during Covid-19, which included a presentation on precautions for lab experiments by Kjell Brunnström (RISE), another presentation by Babak Naderi (TU Berlin) on subjective tests during the pandemic, and a break-out session for discussions on the topic.
Finally, the next VQEG plenary meeting will take place in spring 2021 (exact dates still to be agreed), probably online again.
JPEG initiates standardisation of image compression based on AI
The 89th JPEG meeting was held online from 5 to 9 October 2020.
During this meeting, multiple JPEG standardisation activities and explorations were discussed and progressed. Notably, the call for evidence on learning-based image coding was successfully completed and evidence was found that this technology promises several new functionalities while offering at the same time superior compression efficiency, beyond the state of the art. A new work item, JPEG AI, that will use learning-based image coding as core technology has been proposed, enlarging the already wide families of JPEG standards.
Figure 1. JPEG Families of standards and JPEG AI.
The 89th JPEG meeting had the following highlights:
JPEG AI call for evidence report
JPEG explores standardization needs to address fake media
JPEG Pleno Point Cloud Coding reviews the status of the call for evidence
JPEG Pleno Holography call for proposals timeline
JPEG DNA identifies use cases and requirements
JPEG XL standard defines the final specification
JPEG Systems JLINK reaches committee draft stage
JPEG XS 2nd Edition Parts 1, 2 and 3.
JPEG AI
At the 89th meeting, the submissions to the Call for Evidence on learning-based image coding were presented and discussed. Four submissions were received in response to the Call for Evidence. The results of the subjective evaluation of the submissions were reported and discussed in detail by experts. It was agreed that there is strong evidence that learning-based image coding solutions can outperform the already defined anchors in terms of compression efficiency when compared to state-of-the-art conventional image coding architectures. Thus, it was decided to create a new standardisation activity for JPEG AI, a learning-based image coding system that applies machine learning tools to achieve substantially better compression efficiency compared to current image coding systems, while offering unique features desirable for efficient distribution and consumption of images. This type of approach should allow obtaining an efficient compressed-domain representation not only for visualisation but also for machine-learning-based image processing and computer vision. JPEG AI has released to the public the results of the objective and subjective evaluations as well as the first version of the common test conditions for assessing the performance of learning-based image coding systems.
JPEG explores standardization needs to address fake media
Recent advances in media modification, particularly deep learning-based approaches, can produce near realistic media content that is almost indistinguishable from authentic content. These developments open opportunities for production of new types of media contents that are useful for many creative industries but also increase risks of spread of maliciously modified content (e.g., ‘deepfake’) leading to social unrest, spreading of rumours or encouragement of hate crimes. The JPEG Committee is interested in exploring if a JPEG standard can facilitate a secure and reliable annotation of media modifications, both in good faith and malicious usage scenarios.
The JPEG Committee is currently discussing with stakeholders from academia, industry and other organisations to explore the use cases that will define a roadmap to identify the requirements leading to a potential standard. The Committee has received significant interest and has released a public document outlining the context, use cases and requirements. JPEG invites experts and technology users to actively participate in this activity and attend a workshop, to be held online in December 2020. Details on the activities of JPEG in this area can be found on the JPEG.org website. Interested parties are notably encouraged to register to the mailing list of the ad hoc group that has been set up to facilitate the discussions and coordination on this topic.
JPEG Pleno Point Cloud Coding
JPEG Pleno is working towards the integration of various modalities of plenoptic content under a single and seamless framework. Efficient and powerful point cloud representation is a key feature within this vision. Point cloud data supports a wide range of applications including computer-aided manufacturing, entertainment, cultural heritage preservation, scientific research and advanced sensing and analysis. During the 89th JPEG meeting, the JPEG Committee reviewed expressions of interest in the Final Call for Evidence on JPEG Pleno Point Cloud Coding. This Call for Evidence focuses specifically on point cloud coding solutions supporting scalability and random access of decoded point clouds. Between its 89th and 90th meetings, the JPEG Committee will be actively promoting this activity and collecting submissions to participate in the Call for Evidence.
JPEG Pleno Holography
At the 89th meeting, the JPEG Committee released an updated draft of the Call for Proposals for JPEG Pleno Holography. A final Call for Proposals on JPEG Pleno Holography will be released in April 2021. JPEG Pleno Holography is seeking compression solutions for holographic content. The scope of the activity is quite large and addresses diverse use cases such as holographic microscopy and tomography, but also holographic displays and printing. Current activities are centred around refining the objective and subjective quality assessment procedures. Interested parties are already invited at this stage to participate in these activities.
JPEG DNA
JPEG standards are used in storage and archival of digital pictures. This puts the JPEG Committee in a good position to address the challenges of DNA-based storage by proposing an efficient image coding format to create artificial DNA molecules. JPEG DNA has been established as an exploration activity within the JPEG Committee to study use cases, to identify requirements and to assess the state of the art in DNA storage for the purpose of image archival using DNA in order to launch a standardization activity. To this end, a first workshop was organised on 30 September 2020. Presentations made at the workshop are available from the following URL: http://ds.jpeg.org/proceedings/JPEG_DNA_1st_Workshop_Proceedings.zip. At its 89th meeting, the JPEG Committee released a second version of a public document that describes its findings regarding storage of digital images using artificial DNA. In this framework, JPEG DNA ad hoc group was re-conducted in order to continue its activities to further refine the above-mentioned document and to organise a second workshop. Interested parties are invited to join this activity by participating in the AHG through the following URL: http://listregistration.jpeg.org.
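As a rough illustration of what a quaternary representation means, the toy sketch below maps a byte stream to the four nucleotide symbols and back; it deliberately ignores the biochemical constraints (e.g., avoiding long homopolymer runs) that an actual image coding format for DNA storage would have to respect.

```python
# Toy illustration only: two bits per nucleotide, four nucleotides per byte.
NUCLEOTIDES = "ACGT"

def bytes_to_quaternary(data: bytes) -> str:
    symbols = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # four 2-bit symbols per byte
            symbols.append(NUCLEOTIDES[(byte >> shift) & 0b11])
    return "".join(symbols)

def quaternary_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | NUCLEOTIDES.index(ch)
        out.append(byte)
    return bytes(out)

encoded = bytes_to_quaternary(b"\xff\xd8")  # JPEG SOI marker as example payload
assert quaternary_to_bytes(encoded) == b"\xff\xd8"
print(encoded)  # TTTTTCGA
```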
JPEG XL
Final technical comments by national bodies have been addressed and incorporated into the JPEG XL specification (ISO/IEC 18181-1) and the reference implementation. A draft FDIS study text has been prepared and final validation experiments are planned.
JPEG Systems
The JLINK (ISO/IEC 19566-7) standard has reached the committee draft stage and will be made public. The JPEG Committee invites technical feedback on the document, which is available on the JPEG website. Development of the JPEG Snack (ISO/IEC 19566-8) standard has begun to support the defined use cases and requirements. Interested parties can subscribe to the mailing list of the JPEG Systems AHG in order to contribute to the above activities.
JPEG XS
The JPEG committee is finalizing its work on the 2nd Editions of JPEG-XS Part 1, Part 2 and Part 3. Part 1 defines new coding tools required to efficiently compress raw Bayer images. The observed quality gains of raw Bayer compression over compressing in the RGB domain can be as high as 5dB PSNR. Moreover, the second edition adds support for mathematically lossless image compression and allows compression of 4:2:0 sub-sampled images. Part 2 defines new profiles for such content. With the support for low-complexity high-quality compression of raw Bayer (or Color-Filtered Array) data, JPEG XS proves to also be an excellent compression scheme in the professional and consumer digital camera market, as well as in the machine vision and automotive industry.
Final Quote
“JPEG AI will be a new work item completing the collection of JPEG standards. JPEG AI relies on artificial intelligence to compress images. This standard not only will offer superior compression efficiency beyond the current state of the art but also will open new possibilities for vision tasks by machines and computational imaging for humans,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.
Future JPEG meetings are planned as follows:
No 90, will be held online from January 18 to 22, 2021.
No 91, will be held online from April 19 to 23, 2021.
The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.
The 132nd MPEG meeting was the first meeting with the new structure. That is, ISO/IEC JTC 1/SC 29/WG 11 — the official name of MPEG under the ISO structure — was disbanded after the 131st MPEG meeting and some of the subgroups of WG 11 (MPEG) have been elevated to independent MPEG Working Groups (WGs) and Advisory Groups (AGs) of SC 29 rather than subgroups of the former WG 11. Thus, the MPEG community is now an affiliated group of WGs and AGs that will continue meeting together according to previous MPEG meeting practices and will further advance the standardization activities of the MPEG work program.
The 132nd MPEG meeting was the first meeting with the new structure as follows (incl. Convenors and position within WG 11 structure):
AG 2 MPEG Technical Coordination (Convenor: Prof. Jörn Ostermann; for overall MPEG work coordination and prev. known as the MPEG chairs meeting; it’s expected that one can also provide inputs to this AG without being a member of this AG)
WG 2 MPEG Technical Requirements (Convenor Dr. Igor Curcio; former Requirements subgroup)
WG 3 MPEG Systems (Convenor: Dr. Youngkwon Lim; former Systems subgroup)
WG 4 MPEG Video Coding (Convenor: Prof. Lu Yu; former Video subgroup)
WG 5 MPEG Joint Video Coding Team(s) with ITU-T SG 16 (Convenor: Prof. Jens-Rainer Ohm; former JVET)
WG 6 MPEG Audio Coding (Convenor: Dr. Schuyler Quackenbush; former Audio subgroup)
WG 7 MPEG Coding of 3D Graphics (Convenor: Prof. Marius Preda, former 3DG subgroup)
WG 8 MPEG Genome Coding (Convenor: Prof. Marco Mattavelli; newly established WG)
AG 3 MPEG Liaison and Communication (Convenor: Prof. Kyuheon Kim; former Communications subgroup)
AG 5 MPEG Visual Quality Assessment (Convenor: Prof. Mathias Wien; former Test subgroup).
The 132nd MPEG meeting was held as an online meeting and more than 300 participants continued to work efficiently on standards for the future needs of the industry. As a group, MPEG started to explore new application areas that will benefit from standardized compression technology in the future. A new web site has been created and can be found at http://mpeg.org/.
The official press release can be found here and comprises the following items:
Versatile Video Coding (VVC) Ultra-HD Verification Test Completed and Conformance and Reference Software Standards Reach their First Milestone
MPEG Completes Geometry-based Point Cloud Compression (G-PCC) Standard
MPEG Evaluates Extensions and Improvements to MPEG-G and Announces a Call for Evidence on New Advanced Genomics Features and Technologies
MPEG Issues Draft Call for Proposals on the Coded Representation of Haptics
MPEG Evaluates Responses to MPEG IPR Smart Contracts CfP
MPEG Completes Standard on Harmonization of DASH and CMAF
MPEG Completes 2nd Edition of the Omnidirectional Media Format (OMAF)
MPEG Completes the Low Complexity Enhancement Video Coding (LCEVC) Standard
In this report, I’d like to focus on VVC, G-PCC, DASH/CMAF, OMAF, and LCEVC.
Versatile Video Coding (VVC) Ultra-HD Verification Test Completed and Conformance & Reference Software Standards Reach their First Milestone
MPEG completed a verification testing assessment of the recently ratified Versatile Video Coding (VVC) standard for ultra-high definition (UHD) content with standard dynamic range, as may be used in newer streaming and broadcast television applications. The verification test was performed using rigorous subjective quality assessment methods and showed that VVC provides a compelling gain over its predecessor — the High-Efficiency Video Coding (HEVC) standard produced in 2013. In particular, the verification test was performed using the VVC reference software implementation (VTM) and the recently released open-source encoder implementation of VVC (VVenC):
Using its reference software implementation (VTM), VVC showed bit rate savings of roughly 45% over HEVC for comparable subjective video quality.
Using VVenC, additional bit rate savings of more than 10% relative to VTM were observed, which at the same time runs significantly faster than the reference software implementation.
Additionally, the standardization work for both conformance testing and reference software for the VVC standard reached its first major milestone, i.e., progressing to the Committee Draft ballot in the ISO/IEC approval process. The conformance testing standard (ISO/IEC 23090-15) will ensure interoperability among the diverse applications that use the VVC standard, and the reference software standard (ISO/IEC 23090-16) will provide an illustration of the capabilities of VVC and a valuable example showing how the standard can be implemented. The reference software will further facilitate the adoption of the standard by being available for use as the basis of product implementations.
Research aspects: as for every new video codec, its compression efficiency and computational complexity are important performance metrics. While the reference software (VTM) provides a valid reference in terms of compression efficiency, it is not optimized for runtime. VVenC already seems to provide a significant improvement, and with x266 another open-source implementation will be available soon. Together with AOMedia’s AV1 (including its possible successor AV2), we are looking forward to a lively future in the area of video codecs.
MPEG Completes Geometry-based Point Cloud Compression Standard
MPEG promoted its ISO/IEC 23090-9 Geometry-based Point Cloud Compression (G-PCC) standard to the Final Draft International Standard (FDIS) stage. G-PCC addresses lossless and lossy coding of time-varying 3D point clouds with associated attributes such as color and material properties. This technology is particularly suitable for sparse point clouds. ISO/IEC 23090-5 Video-based Point Cloud Compression (V-PCC), which reached the FDIS stage in July 2020, addresses the same problem but for dense point clouds, by projecting the (typically dense) 3D point clouds onto planes, and then processing the resulting sequences of 2D images using video compression techniques. The generalized approach of G-PCC, where the 3D geometry is directly coded to exploit any redundancy in the point cloud itself, is complementary to V-PCC and particularly useful for sparse point clouds representing large environments.
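As a rough illustration of the kind of geometry processing involved, the sketch below quantizes point coordinates to a voxel grid and drops duplicates, a typical pre-processing step before occupancy coding; the actual G-PCC tools (octree coding, attribute transforms, etc.) are not shown, and the numbers are arbitrary.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Quantize 3D points to a voxel grid and keep each occupied voxel once."""
    quantized = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(quantized, axis=0)

points = np.random.rand(100_000, 3) * 10.0  # synthetic point cloud in a 10 m cube
voxels = voxelize(points, voxel_size=0.05)
print(len(points), "->", len(voxels), "occupied voxels")
```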
Point clouds are typically represented by extremely large amounts of data, which is a significant barrier to mass-market applications. However, the relative ease of capturing and rendering spatial information compared to other volumetric video representations makes point clouds increasingly popular for displaying immersive volumetric data. The current draft reference software implementation of a lossless, intra-frame G‐PCC encoder provides a compression ratio of up to 10:1 and lossy coding of acceptable quality for a variety of applications with a ratio of up to 35:1.
By providing high immersion at currently available bit rates, the G‐PCC standard will enable various applications such as 3D mapping, indoor navigation, autonomous driving, advanced augmented reality (AR) with environmental mapping, and cultural heritage.
Research aspects: the main research focus related to G-PCC and V-PCC is currently on compression efficiency but one should not dismiss its delivery aspects including its dynamic, adaptive streaming. A recent paper on this topic has been published in the IEEE Communications Magazine and is entitled “From Capturing to Rendering: Volumetric Media Delivery With Six Degrees of Freedom“.
MPEG Finalizes the Harmonization of DASH and CMAF
MPEG successfully completed the harmonization of Dynamic Adaptive Streaming over HTTP (DASH) with the Common Media Application Format (CMAF), featuring a DASH profile for use with CMAF (as part of the 1st Amendment of ISO/IEC 23009-1:2019 4th edition).
CMAF and DASH segments are both based on the ISO Base Media File Format (ISOBMFF), which per se enables smooth integration of both technologies. Most importantly, this DASH profile defines (a) a normative mapping of CMAF structures to DASH structures and (b) how to use Media Presentation Description (MPD) as a manifest format. Additional tools added to this amendment include
DASH events and timed metadata track timing and processing models with in-band event streams,
a method for specifying the resynchronization points of segments when the segments have internal structures that allow container-level resynchronization,
an MPD patch framework that allows the transmission of partial MPD information as opposed to the complete MPD using the XML patch framework as defined in IETF RFC 5261, and
content protection enhancements for efficient signalling.
It is expected that the 5th edition of the MPEG DASH standard (ISO/IEC 23009-1) containing this change will be issued at the 133rd MPEG meeting in January 2021. An overview of DASH standards/features can be found in the Figure below.
Research aspects: one of the features enabled by CMAF is low-latency streaming, which is actively researched within the multimedia systems community (e.g., here). The main research focus has been related to the ABR logic, while its impact on the network is not yet fully understood and requires strong collaboration among stakeholders along the delivery path including ingest, encoding, packaging, (encryption), content delivery network (CDN), and consumption. A holistic view on ABR is needed to enable innovation and the next step towards the future generation of streaming technologies (https://athena.itec.aau.at/).
MPEG Completes 2nd Edition of the Omnidirectional Media Format
MPEG completed the standardization of the 2nd edition of the Omnidirectional MediA Format (OMAF) by promoting ISO/IEC 23090-2 to Final Draft International Standard (FDIS) status, including the following features:
“Late binding” technologies to deliver and present only that part of the content that adapts to the dynamically changing users’ viewpoint. To enable an efficient implementation of such a feature, this edition of the specification introduces the concept of bitstream rewriting, in which a compliant bitstream is dynamically generated that, by combining the received portions of the bitstream, covers only the users’ viewport on the client.
Extension of OMAF beyond 360-degree video. This edition introduces the concept of viewpoints, which can be considered as user-switchable camera positions for viewing content or as temporally contiguous parts of a storyline to provide multiple choices for the storyline a user can follow.
Enhanced use of video, image, or timed text overlays on top of omnidirectional visual background video or images related to a sphere or a viewport.
Research aspects: standards usually define formats to enable interoperability, but various informative aspects are left open for industry competition and subject to research and development. The same holds for OMAF, and its 2nd edition enables researchers and developers to work towards efficient viewport-adaptive implementations focusing on the users’ viewport.
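As a toy illustration of viewport-dependent delivery, the sketch below selects the tiles of an 8x4 equirectangular tiling that overlap the user’s current viewing direction; the grid size, field of view, and selection margin are illustrative assumptions and not taken from the OMAF specification.

```python
TILE_COLS, TILE_ROWS = 8, 4  # equirectangular frame split into 8x4 tiles

def tiles_for_viewport(yaw_deg: float, pitch_deg: float,
                       fov_h: float = 90.0, fov_v: float = 60.0):
    """Return the (col, row) indices of tiles overlapping the viewport."""
    selected = set()
    for col in range(TILE_COLS):
        for row in range(TILE_ROWS):
            # Tile centre in degrees (yaw in [-180, 180], pitch in [-90, 90]).
            tile_yaw = -180 + (col + 0.5) * 360 / TILE_COLS
            tile_pitch = 90 - (row + 0.5) * 180 / TILE_ROWS
            dyaw = (tile_yaw - yaw_deg + 180) % 360 - 180  # wrap-around yaw
            if (abs(dyaw) <= fov_h / 2 + 360 / TILE_COLS and
                    abs(tile_pitch - pitch_deg) <= fov_v / 2 + 180 / TILE_ROWS):
                selected.add((col, row))
    return selected

print(sorted(tiles_for_viewport(yaw_deg=30.0, pitch_deg=0.0)))
```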
MPEG Completes the Low Complexity Enhancement Video Coding Standard
MPEG is pleased to announce the completion of the new ISO/IEC 23094-2 standard, i.e., Low Complexity Enhancement Video Coding (MPEG-5 Part 2 LCEVC), which has been promoted to Final Draft International Standard (FDIS) at the 132nd MPEG meeting.
LCEVC adds an enhancement data stream that can appreciably improve the resolution and visual quality of reconstructed video, with effective compression efficiency and limited complexity, by building on top of existing and future video codecs.
LCEVC can be used to complement devices originally designed only for decoding the base layer bitstream, by using firmware, operating system, or browser support. It is designed to be compatible with existing video workflows (e.g., CDNs, metadata management, DRM/CA) and network protocols (e.g., HLS, DASH, CMAF) to facilitate the rapid deployment of enhanced video services.
LCEVC can be used to deliver higher video quality in limited bandwidth scenarios, especially when the available bit rate is low for high-resolution video delivery and decoding complexity is a challenge. Typical use cases include mobile streaming and social media, and services that benefit from high-density/low-power transcoding.
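To picture the enhancement-layer idea described above, the NumPy sketch below upsamples a base-layer frame and adds a coded residual on top; this is only a conceptual analogy and does not reflect the actual LCEVC transforms, syntax, or upsampling filters.

```python
import numpy as np

def enhance(base_frame: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Toy enhancement-layer reconstruction: upsample the base-codec output
    and add a full-resolution residual carried by the enhancement stream."""
    upsampled = base_frame.repeat(2, axis=0).repeat(2, axis=1)  # nearest-neighbour 2x
    return np.clip(upsampled.astype(np.int16) + residual, 0, 255).astype(np.uint8)

base = np.random.randint(0, 256, (540, 960), dtype=np.uint8)         # 960x540 base layer
residual = np.random.randint(-16, 16, (1080, 1920), dtype=np.int16)  # full-res residual
print(enhance(base, residual).shape)  # (1080, 1920)
```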
Research aspects: LCEVC provides a kind of scalable video coding by combining hardware- and software-based decoders that allow for a certain flexibility as part of regular software life cycle updates. However, LCEVC has never been compared to Scalable Video Coding (SVC) and Scalable High-Efficiency Video Coding (SHVC), which could be an interesting aspect for future work.
The 133rd MPEG meeting will again be an online meeting in January 2021.
Click here for more information about MPEG meetings and their developments.