MPEG Column: 142nd MPEG Meeting in Antalya, Türkiye

The 142nd MPEG meeting was held as a face-to-face meeting in Antalya, Türkiye, and the official press release can be found here and comprises the following items:

  • MPEG issues Call for Proposals for Feature Coding for Machines
  • MPEG finalizes the 9th Edition of MPEG-2 Systems
  • MPEG reaches the First Milestone for Storage and Delivery of Haptics Data
  • MPEG completes 2nd Edition of Neural Network Coding (NNC)
  • MPEG completes Verification Test Report and Conformance and Reference Software for MPEG Immersive Video
  • MPEG finalizes work on metadata-based MPEG-D DRC Loudness Leveling

The press release text has been modified to match the target audience of ACM SIGMM and highlight research aspects targeting researchers in video technologies. This column focuses on the 9th edition of MPEG-2 Systems, storage and delivery of haptics data, neural network coding (NNC), MPEG immersive video (MIV), and updates on MPEG-DASH.


Feature Coding for Video Coding for Machines (FCVCM)

At the 142nd MPEG meeting, MPEG Technical Requirements (WG 2) issued a Call for Proposals (CfP) for technologies and solutions enabling efficient feature compression for video coding for machine vision tasks. This work on “Feature Coding for Video Coding for Machines (FCVCM)” aims at compressing intermediate features within neural networks for machine tasks. As applications for neural networks become more prevalent and the neural networks increase in complexity, use cases such as computational offload become more relevant to facilitate the widespread deployment of applications utilizing such networks. Initially as part of the “Video Coding for Machines” activity, over the last four years, MPEG has investigated potential technologies for efficient compression of feature data encountered within neural networks. This activity has resulted in establishing a set of ‘feature anchors’ that demonstrate the achievable performance for compressing feature data using state-of-the-art standardized technology. These feature anchors include tasks performed on four datasets.

Research aspects: FCVCM is about compression, and the central research aspect here is compression efficiency which can be tested against a commonly agreed dataset (anchors). Additionally, it might be attractive to research which features are relevant for video coding for machines (VCM) and quality metrics in this emerging domain. One might wonder whether, in the future, robots or other AI systems will participate in subjective quality assessments.

9th Edition of MPEG-2 Systems

MPEG-2 Systems was first standardized in 1994, defining two container formats: program stream (e.g., used for DVDs) and transport stream. The latter, also known as MPEG-2 Transport Stream (M2TS), is used for broadcast and internet TV applications and services. MPEG-2 Systems was awarded a Technology and Engineering Emmy® in 2013, and at the 142nd MPEG meeting, MPEG Systems (WG 3) ratified the 9th edition of ISO/IEC 13818-1 MPEG-2 Systems. The new edition adds support for Low Complexity Enhancement Video Coding (LCEVC), the youngest member of the MPEG family of video coding standards, on top of more than 50 media stream types, including, but not limited to, 3D Audio and Versatile Video Coding (VVC). The new edition also supports new options for signaling different kinds of media, which can aid the selection of the best audio or other media tracks for specific purposes or user preferences. As an example, it can indicate that a media track provides information about a current emergency.

Research aspects: MPEG container formats such as MPEG-2 Systems and the ISO Base Media File Format are necessary for storing and delivering multimedia content but are often neglected in research. Thus, I would like to take up the cudgels on behalf of the MPEG Systems working group and argue that researchers should pay more attention to these container formats and conduct research and experiments on their efficient use with respect to multimedia storage and delivery.

Storage and Delivery of Haptics Data

At the 142nd MPEG meeting, MPEG Systems (WG 3) reached the first milestone for ISO/IEC 23090-32 entitled “Carriage of haptics data” by promoting the text to Committee Draft (CD) status. This specification enables the storage and delivery of haptics data (defined by ISO/IEC 23090-31) in the ISO Base Media File Format (ISOBMFF; ISO/IEC 14496-12). Considering the nature of haptics data composed of spatial and temporal components, a data unit with various spatial or temporal data packets is used as a basic entity like an access unit of audio-visual media. Additionally, an explicit indication of a silent period considering the sparse nature of haptics data has been introduced in this draft. The standard is planned to be completed, i.e., to reach the status of Final Draft International Standard (FDIS), by the end of 2024.

Research aspects: Coding (ISO/IEC 23090-31) and carriage (ISO/IEC 23090-32) of haptics data go hand in hand and need further investigation concerning compression efficiency and storage/delivery performance with respect to various use cases.

Neural Network Coding (NNC)

Many applications of artificial neural networks for multimedia analysis and processing (e.g., visual and acoustic classification, extraction of multimedia descriptors, or image and video coding) utilize edge-based content processing or federated training. The trained neural networks for these applications contain many parameters (weights), resulting in a considerable size. Therefore, the MPEG standard for the compressed representation of neural networks for multimedia content description and analysis (NNC, ISO/IEC 15938-17, published in 2022) was developed, which provides a broad set of technologies for parameter reduction and quantization to compress entire neural networks efficiently.

Recently, an increasing number of artificial intelligence applications, such as edge-based content processing, content-adaptive video post-processing filters, or federated training, need to exchange updates of neural networks (e.g., after training on additional data or fine-tuning to specific content). Such updates include changes in the neural network parameters but may also involve structural changes in the neural network (e.g., when extending a classification method with a new class). In scenarios like federated training, these updates must be exchanged frequently, requiring much more bandwidth over time than the initial deployment of trained neural networks.

The second edition of NNC addresses these applications through efficient representation and coding of incremental updates and extending the set of compression tools that can be applied to both entire neural networks and updates. Trained models can be compressed to at least 10-20% and, for several architectures, even below 3% of their original size without performance loss. Higher compression rates are possible at moderate performance degradation. In a distributed training scenario, a model update after a training iteration can be represented at 1% or less of the base model size on average without sacrificing the classification performance of the neural network. NNC also provides synchronization mechanisms, particularly for distributed artificial intelligence scenarios, e.g., if clients in a federated learning environment drop out and later rejoin.
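To make the effect of incremental updates more tangible, the following toy sketch (Python/NumPy) shows how a fine-tuning delta can be sparsified and quantized before entropy coding. It is purely illustrative, with made-up thresholds and sizes, and is not the actual NNC (ISO/IEC 15938-17) tool chain.

```python
# Toy illustration of incremental-update compression (NOT the NNC codec):
# sparsify and quantize a weight delta, then estimate its raw size.
import numpy as np

rng = np.random.default_rng(0)
base_model = rng.normal(size=1_000_000).astype(np.float32)   # "deployed" weights
update = base_model + rng.normal(scale=1e-3, size=base_model.shape).astype(np.float32)

delta = update - base_model                    # incremental update after fine-tuning
mask = np.abs(delta) > 2e-3                    # drop near-zero changes (sparsification)
sparse_values = delta[mask]

# Uniform scalar quantization of the surviving values to 8 bits
scale = np.abs(sparse_values).max() / 127.0
quantized = np.round(sparse_values / scale).astype(np.int8)   # would then be entropy coded

# Rough size estimate: 4 bytes per index + 1 byte per quantized value,
# ignoring the entropy-coding stage a real codec would add on top.
update_bytes = mask.sum() * (4 + 1)
print(f"update is roughly {100 * update_bytes / base_model.nbytes:.2f}% of the base model size")
```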

Research aspects: The incremental compression of neural networks enables various new use cases and opens up research opportunities in media coding and communication, including their optimization.

MPEG Immersive Video

At the 142nd MPEG meeting, MPEG Video Coding (WG 4) issued the verification test report of ISO/IEC 23090-12 MPEG immersive video (MIV) and completed the development of the conformance and reference software for MIV (ISO/IEC 23090-23), promoting it to the Final Draft International Standard (FDIS) stage.

MIV was developed to support the compression of immersive video content, in which multiple real or virtual cameras capture a real or virtual 3D scene. The standard enables the storage and distribution of immersive video content over existing and future networks for playback with 6 degrees of freedom (6DoF) of view position and orientation. MIV is a flexible standard for multi-view video plus depth (MVD) and multi-planar video (MPI) that leverages strong hardware support for commonly used video formats to compress volumetric video.

ISO/IEC 23090-23 specifies how to conduct conformance tests and provides reference encoder and decoder software for MIV. This draft includes 23 verified and validated conformance bitstreams spanning all profiles, as well as encoding and decoding reference software based on version 15.1.1 of the test model for MPEG immersive video (TMIV). The test model, objective metrics, and other tools are publicly available at https://gitlab.com/mpeg-i-visual.

Research aspects: Conformance and reference software are usually provided to facilitate product conformance testing, but they also provide researchers with a common platform and dataset, allowing for the reproducibility of their research efforts. Luckily, conformance and reference software are typically publicly available with an appropriate open-source license.

MPEG-DASH Updates

Finally, I’d like to provide a quick update regarding MPEG-DASH, which is getting a new part, namely redundant encoding and packaging for segmented live media (REAP; ISO/IEC 23009-9). The following figure provides the reference workflow for redundant encoding and packaging of live segmented media.

Reference workflow for redundant encoding and packaging of live segmented media.

The reference workflow comprises (i) Ingest Media Presentation Description (I-MPD), (ii) Distribution Media Presentation Description (D-MPD), and (iii) Storage Media Presentation Description (S-MPD), among others; each defining constraints on the MPD and tracks of ISO base media file format (ISOBMFF).

Additionally, the MPEG-DASH break-out group discussed various technologies under consideration, such as (a) combining HTTP GET requests, (b) signaling common media client data (CMCD) and common media server data (CMSD) in an MPEG-DASH MPD, (c) image and video overlays in DASH, and (d) updates on lower latency.
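As a side note on item (b), the sketch below illustrates what CMCD data looks like when a player attaches it to a segment request as a query argument; the key names follow my reading of CTA-5004 (br = encoded bitrate in kbps, bl = buffer length in ms, sf = streaming format, sid = session ID), and the MPD-level signaling discussed in the break-out group is a separate mechanism not shown here.

```python
# Hedged illustration of CMCD as a query argument on a DASH segment request.
from urllib.parse import quote

def with_cmcd(segment_url: str, bitrate_kbps: int, buffer_ms: int, session_id: str) -> str:
    pairs = {
        "br": bitrate_kbps,        # encoded bitrate of the requested object (kbps)
        "bl": buffer_ms,           # current buffer length (ms)
        "sf": "d",                 # streaming format: DASH
        "sid": f'"{session_id}"',  # string values are quoted in CMCD
    }
    payload = ",".join(f"{k}={v}" for k, v in sorted(pairs.items()))
    sep = "&" if "?" in segment_url else "?"
    return f"{segment_url}{sep}CMCD={quote(payload)}"

# Hypothetical example URL and values, for illustration only
print(with_cmcd("https://cdn.example.com/video/seg_42.m4s", 4300, 14200, "6e2fb550"))
```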

An updated overview of DASH standards/features can be found in the Figure below.

Research aspects: The REAP committee draft (CD) is publicly available, and feedback from academia and industry is appreciated. In particular, first performance evaluations and/or reports from proof-of-concept implementations/deployments would be insightful for the next steps in the standardization of REAP.

The 143rd MPEG meeting will be held in Geneva from July 17-21, 2023. Click here for more information about MPEG meetings and their developments.

VQEG Column: Emerging Technologies Group (ETG)

Introduction

This column provides an overview of the new Video Quality Experts Group (VQEG) group called the Emerging Technologies Group (ETG), which was created during the last VQEG plenary meeting in December 2022. For an introduction to VQEG, please check the VQEG homepage or this presentation.

The work addressed by this new group can be of interest to the SIGMM community since it is related to AI-based technologies for image and video processing, greening of streaming, blockchain in media and entertainment, and ongoing related standardization activities.

About ETG

The main objective of this group is to address various aspects of multimedia that do not fall under the scope of any of the existing VQEG groups. Through its activities, the group aims to provide a common platform where people can gather to discuss new and emerging topics and ideas, and explore possible collaborations in the form of joint survey papers/whitepapers, funding proposals, etc. The topics addressed are not necessarily directly related to “video quality” but rather focus on any ongoing work in the field of multimedia which can indirectly impact the work addressed as part of VQEG.

Scope

During the creation of the group, the following topics were tentatively identified to be of possible interest to the members of this group and VQEG in general: 

  • AI-based technologies:
    • Super Resolution
    • Learning-based video compression
    • Video coding for machines, etc., 
    • Enhancement, Denoising and other pre- and post-filter techniques
  • Greening of streaming and related trends
    • For example, trade-off between HDR and SDR to save energy and its impact on visual quality
  • Ongoing Standards Activities (which might impact the QoE of end users and hence will be relevant for VQEG)
    • 3GPP, SVTA, CTA WAVE, UHDF, etc.
    • MPEG/JVET
  • Blockchain in Media and Entertainment

Since the creation of the group, four talks on various topics have been organized, an overview of which is summarized next.

Overview of the Presentations

We briefly provide a summary of various talks that have been organized by the group since its inception.

On the work by MPEG Systems Smart Contracts for Media Subgroup

The first presentation was on the topic of the recent work by MPEG Systems on Smart Contracts for Media [1], delivered by Dr Panos Kudumakis, Head of the UK Delegation, ISO/IEC JTC1/SC29, and Chair of the British Standards Institute (BSI) IST/37. In this talk, Dr Kudumakis highlighted the efforts of the last few years by MPEG towards developing several standardized ontologies catering to the needs of the media industry with respect to the codification of Intellectual Property Rights (IPR) information toward the fair trade of media. However, since the inference and reasoning capabilities normally associated with ontology use cannot naturally be exercised in DLT environments, there is huge potential to unlock the Semantic Web and, in turn, the creative economy by bridging this interoperability gap [2]. In that direction, the ISO/IEC 21000-23 Smart Contracts for Media standard specifies the means (e.g., APIs) for converting MPEG IPR ontologies to smart contracts that can be executed on existing DLT environments [3]. The talk discussed the recent work done as part of this effort as well as the ongoing efforts towards the design of a full-fledged ISO/IEC 23000-23 Decentralized Media Rights Application Format standard based on MPEG technologies (e.g., audio-visual codecs, file formats, streaming protocols, and smart contracts) and non-MPEG technologies (e.g., DLTs, content, and creator IDs).
The recording of the presentation is available here, and the slides can be accessed here.

Introduction to NTIRE Workshop on Quality Assessment for Video Enhancement

The second presentation was given by Xiaohong Liu and Yuxuan Gao from Shanghai Jiao Tong University, China, about one of the CVPR challenge workshops, the NTIRE 2023 Quality Assessment of Video Enhancement Challenge. The presentation described the motivation for starting this challenge and its relevance to the video community in general. The presenters then described the dataset, including the creation process, the subjective tests used to obtain ratings, and the reasoning behind the split of the dataset into training, validation, and test sets. The results of this challenge are scheduled to be presented at the upcoming spring meeting at the end of June 2023. The presentation recording is available here.

Perception: The Next Milestone in Learned Image Compression

Johannes Balle from Google was the third presenter, on the topic of “Perception: The Next Milestone in Learned Image Compression.” In the first part, Johannes discussed learned compression, describing nonlinear transforms [4] and how they can achieve higher image compression performance than linear transforms. The talk then emphasized the importance of perceptual metrics over pure distortion metrics by introducing the difference between perceptual quality and reconstruction quality [5]. Next, HiFiC [6] was presented as an example of generative image compression in which a distortion metric and a perceptual metric (referred to as a realism criterion) are combined. Finally, the talk concluded with an introduction to perceptual spaces and an example of a perceptual metric, PIM [7]. The presentation slides can be found here.

Compression with Neural Fields

Emilien Dupont (DeepMind) was the fourth presenter. He started the talk with a short introduction on the emergence of neural compression that fits a signal, e.g., an image or video, to a neural network. He then discussed two recent works on neural compression that he was involved in, COIN [8] and COIN++ [9], and gave a short overview of other implicit neural representations for video, such as NeRV [10] and NIRVANA [11]. The slides for the presentation can be found here.

Upcoming Presentations

As part of the ongoing efforts of the group, the following talks/presentations are scheduled in the next two months. For an updated schedule and list of presentations, please check the ETG homepage here.

Sustainable/Green Video Streaming

Given the increasing carbon footprint of streaming services and the climate crisis, many new collaborative efforts have started recently, such as the Greening of Streaming alliance, the Ultra HD Sustainability forum, etc. In addition, research has recently started focusing on how to make video streaming greener and more sustainable. A talk providing an overview of the recent works and progress in this direction is tentatively scheduled for mid-May 2023.

Panel discussion at VQEG Spring Meeting (June 26-30, 2023), Sony Interactive Entertainment HQ, San Mateo, US

During the next face-to-face VQEG meeting in San Mateo there will be an interesting panel discussion on the topic of “Deep Learning in Video Quality and Compression.” The goal is to invite the machine learning experts to VQEG and bring the two groups closer. ETG will organize the panel discussion, and the following four panellists are currently invited to join this event: Zhi Li (Netflix), Ioannis Katsavounidis (Meta), Richard Zhang (Adobe), and Mathias Wien (RWTH Aachen). Before this panel discussion, two talks are tentatively scheduled, the first one on video super-resolution and the second one focussing on learned image compression. 
The meeting will take place in hybrid mode, allowing for participation both in-person and online. For further information about the meeting, please check the details here and, if interested, register for the meeting.

Joining and Other Logistics

While participation in the talks is open to everyone, to get notified about upcoming talks and participate in the discussion, please consider subscribing to the etg@vqeg.org email reflector and joining the Slack channel using this link. The meeting minutes are available here. We are always looking for new ideas to improve. If you have suggestions on topics we should focus on or recommendations for presenters, please reach out to the chairs (Nabajeet and Saman).

References

[1] White paper on MPEG Smart Contracts for Media.
[2] DLT-based Standards for IPR Management in the Media Industry.
[3] DLT-agnostic Media Smart Contracts (ISO/IEC 21000-23).
[4] [2007.03034] Nonlinear Transform Coding.
[5] [1711.06077] The Perception-Distortion Tradeoff.
[6] [2006.09965] High-Fidelity Generative Image Compression.
[7] [2006.06752] An Unsupervised Information-Theoretic Perceptual Quality Metric.
[8] Coin: Compression with implicit neural representations.
[9] COIN++: Neural compression across modalities.
[10] Nerv: Neural representations for videos.
[11] NIRVANA: Neural Implicit Representations of Videos with Adaptive Networks and Autoregressive Patch-wise Modeling.

JPEG Column: 98th JPEG meeting in Sydney, Australia

JPEG explores standardization in event-based imaging

The 98th JPEG meeting was held in Sydney, Australia, from the 16th to the 20th of January 2023. This was a welcome return to face-to-face meetings after a long period of online meetings due to the Covid-19 pandemic. Interestingly, the previous face-to-face meeting of the JPEG Committee was also held in Sydney, in January 2020. The face-to-face 98th JPEG meeting was complemented with online connections to allow the remote participation of those who could not be present.

The recent calls for proposals, such as JPEG Fake Media, JPEG AI and JPEG Pleno Learning Based Point Cloud Coding, resulted in a very dynamic and participative meeting in Sydney, with multiple technical sessions and decisions. Exploration activities such as JPEG DNA and JPEG NFT also produced drafts of future calls for proposals as a consequence of reaching sufficient maturity.

Furthermore, and considering the current trends in machine-based imaging applications, the JPEG Committee initiated an exploration on standardization in event-based imaging.

98th JPEG Meeting first plenary.

The 98th JPEG meeting had the following highlights:

  • New JPEG exploration in event-based imaging;
  • JPEG Fake Media and NFT;
  • JPEG AI;
  •  JPEG Pleno Learning-based Point Cloud Coding improves its Verification Model;
  • JPEG AIC prepares the analysis of the responses to the Call for Contribution;
  • JPEG XL second editions;
  • JPEG Systems;
  • JPEG DNA prepares its call for proposals;
  • JPEG XS 3rd Edition;
  • JPEG 2000 guidelines.

The following summarizes the major achievements during the 98th JPEG meeting.

New JPEG exploration in event-based imaging

The JPEG Committee has started a new exploration activity on event-based imaging named JPEG XE.

Event-based imaging revolves around a new and emerging image modality created by event-based visual sensors. Event-based sensors are the foundation for a new class of cameras that allow the efficient capture of visual information at high speed while requiring low computational cost, a requirement that is common in many machine vision applications. Such sensors are modeled on the mechanisms of the human visual system for detecting scene changes and capturing those changes asynchronously. This means that every pixel works individually to detect scene changes and creates the associated events. If nothing happens, no events are generated. This contrasts with conventional image sensors, where pixels are sampled in a continuous and periodic manner, with images generated regardless of any changes in the scene and a risk of reacting with delay or even missing quick changes.
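As an illustration of this principle (and not of any prospective JPEG XE format), the following sketch emulates event generation in software: each pixel fires an event only when its log-intensity changes by more than a contrast threshold. The threshold value and the frames are made up.

```python
# Toy software emulation of per-pixel, change-driven event generation.
import numpy as np

def events_from_frames(prev_frame: np.ndarray, new_frame: np.ndarray, threshold: float = 0.15):
    """Return (y, x, polarity) tuples for pixels whose brightness changed enough."""
    eps = 1e-3
    diff = np.log(new_frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)   # +1 brighter, -1 darker
    return list(zip(ys.tolist(), xs.tolist(), polarity.tolist()))

rng = np.random.default_rng(1)
frame_a = rng.random((4, 6))
frame_b = frame_a.copy()
frame_b[1, 2] += 0.5            # only one pixel changes, so only one event is produced
print(events_from_frames(frame_a, frame_b))
```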

The JPEG Committee recognizes that this new image modality opens doors to a large number of applications where capture and processing of visual information is needed. Currently, there is no standard format to represent event-based information, and therefore existing and emerging applications are fragmented and lack interoperability. The new JPEG XE activity focuses on establishing a scope and relevant definitions, collecting use cases and their associated requirements, and investigating the role that JPEG can play in the definition of timely standards in the near and long term. To start, an Ad-hoc Group has been established. To stay informed about the activities, please join the event-based imaging Ad-hoc Group mailing list.

JPEG Fake Media and NFT

In April 2022, the JPEG Committee released a Final Call for Proposals on JPEG Fake Media. The scope of JPEG Fake Media is the creation of a standard that can facilitate the secure and reliable annotation of media asset creation and modification. During the 98th meeting, the JPEG Committee finalised the evaluation of the six submitted proposals and initiated the process for establishing a new standard.

The JPEG Committee also continues to explore use cases and requirements related to Non-Fungible Tokens (NFTs). Although the use cases for both topics are very different, there is a clear commonality in terms of requirements and relevant solutions. An updated version of the “Use Cases and Requirements for JPEG NFT” was produced and made publicly available for review and feedback.

To stay informed about the activities, please join the mailing list of the Ad-hoc Group and regularly check the JPEG website for the latest information.

JPEG AI

Following the creation of the JPEG AI Verification Model at the previous 97th JPEG meeting, further discussions took place at the 98th meeting to improve coding efficiency and reduce complexity, especially on the decoder side. The JPEG AI VM has several unique characteristics, such as a parallelizable context model to perform latent prediction, decoupling of prediction and sample reconstruction, and rate adaptation, among others. The JPEG AI VM shows up to 31% compression gain over VVC Intra for natural content. A new JPEG AI test set was released during the 98th meeting. This is a large dataset for the evaluation of the JPEG AI VM containing 50 images, with the objective of tracking the performance improvements at every meeting. The JPEG AI Common Training and Test Conditions were updated to include this new dataset. At this meeting, it was also decided to integrate several changes into the JPEG AI VM, speeding up training, improving performance at high rates, and fixing bugs. A set of core experiments was established at this meeting targeting RD performance and complexity improvements. The JPEG AI VM Software Guidelines were approved, describing the initial setup of the JPEG AI VM repository, how to obtain the JPEG AI dataset, and how to run tests and training. A description of the structure of the JPEG AI VM repository was also made available.

JPEG Pleno Learning-based Point Cloud coding

The JPEG Pleno Point Cloud activity progressed at this meeting with a number of technical submissions for improvements to the VM in the areas of colour coding, artefact processing, and coding speed. In addition, the JPEG Committee released the “Call for Content for JPEG Pleno Point Cloud Coding” to expand the current training and test set with new point clouds representing key use cases. Prior to the 99th JPEG Meeting, JPEG experts will promote the Call for Content as well as investigate possible advancements to the VM in the areas of auto-regressive entropy encoding, sparse tensor convolution, metadata-controlled post-filtering of colour, and a flexible split geometry and colour coding framework for the VM.

JPEG AIC

During the 98th JPEG meeting in Sydney, Australia, Exploration Study 1 on JPEG AIC was established. This exploration study will collect results from three types of previously standardized subjective evaluation methodologies in order to provide an informative reference for the JPEG AIC submissions to the Call for Contributions that are due by April 1st, 2023. Corrections and additions to the JPEG AIC Common Test Conditions were issued in order to reflect the addition of a new codec for testing content generation and a new anchor subjective quality assessment methodology.

The JPEG Committee is working on the continuation of the previous standardization efforts (AIC-1 and AIC-2) and aims at developing a new standard, known as AIC-3. The new standard will focus on the methodologies for quality assessment of images in a range that goes from high quality to near-visually lossless quality, which are not covered by any previous AIC standards.

JPEG XL

The second editions of JPEG XL Part 1 (Core coding system) and Part 2 (File format) have reached the CD stage. These second editions provide clarifications, corrections and editorial improvements that will facilitate independent implementations. Also, an updated version of the JPEG XL White Paper has been published and is freely available through jpeg.org.

JPEG Systems

The JLINK standard (19566-7:2022) is now published by ISO. JLINK specifies an image file format capable of linking multiple media elements, such as image and text in any JPEG file format. It enables enhanced curated experiences of a set of images for education, training, virtual museum tours, travelogs, and similar visually-oriented content.

The JPEG Snack (19566-8) standard is expected to be published in February 2023. JPEG Snack specifies the coding of audio, picture, multimedia and hypermedia information, enabling rich, image-based, short-form animated experiences for social media.

The second edition of JUMBF (JPEG Universal Metadata Box Format, 19566-5) is progressing to IS stage; the second edition brings new capabilities and support for additional types of media.

JPEG DNA

The JPEG Committee has been working on an exploration for coding of images in quaternary representations particularly suitable for image archival on DNA storage. The scope of JPEG DNA is the creation of a standard for efficient coding of images that considers biochemical constraints and offers robustness to noise introduced by the different stages of the storage process that is based on DNA synthetic polymers. During the 98th JPEG meeting, a draft Call for Proposals for JPEG DNA was issued and made public, as a first concrete step towards standardisation. The draft call for proposals for JPEG DNA is complemented by a JPEG DNA Common Test Conditions document which is also made public, describing details about the dataset, operating points, anchors and performance assessment methodologies and metrics that will be used to evaluate anchors and future responses to the Call for Proposals. The final Call for Proposals for JPEG DNA is expected to be released at the conclusion of the 99th JPEG meeting in April 2023, after a set of exploration experiments have validated the procedures outlined in the draft Call for Proposals for JPEG DNA and JPEG DNA Common Test Conditions. The deadline for submission of proposals to the Call for Proposals for JPEG DNA is 2 October 2023 with a pre-registration due by 10 July 2023. The JPEG DNA international standard is expected to be published by early 2025.

JPEG XS

The JPEG Committee continued with the definition of JPEG XS 3rd edition. The primary goal of the 3rd edition is to deliver the same image quality as the 2nd edition, but with half of the required bandwidth. The Committee Draft for Part 1 (Core coding system) will proceed to ISO ballot. This means that the standard is now technically defined, and all the new coding tools are known. Most notably, Part 1 adds a temporal decorrelation coding mode to further improve the coding efficiency, while keeping the low-latency and low-complexity core aspects of JPEG XS. This new coding tool is of extreme importance for remote desktop applications and screen sharing. In addition, mathematically lossless coding can now support up to 16 bits precision (up from 12 bits). For Part 2 (Profiles and buffer models), the committee created a second Working Draft and issued further core experiments to proceed and support this work. Meanwhile, ISO approved the creation of a new edition of Part 3 (Transport and container formats) that is needed to address the changes of Part 1 and Part 2.

JPEG 2000

The JPEG committee publishes two sets of guidelines for implementers of JPEG 2000, available on jpeg.org.

The first describes an algorithm for controlling JPEG 2000 coding quality using a single number (Qfactor) between 1 (worst quality) and 100 (best quality), as is commonly done with JPEG.

The second explains how to create, parse and use HTJ2K placeholder passes and HT Sets. These features are an integral part of HTJ2K and enable mathematically lossless transcoding between HT- and J2K-based codestreams, among other applications.

Final Quote

“The interest in event-based imaging has been rising with several products designed and offered by the industry. The JPEG Committee believes in interoperable solutions and has initiated an exploration for standardization of event-based imaging in order to accelerate creation of an ecosystem.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

Upcoming JPEG meetings are planned as follows:

  • No 99 will be online from 24-28 April 2023
  • No 100 will be in Covilhã, Portugal from 17-21 July 2023

Sustainability vs. Quality of Experience: Striking the Right Balance for Video Streaming

The exponential growth in internet data traffic, driven by the widespread use of video streaming applications, has resulted in increased energy consumption and carbon emissions. This outcome is primarily due to higher-resolution or higher-framerate content and the ability to watch videos on various end-devices. However, efforts to reduce energy consumption in video streaming services may have unintended consequences on users’ Quality of Experience (QoE). This column delves into the intricate relationship between QoE and energy consumption, considering the impact of different bit rates on end-devices. We also consider other factors to provide a more comprehensive understanding of whether these end-devices have a significant environmental impact. It is essential to carefully weigh the trade-offs between QoE and energy consumption to make informed decisions and develop sustainable practices in video streaming services.

Energy Consumption for Video Streaming

In the past few years, we have seen a remarkable expansion in how online content is delivered. According to Sandvine’s 2023 Global Internet Phenomena Report [1], video usage on the Internet increased by 24% in 2022 and now accounts for 65% of all Internet traffic. This surge in video usage is mainly due to the growing popularity of streaming video services. Videos have become an increasingly popular form of online content, capturing a significant portion of internet users’ attention and shaping how we consume information and entertainment online. The rising quality expectations of end-users have therefore necessitated research into and implementation of video streaming management approaches that consider the Quality of Experience (QoE) [2]. The idea is to develop applications that can work within the energy and resource limits of end-devices while still delivering the Quality of Service (QoS) needed for smooth video viewing and an improved user experience (QoE). Even though video streaming services are advancing quickly, energy consumption remains a significant issue, raising many concerns about its impact and the urgent need to boost energy efficiency [14].

The literature identifies four main elements when analysing the energy consumption of video streaming: the data centres, the data transmission networks, the end-devices, and consumer behaviour [3]. In this regard, in [4], the authors present a comprehensive review of existing literature on the energy consumption of online video streaming services. They then outline the potential actions that can be taken by both service providers and consumers to promote sustainable video streaming, drawing from the literature studies discussed. Their summary of possible actions for sustainable video streaming, from both the provider’s and the consumer’s perspective, is organized along the following segments, each with some possible solutions:

  • Data center: CDN (Content Delivery Network) can be utilized to offload contents/applications to the edge from the provider’s side and choose providers that prioritize sustainability from the consumer’s side.
  • Data transmission network: Data compression/encoding algorithms from the provider’s side and no autoplay from the consumer’s side.
  • End-Device: Produce energy-efficient devices from the provider’s side and prefer small-size (mobile) devices from the consumer’s side.
  • Consumer behaviour: Reduce the number of subscribers from the provider’s side and prefer watching videos with other people rather than alone from the consumer’s side.

Finally, they noted that the end-device and consumer behaviour are the primary contributors to energy costs in the video streaming process, pointing to actions such as reducing video resolution and using smaller devices. However, taking such actions may have a potential downside, as they can negatively impact the QoE due to their effect on video quality. In [5], the authors found that by sacrificing the maximum QoE and aiming for good quality instead (e.g., a MOS score of 4=Good instead of a MOS score of 5=Excellent), significant energy savings can be achieved in video-conferencing services. This is possible by using lower video bitrates, since higher bitrates result in higher energy consumption, as per their logarithmic QoE model. Building on this research, in [4], the authors propose identifying an acceptable level of QoE, rather than striving for maximum QoE, as a potential solution to reduce energy consumption while still meeting consumer satisfaction. They conducted a crowdsourcing survey to gather real consumer opinions on their willingness to save energy while streaming online videos. They then analysed the survey results to understand how willing people are to lower video streaming quality in order to achieve energy savings.

Green Video Streaming: The Trade-Off Between QoE and Energy Consumption

To provide a trade-off between QoE and energy consumption, we looked at the connection between the video bitrate at standard resolutions, electricity usage, and perceived QoE for a video streaming service on four different devices (smartphone, tablet, laptop/PC, and smart TV), as taken from [4].

They calculated the energy consumption of streaming on devices using the model provided in [6]: Q_i = t_i * (P_i + R_i * ρ). In this equation, Q_i represents the electricity consumption (in kWh) of the i-th device, t_i denotes the streaming duration (in hours per week) for the i-th device, P_i represents the power load (in kW) of the i-th device, R_i signifies the data traffic (in GB/h) for a specific bitrate, and ρ = 0.1 kWh/GB represents the electricity intensity of data traffic.
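A minimal sketch of this energy model, assuming placeholder device parameters rather than the values used in [4], could look as follows.

```python
# Sketch of the device energy model Q_i = t_i * (P_i + R_i * rho) from [6].
# Device power loads and data rates below are placeholder values, not figures from [4].
RHO = 0.1  # electricity intensity of data traffic, kWh per GB

def streaming_energy_kwh(hours_per_week: float, device_power_kw: float, data_rate_gb_per_h: float) -> float:
    """Weekly electricity consumption of one device for video streaming."""
    return hours_per_week * (device_power_kw + data_rate_gb_per_h * RHO)

# Hypothetical example: 10 h of streaming per week at the same bitrate
print(streaming_energy_kwh(10, 0.100, 1.5))  # smart TV: ~0.1 kW load, ~1.5 GB/h
print(streaming_energy_kwh(10, 0.008, 1.5))  # tablet:   ~0.008 kW load, same data rate
```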

Then, to estimate the perceived QoE based on the video bitrate, the authors employed a QoE model from [7]: QoE = a * br^b + c, where br represents the bitrate, and a, b, and c are the regression coefficients calculated for specific resolutions.

After taking this into account, we can establish a link between the QoE model, energy consumption, and the perceived QoE associated with the video bitrate across various end-devices. Therefore, we implemented the green QoE model from [8] to provide a trade-off between the perceived QoE and the calculated energy consumption from above in the following way: f_γ(x) = 4/(log(x’_5) - log(x_1)) * log(x) + (log(x’_5) - 5*log(x_1))/(log(x’_5) - log(x_1)). This equation represents the mapping function between video bitrate and Mean Opinion Scores (MOS), anchored at the minimum bitrate x_1 corresponding to MOS 1 and the maximum bitrate x_5 corresponding to MOS 5. Moreover, the factor γ, representing the greenness of a user, enters via the maximum bitrate x’_5 = x_5/γ, i.e., the bitrate at which a green user already reaches a MOS score of 5.
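The following sketch implements this bitrate-to-MOS mapping, including the green-user variant; the anchor bitrates x_1 and x_5 are hypothetical and not the per-device values from [4].

```python
# Sketch of the logarithmic MOS mapping and its "green user" variant f_gamma from [8].
import math

def mos_green(bitrate_kbps: float, x1: float, x5: float, gamma: float = 1.0) -> float:
    """Logarithmic MOS model; gamma > 1 lets a 'green' user reach MOS 5 already at x5 / gamma."""
    x5_eff = x5 / gamma                      # maximum bitrate needed for MOS 5
    a = 4.0 / (math.log(x5_eff) - math.log(x1))
    b = (math.log(x5_eff) - 5.0 * math.log(x1)) / (math.log(x5_eff) - math.log(x1))
    mos = a * math.log(bitrate_kbps) + b
    return min(5.0, max(1.0, mos))           # clip to the MOS scale

x1, x5 = 200.0, 16000.0                      # hypothetical anchor bitrates for a smart TV
for br in (1000, 4000, 8000, 16000):
    print(br, round(mos_green(br, x1, x5), 2), round(mos_green(br, x1, x5, gamma=2.0), 2))
```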

The model focuses on the concept of a “green user,” who considers the energy consumption aspect in their overall QoE evaluations. Thus, a green user might rate their QoE slightly lower in order to reduce their carbon footprint compared to a high-quality (HQ) user (or “non-green” user) who prioritizes QoE without considering energy consumption.

The numerical results for the energy consumption (in kWh) and the MOS scores depending on the video bitrate can be simplified with linear and logarithmic regressions, respectively. In Figure 1, the graph depicts a linear regression analysis conducted to examine the relationship between energy consumption (kWh) and bitrate (kbps). The y-axis represents energy consumption while the x-axis represents bitrate (kbps). The graph displays a straight-line trend that starts at 1.6 kWh and extends up to 3.5 kWh as the bitrate increases. The linear fitting function used for the analysis is formulated as: kWh = f(bitrate) = a * bitrate + c, where ‘a’ represents the slope and ‘c’ represents the y-intercept of the line.
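For illustration, such a linear fit can be reproduced with numpy.polyfit on placeholder (bitrate, energy) pairs; the actual coefficients stem from the measurement data in [4], which is not reproduced here.

```python
# Sketch of the linear fit kWh = a * bitrate + c described for Figure 1,
# using made-up data points in the 1.6-3.5 kWh range mentioned above.
import numpy as np

bitrate_kbps = np.array([1500, 3000, 6000, 10000, 16000], dtype=float)   # placeholder
energy_kwh   = np.array([1.6, 1.9, 2.4, 2.9, 3.5], dtype=float)          # placeholder

a, c = np.polyfit(bitrate_kbps, energy_kwh, deg=1)   # slope and intercept
print(f"kWh = {a:.6f} * bitrate + {c:.3f}")
print("predicted at 8000 kbps:", round(a * 8000 + c, 2), "kWh")
```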

Figure 1 visually illustrates how energy consumption tends to increase with higher bitrates, as indicated by the positive slope of the linear regression line in Figure 1. One notable observation is that as video bitrates increase, the electricity consumption of end-devices also tends to increase. This can be attributed to the larger amount of data traffic generated by higher-resolution video content, which requires higher bitrates for transmission. Consequently, smart TVs are likely to consume more energy compared to other devices. This finding is consistent with the results obtained from the linear regression model, as described in [4], further validating the relationship between bitrate and energy consumption.

As illustrated in Figure 2, the relationship between MOS and video bitrate (kbps) follows a logarithmic pattern. Therefore, we can use a straightforward QoE model to estimate the MOS if information about the video bitrate is available. This can be achieved by utilizing a logarithmic regression model MOS(x), where MOS = f(x) = a * log(x) + c, with x representing the video bitrate in Mbps, and a and c being coefficients, as provided in [9]. The MOS and video bitrate (kbps) values in [4] are then applied to the above-mentioned green QoE model equation, which extends this logarithmic regression model [8]. This relationship allows determining the green-user QoE model, and we exemplarily show the green-user QoE model for the smart TV (using γ=2 in f_γ(x)).

According to Figure 2, users are categorized into two groups: those who prioritize high-quality (HQ) video regardless of energy consumption, and green users who prioritize energy efficiency while still being satisfied with slightly lower video quality. It can be observed that the MOS value changes with video quality faster on smart TVs than on other end-devices, which is evident from the steeper curve in the smart TV section. On the other hand, the curve for tablets shows that changes in bitrate have a milder impact on MOS values. This outcome suggests that video streaming on smaller screens, such as tablets or laptops, may contribute less to the perception of quality changes. Considering that those small-screen devices consume less energy than larger-screen devices, it may be preferable to use lower-resolution videos instead of high-resolution ones. Comparing laptops and tablets, it can be seen that low-resolution video streaming on laptops resulted in lower MOS scores than on the tablet. From this result, it can be inferred that the choice of end-device and user behaviour plays a significant role in energy savings. Figure 2 indicates that the MOS values for the green user of a smart TV are comparable to the MOS values of an HQ user using a laptop.

Concerning this outcome, in [10], the authors presented the results of a subjective assessment aimed at investigating how different factors, such as video resolution, luminance, and end devices (TV, laptop, and smartphone), impact the QoE and energy consumption of video streaming services. The study found that, in certain conditions (such as dark or bright environments, low device backlight luminance, or small-screen devices), users may need to strike a balance between acceptable QoE and sustainable (green) choices, as consuming more energy (e.g., by streaming higher-quality videos) may not significantly enhance the QoE.

Therefore, Figure 3 plots the trade-off relationship between energy consumption (kWh) and MOS for the end devices (smart TV, laptop, and tablet). Here, we differentiate between the HQ user and the green user, which reveals some interesting results. First, a MOS score of 4 leads to comparable energy consumption results for green and HQ users; the relative differences are rather small. However, aiming for the best quality (MOS 5) leads to significant differences. Furthermore, the device type has a significant impact on energy consumption. Even for green users, who rate lower bitrates with higher MOS scores than HQ users, the energy consumption of the smart TV is much higher than for any quality (i.e., bitrate) for laptop and tablet users. Thus, device type and user behaviour are essential to strike the right balance between QoE and energy consumption.

Discussions and Future Research

Meeting the QoE expectations of end-users is essential to fulfilling the requirements of video streaming services. As users are the primary viewers of streaming videos in most real-world scenarios, subjective QoE assessment [11] provides a direct and dependable means to evaluate the perceptual quality of video streaming. Furthermore, there is a growing need for objective QoE assessment models, such as those provided in [12][13]. However, many studies have focused on investigating the QoE obtained through subjective and objective models and have overlooked the consideration of energy consumption in video streaming.

Therefore, in the previous section, we have discussed how the different elements within the video streaming ecosystem play a role in consuming energy and emitting CO2. The findings pave the way for an objective answer to the question of an appropriate video bitrate for viewing, considering both QoE and sustainability, which can be further explored in future research.

It is evident that addressing energy consumption and emissions is crucial for the future of video streaming systems, while ensuring that end-users’ QoE is not compromised poses a significant and ongoing challenge. Thus, potential solutions to limit energy consumption while still satisfying the user include streaming videos on smaller-screen devices and watching lower-resolution videos that offer sufficient quality instead of the highest-resolution ones. Here, the importance of user behaviour in limiting energy consumption can be highlighted. Additionally, trade-off models can be developed using the green QoE model (especially for smart TVs) by identifying ideal bitrate values that balance energy savings and user satisfaction in terms of QoE.

Delving deeper into the dynamics of the video streaming ecosystem, it becomes increasingly clear that energy consumption and emissions are critical concerns that must be addressed for the sustainable future of video streaming systems. The environmental impact of video streaming, particularly in terms of carbon emissions, cannot be understated. With the growing awareness of the urgent need to combat climate change, mitigating the environmental footprint of video streaming has become a pressing priority.

As video streaming technologies evolve, optimizing energy-efficient approaches without compromising users’ QoE is a complex task. End-users, who expect seamless and high-quality video streaming experiences, should not be deprived of their QoE while addressing the energy and emissions concerns. The outcome opens a novel door for an objective answer to the question of what constitutes an appropriate optimal video bitrate for viewing that takes into account both QoE and sustainability concerns.

Future research in this area is crucial to explore innovative techniques and strategies that can effectively reduce the energy consumption and carbon emissions of video streaming systems without sacrificing the QoE. Additionally, collaborative efforts among stakeholders, including researchers, industry practitioners, policymakers, and end-users, are essential in devising sustainable video streaming solutions that consider both environmental and user experience factors [14].

In conclusion, the discussions on the relationship between energy consumption, emissions, and QoE in video streaming systems emphasize the need for continued research and innovation to achieve a sustainable balance between environmental sustainability and user satisfaction.

References

  • [1] Sandvine. The Global Internet Phenomena Report. January 2023. Retrieved April 24, 2023
  • [2] M. Seufert, S. Egger, M. Slanina, T. Zinner, T. Hoßfeld and P. Tran-Gia, “A Survey on Quality of Experience of HTTP Adaptive Streaming,” in IEEE Communications Surveys & Tutorials, vol. 17, no. 1, pp. 469-492, Firstquarter 2015, doi: 10.1109/COMST.2014.2360940., 2015.
  • [3] Reinhard Madlener, Siamak Sheykhha, Wolfgang Briglauer,”The electricity- and CO2-saving potentials offered by regulation of European video-streaming services,” Energy Policy,vol. 161, p. 112716, 2022.
  • [4] G. Bingöl, S. Porcu, A. Floris and L. Atzori, “An Analysis of the Trade-off between Sustainability,” in IEEE ICC Workshop-GreenNet, Rome, 2023.
  • [5] T. Hoßfeld, M. Varela, L. Skorin-Kapov, P. E. Heegaard, “What is the trade-off between CO2 emission and video-conferencing QoE?,” ACM SIGMM Records, 2022.
  • [6] P. Suski, J. Pohl, and V. Frick, “All you can stream: Investigating the role of user behavior for greenhouse gas intensity of video streaming,” in Proc. of the 7th Int. Conf. on ICT for Sustainability, 2020, pp. 128–138.
  • [7] M. Mu, M. Broadbent, A. Farshad, N. Hart, D. Hutchison, Q. Ni, and N. Race, “A Scalable User Fairness Model for Adaptive Video Streaming Over SDN-Assisted Future Networks,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 8, p. 2168–2184, 2016.
  • [8] T. Hossfeld, M. Varela, L. Skorin-Kapov and P. E. Heegaard, “A Greener Experience: Trade-offs between QoE and CO2 Emissions in Today’s and 6G Networks,” IEEE Communications Magazine, pp. 1-7, 2023.
  • [9] J. P. López, D. Martín, D. Jiménez and J. M. Menéndez, “Prediction and Modeling for No-Reference Video Quality Assessment Based on Machine Learning,” in 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), IEEE, pp. 56-63, Las Palmas de Gran Canaria, Spain, 2018.
  • [10] G. Bingöl, A. Floris, S. Porcu, C. Timmerer and L. Atzori, “Are Quality and Sustainability Reconcilable? A Subjective Study on Video QoE, Luminance and Resolution,” in 15th International Conference on Quality of Multimedia Experience (QoMEX), Gent, Belgium, 2023.
  • [11] G. Bingol, L. Serreli, S. Porcu, A. Floris, L. Atzori, “The Impact of Network Impairments on the QoE of WebRTC applications: A Subjective study,” in 14th International Conference on Quality of Multimedia Experience (QoMEX), Lippstadt, Germany, 2022.
  • [12] D. Z. Rodríguez, R. L. Rosa, E. C. Alfaia, J. I. Abrahão and G. Bressan, “Video quality metric for streaming service using DASH standard,” IEEE Trans. Broadcasting, vol. 62, no. 3, pp. 628-639, Sep. 2016.
  • [13] T. Hoßfeld, M. Seufert, C. Sieber and T. Zinner, “Assessing effect sizes of influence factors towards a QoE model for HTTP adaptive streaming,” in 6th Int. Workshop Qual. Multimedia Exper. (QoMEX), Sep. 2014.
  • [14] S. Afzal, R. Prodan, C. Timmerer, “Green Video Streaming: Challenges and Opportunities.” ACM SIGMultimedia Records, Jan. 2023.

Spring School on Social XR organized by CWI

ACM SIGMM co-sponsored the Spring School on Social XR, organized by the Distributed and Interactive Systems group (DIS) at CWI in Amsterdam. The event took place on March 13th – 17th 2023 and attracted 33 students from different disciplines (technology, social sciences, and humanities). The program included 18 lectures, 4 of them open, by 20 instructors. The event was co-sponsored by the ACM Special Interest Group on Multimedia (ACM SIGMM), which made student grants available, and The Netherlands Institute for Sound and Vision (https://www.beeldengeluid.nl/en). The event was part of the recently started research semester programmes of CWI.

Students and organisers of the Spring School on Social XR (March 13th – 17th 2023, Amsterdam)

“The future of media communication is immersive, and will empower sectors such as cultural heritage, education, manufacturing, and provide a climate-neutral alternative to travelling in the European Green Deal”. With such a vision in mind, the organizing committee created a holistic program around the research topic of Social XR. The program included keynotes and workshops, where prominent scientists in the field shared their knowledge with students and triggered meaningful conversations and exchanges.

The program included topics such as the capturing and modelling of realistic avatars and their behavior, coding and transmission techniques for volumetric video content, ethics for the design and development of responsible social XR experiences, novel rendering and interaction paradigms, and human factors and evaluation of experiences. Together, they provided a holistic perspective, helping participants to better understand the area and to initiate a network of collaboration to overcome the limitations of current real-time conferencing systems.

Apart from science, there is always time for fun, so a number of social events took place, including a visit to the recently renovated Museum of Sound and Vision!

Museum of Sound and Vision

The spring school is part of the semester program organized by the DIS group of CWI, which was initiated in May 2022 with the Symposium on human-centered multimedia systems: a workshop and seminar to celebrate the inaugural lecture “Human-Centered Multimedia: Making Remote Togetherness Possible” by Prof. Pablo Cesar.

The list of talks was:

  • “Discovering Horizon Europe Projects: TRANSMIXR – Ignite the Immersive Media Sector by Enabling New Narrative Visions” by Niall Murray
  • “Understanding Social Touch in XR” by Gijs Huisman 
  • “Designing ‘Weird’ Social Experiences for XR” by Katherine Isbister
  •  “Virtual Social Interaction and its Applications in Health and Healthcare” by Sylvia Xueni Pan
  • “How to Create Virtual Humans and Avatars for Social XR?” by Zerrin Yumak
  • “Navigation and View Management for Interactive 360 Streaming Systems” by Klara Nahrstedt
  • “Immersive Video Delivery: From Omnidirectional Video to Holography” by Christian Timmerer
  • “Movement Remapping as a Solution to Interaction” by Mar Gonzalez Franco
  • “Perceptual Quality Assessment of Point Clouds” by Evangelos Alexiou
  • “Design, Develop and Evaluate Social XR Experiences” by Jie Li
  • “The Psychology of Social Presence” by Tilo Hartmann
  • “Towards a Responsible Metaverse” by Mariëtte van Huijstee and, Stefan Roolvink
  • “Using Empathic Computing to Create Social XR Experiences” by Mark Billinghurst
  • “Pre & Post for Volumetric Video” by Natasja Paulssen
  • “A Journey to Volumetric Video – the Past, the Present and the Future” by Oliver Schreer
  • “eXtended Reality and Passengers of the Future” by Stephen Brewster
  • “Enabling Interactive Networked Virtual Reality Experiences” by Maria Torres Vega
  • “An Overview on Standardization for Social XR” by Pablo Perez and Jesús Gutiérrez