JPEG Column: 86th JPEG Meeting in Sydney, Australia

The 86th JPEG meeting was held in Sydney, Australia.

Among the different activities that took place, the JPEG Committee issued a Call for Evidence on learning-based image coding solutions. This call results from the success of the exploration studies recently carried out by the JPEG Committee, and honours JPEG's pioneering work in issuing the first image coding standard more than 25 years ago.

In addition, a First Call for Evidence on Point Cloud Coding was issued in the framework of JPEG Pleno. Furthermore, an updated version of the JPEG Pleno reference software and a JPEG XL open source implementation have been released, while JPEG XS continues the development of raw-Bayer image sensor compression.

JPEG Plenary at the 86th meeting.

The 86th JPEG meeting had the following highlights:

  • JPEG AI issues a call for evidence on machine learning based image coding solutions
  • JPEG Pleno issues call for evidence on Point Cloud coding
  • JPEG XL verification tests reveal competitive performance with commonly used image coding solutions
  • JPEG Systems submitted final texts for Privacy & Security
  • JPEG XS announces new coding tools optimised for compression of raw-Bayer image sensor data

JPEG AI

The JPEG Committee launched a learning-based image coding activity, also referred to as JPEG AI, more than a year ago. This activity aims to find evidence for image coding technologies that offer substantially better compression efficiency than conventional approaches while relying on models trained on large image databases.

A Call for Evidence (CfE) was issued as an outcome of the 86th JPEG meeting in Sydney, Australia, as a first formal step toward the standardisation of such approaches to image compression. The CfE is organised in coordination with the IEEE MMSP 2020 Grand Challenge on Learning-based Image Coding and will use the same content, evaluation methodologies and deadlines.

JPEG Pleno

JPEG Pleno is working toward the integration of various modalities of plenoptic content under a single framework and in a seamless manner. Efficient and powerful point cloud representation is a key feature within this vision.  Point cloud data supports a wide range of applications including computer-aided manufacturing, entertainment, cultural heritage preservation, scientific research and advanced sensing and analysis. During the 86th JPEG Meeting, the JPEG Committee released a First Call for Evidence on JPEG Pleno Point Cloud Coding to be integrated in the JPEG Pleno framework.  This Call for Evidence focuses specifically on point cloud coding solutions that support scalability and random access of decoded point clouds.

Furthermore, a Reference Software implementation of the JPEG Pleno file format (Part 1) and light field coding technology (Part 2) has been made publicly available as open source on the JPEG Gitlab repository (https://gitlab.com/wg1). The JPEG Pleno Reference Software is planned to become an International Standard as Part 4 of JPEG Pleno by the end of 2020.

JPEG XL

The JPEG XL Image Coding System (ISO/IEC 18181) has produced an open source reference implementation available on the JPEG Gitlab repository (https://gitlab.com/wg1/jpeg-xl). The software is available under the Apache 2 licence, which includes a royalty-free patent grant. Speed tests indicate that the multithreaded encoder and decoder outperform libjpeg-turbo.

Independent subjective and objective evaluation experiments have indicated competitive performance with commonly used image coding solutions while offering new functionalities such as lossless transcoding from legacy JPEG format to JPEG XL. The standardisation process has reached the Draft International Standard stage.

JPEG exploration into Media Blockchain

Fake news, copyright violations, media forensics, privacy and security are emerging challenges in digital media. JPEG has determined that blockchain and distributed ledger technologies (DLT) have great potential as a technology component to address these challenges in transparent and trustable media transactions. However, blockchain and DLT need to be integrated efficiently with a widely adopted standard to ensure broad interoperability of protected images. Therefore, the JPEG committee has organised several workshops to engage with the industry and help to identify use cases and requirements that will drive the standardisation process.

During its Sydney meeting, the committee organised an Open Discussion Session on Media Blockchain and invited local stakeholders to take part in an interactive discussion. The discussion focused on media blockchain and related application areas including, media and document provenance, smart contracts, governance, legal understanding and privacy. The presentations of this session are available on the JPEG website. To keep informed and to get involved in this activity, interested parties are invited to register to the ad hoc group’s mailing list.

JPEG Systems

JPEG Systems & Integration submitted final texts for ISO/IEC 19566-4 (Privacy & Security), ISO/IEC 24800-2 (JPSearch), and ISO/IEC 15444-16 2nd edition (JPEG 2000-in-HEIF) for publication.  Amendments to add new capabilities for JUMBF and JPEG 360 reached Committee Draft stage and will be reviewed and balloted by national bodies.

The JPEG Privacy & Security release is timely, as consumers are increasingly aware of and concerned about the need to protect privacy in imaging applications. JPEG 2000-in-HEIF enables embedding JPEG 2000 images in the HEIF file format. The updated JUMBF provides a more generic means to embed images and other media within JPEG files to enable richer image experiences. The updated JPEG 360 adds stereoscopic 360 images, as well as a method to accelerate the rendering of a region of interest within an image in order to reduce the latency experienced by users. For JPEG Systems & Integration's JLINK, which elaborates the relationships of the embedded media within the file, the committee created updated use cases to refine the requirements and continued technical discussions on implementation.

JPEG XS

The JPEG committee is pleased to announce the specification of new coding tools optimised for the compression of raw-Bayer image sensor data. The JPEG XS project aims at the standardisation of a visually lossless, low-latency and lightweight compression scheme that can be used as a mezzanine codec in various markets. Video transport over professional video links, real-time video storage in and outside of cameras, and data compression onboard autonomous cars are among the targeted use cases for raw-Bayer image sensor compression. An amendment of the Core Coding System, together with new profiles targeting raw-Bayer image applications, is under development and expected to be published by the end of 2020.
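To illustrate why raw-Bayer sensor data calls for dedicated coding tools: each pixel of the mosaic carries only one colour sample, so codecs typically separate the mosaic into per-colour planes before compression. The following minimal Python sketch shows this plane separation for an RGGB pattern; the pattern layout, function names and values are illustrative assumptions, not part of the JPEG XS specification.

```python
# Toy RGGB plane separation, a common pre-processing step before
# compressing raw-Bayer sensor data. Purely illustrative; JPEG XS
# defines its own component handling.

bayer = [
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
]

def split_rggb(mosaic):
    """Split an RGGB mosaic into four colour planes (hypothetical helper)."""
    planes = {"R": [], "G1": [], "G2": [], "B": []}
    for y, row in enumerate(mosaic):
        for x, sample in enumerate(row):
            if y % 2 == 0 and x % 2 == 0:
                planes["R"].append(sample)      # red sites: even row, even col
            elif y % 2 == 0:
                planes["G1"].append(sample)     # green sites on red rows
            elif x % 2 == 0:
                planes["G2"].append(sample)     # green sites on blue rows
            else:
                planes["B"].append(sample)      # blue sites: odd row, odd col
    return planes

planes = split_rggb(bayer)
```

Each resulting plane is spatially coherent, which is what makes conventional transform-based coding applicable to mosaic data.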

Final Quote

“The efforts to find new and improved solutions in image compression have led JPEG to explore new opportunities relying on machine learning for coding. After rigorous analysis in form of explorations during the last 12 months, JPEG believes that it is time to formally initiate a standardisation process, and consequently, has issued a call for evidence for image compression based on machine learning.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

86th JPEG meeting social event in Sydney, Australia.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JPEG, JPEG 2000, JPEG XR, JPSearch, JPEG XT and more recently, the JPEG XS, JPEG Systems, JPEG Pleno and JPEG XL families of imaging standards.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup. If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

Future JPEG meetings are planned as follows:

  • No 87, Erlangen, Germany, April 25 to 30, 2020 (Cancelled because of Covid-19 outbreak; Replaced by online meetings.)
  • No 88, Geneva, Switzerland, July 4 to 10, 2020

MPEG Column: 129th MPEG Meeting in Brussels, Belgium

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The 129th MPEG meeting concluded on January 17, 2020 in Brussels, Belgium with the following topics:

  • Coded representation of immersive media – WG11 promotes Network-Based Media Processing (NBMP) to the final stage
  • Coded representation of immersive media – Publication of the Technical Report on Architectures for Immersive Media
  • Genomic information representation – WG11 receives answers to the joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5
  • Open font format – WG11 promotes Amendment of Open Font Format to the final stage
  • High efficiency coding and media delivery in heterogeneous environments – WG11 progresses Baseline Profile for MPEG-H 3D Audio
  • Multimedia content description interface – Conformance and Reference Software for Compact Descriptors for Video Analysis promoted to the final stage

Additional Important Activities at the 129th WG 11 (MPEG) meeting

The 129th WG 11 (MPEG) meeting was attended by more than 500 experts from 25 countries working on important activities including (i) a scene description for MPEG media, (ii) the integration of Video-based Point Cloud Compression (V-PCC) and Immersive Video (MIV), (iii) Video Coding for Machines (VCM), and (iv) a draft call for proposals for MPEG-I Audio among others.

The corresponding press release of the 129th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/129. This report focuses on network-based media processing (NBMP), architectures for immersive media, compact descriptors for video analysis (CDVA), and an update on adaptive streaming formats (i.e., DASH and CMAF).

MPEG picture at Friday plenary; © Rob Koenen (Tiledmedia).

Coded representation of immersive media – WG11 promotes Network-Based Media Processing (NBMP) to the final stage

At its 129th meeting, MPEG promoted ISO/IEC 23090-8, Network-Based Media Processing (NBMP), to Final Draft International Standard (FDIS). The FDIS stage is the final vote before a document is officially adopted as an International Standard (IS). During the FDIS vote, national bodies are only allowed to cast a Yes/No vote and can no longer request technical changes. However, project editors are able to fix typos and make other necessary editorial improvements.

What is NBMP? The NBMP standard defines a framework that allows content and service providers to describe, deploy, and control media processing for their content in the cloud by using libraries of pre-built third-party functions. The framework includes an abstraction layer to be deployed on top of existing commercial cloud platforms and is designed to integrate with 5G core and edge computing. The NBMP workflow manager is another essential part of the framework, enabling the composition of multiple media processing tasks to process incoming media and metadata from a media source and to produce processed media streams and metadata that are ready for distribution to media sinks.
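The workflow-manager idea described above can be sketched in a few lines: a repository of pre-built functions, and a manager that chains them between a media source and a media sink. This is a deliberately simplified Python illustration of the concept; the function names, dictionary structure and chaining logic are assumptions for this sketch and do not follow the normative NBMP workflow description schema or APIs.

```python
# Minimal sketch of the NBMP concept: a workflow manager composes
# pre-built third-party processing functions into a pipeline between a
# media source and a media sink. All names here are illustrative.

def transcode(media):
    """Hypothetical processing task: change the codec of the media."""
    return {**media, "codec": "h264"}

def watermark(media):
    """Hypothetical processing task: mark the media as watermarked."""
    return {**media, "watermarked": True}

# A toy "function repository" of reusable building blocks.
FUNCTION_REPOSITORY = {"transcode": transcode, "watermark": watermark}

def run_workflow(task_names, media):
    """Workflow manager: chain tasks so each consumes the previous output."""
    for name in task_names:
        media = FUNCTION_REPOSITORY[name](media)
    return media

source_media = {"id": "clip-1", "codec": "prores"}   # from the media source
sink_media = run_workflow(["transcode", "watermark"], source_media)
```

The point of the sketch is the composition model: the workflow manager only knows task names and ordering, while the actual processing lives in independently deployable functions.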

Why NBMP? With the increasing complexity and sophistication of media services and the incurred media processing, offloading complex media processing operations to the cloud/network is becoming critically important in order to keep receiver hardware simple and power consumption low.

Research aspects: NBMP reminds me a bit of what was done in the past in MPEG-21, specifically Digital Item Adaptation (DIA) and Digital Item Processing (DIP). The main difference is that MPEG now targets APIs rather than pure metadata formats, which is a step in the right direction as APIs can be implemented and used right away. NBMP will be particularly interesting in the context of new networking approaches including, but not limited to, software-defined networking (SDN), information-centric networking (ICN), mobile edge computing (MEC), fog computing, and related aspects in the context of 5G.

Coded representation of immersive media – Publication of the Technical Report on Architectures for Immersive Media

At its 129th meeting, WG11 (MPEG) published an updated version of its technical report on architectures for immersive media. This technical report, which is the first part of the ISO/IEC 23090 (MPEG-I) suite of standards, introduces the different phases of MPEG-I standardization and gives an overview of the parts of the MPEG-I suite. It also documents use cases and defines architectural views on the compression and coded representation of elements of immersive experiences. Furthermore, it describes the coded representation of immersive media and the delivery of a full, individualized immersive media experience. MPEG-I enables scalable and efficient individual delivery as well as mass distribution while adjusting to the rendering capabilities of consumption devices. Finally, this technical report breaks down the elements that contribute to a fully immersive media experience and assigns quality requirements as well as quality and design objectives for those elements.

Research aspects: This technical report provides a kind of reference architecture for immersive media, which may help identify research areas and research questions to be addressed in this context.

Multimedia content description interface – Conformance and Reference Software for Compact Descriptors for Video Analysis promoted to the final stage

Managing and organizing the quickly increasing volume of video content is a challenge for many industry sectors, such as media and entertainment or surveillance. One example task is scalable instance search, i.e., finding content containing a specific object instance or location in a very large video database. This requires video descriptors that can be efficiently extracted, stored, and matched. Standardization enables extracting interoperable descriptors on different devices and using software from different providers so that only the compact descriptors instead of the much larger source videos can be exchanged for matching or querying. ISO/IEC 15938-15:2019 – the MPEG Compact Descriptors for Video Analysis (CDVA) standard – defines such descriptors. CDVA includes highly efficient descriptor components using features resulting from a Deep Neural Network (DNN) and uses predictive coding over video segments. The standard is being adopted by the industry. At its 129th meeting, WG11 (MPEG) has finalized the conformance guidelines and reference software. The software provides the functionality to extract, match, and index CDVA descriptors. For easy deployment, the reference software is also provided as Docker containers.
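The instance-search task described above boils down to comparing compact descriptor vectors instead of the videos themselves. The toy Python sketch below shows that idea with cosine similarity over small hand-made vectors; the vectors, database, threshold and function names are all illustrative assumptions and bear no relation to the actual CDVA descriptor format or matching pipeline.

```python
import math

# Toy illustration of descriptor-based instance search: only compact
# vectors are exchanged and compared, never the source videos.
# Descriptors and threshold are made up for this sketch; real CDVA
# descriptors (ISO/IEC 15938-15) are defined very differently.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pre-extracted descriptors indexed by clip identifier.
database = {
    "clip_a": [0.9, 0.1, 0.0],
    "clip_b": [0.1, 0.8, 0.2],
}

def instance_search(query_descriptor, db, threshold=0.9):
    """Return clips whose stored descriptor is similar enough to the query."""
    return [cid for cid, d in db.items()
            if cosine(query_descriptor, d) >= threshold]

matches = instance_search([0.85, 0.15, 0.05], database)
```

Because matching happens purely on the compact vectors, extraction (on cameras or edge devices) and querying (in a data centre) can be implemented by different vendors, which is exactly what the standardized descriptor format enables.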

Research aspects: The availability of reference software helps to conduct reproducible research (i.e., reference software is typically publicly available for free) and the Docker container even further contributes to this aspect.

DASH and CMAF

The 4th edition of DASH has already been published and is available as ISO/IEC 23009-1:2019. As with previous editions, MPEG’s goal was to make the newest edition of DASH publicly available for free, with the goal of industry-wide adoption and adaptation. During the most recent MPEG meeting, we worked towards the first amendment, which will include (i) additional CMAF support and (ii) event processing models with minor updates; these amendments are currently in draft and will be finalized at the 130th MPEG meeting in Alpbach, Austria. An overview of all DASH standards and updates is depicted in the figure below:

ISO/IEC 23009-8 or “session-based DASH operations” is the newest variation of MPEG-DASH. The goal of this part of DASH is to allow customization during certain times of a DASH session while maintaining the underlying media presentation description (MPD) for all other sessions. Thus, MPDs should be cacheable within content distribution networks (CDNs) while additional information should be customizable on a per session basis within a newly added session-based description (SBD). It is understood that the SBD should have an efficient representation to avoid file size issues and it should not duplicate information typically found in the MPD.
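The split between a shared, CDN-cacheable MPD and a small per-session SBD can be modelled very simply: the session view is the MPD overlaid with session-only values, and the cached MPD is never modified. The Python sketch below is a toy model of that idea only; the field names and the overlay mechanism are assumptions for illustration and are not taken from ISO/IEC 23009-8.

```python
# Toy model of session-based DASH operations: the MPD stays generic and
# cacheable, while a session-based description (SBD) carries only the
# per-session customisations. Field names are illustrative.

cached_mpd = {
    "minBufferTime": "PT2S",
    "baseURL": "https://cdn.example.com/asset/",
}

def apply_sbd(mpd, sbd):
    """Build a per-session view without mutating the shared, cached MPD."""
    session_view = dict(mpd)   # copy, so the cached MPD stays untouched
    session_view.update(sbd)   # overlay session-specific values
    return session_view

# The SBD holds only what differs per session (hypothetical field).
sbd = {"sessionToken": "abc123"}
view = apply_sbd(cached_mpd, sbd)
```

The design constraint mentioned in the text falls out naturally here: the SBD stays small because it duplicates nothing from the MPD, and the MPD stays cacheable because no session state is ever written into it.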

The 2nd edition of the CMAF standard (ISO/IEC 23000-19) will be available soon (currently under FDIS ballot) and MPEG is currently reviewing additional tools in the so-called ‘technologies under consideration’ document. Furthermore, amendments were drafted for additional HEVC media profiles, and exploration activities on the storage and archiving of CMAF content are ongoing.

The next meeting will bring MPEG back to Austria (for the 4th time) and will be hosted in Alpbach, Tyrol. For more information about the upcoming 130th MPEG meeting click here.

Click here for more information about MPEG meetings and their developments.

MPEG Column: 128th MPEG Meeting in Geneva, Switzerland

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The 128th MPEG meeting concluded on October 11, 2019 in Geneva, Switzerland with the following topics:

  • Low Complexity Enhancement Video Coding (LCEVC) Promoted to Committee Draft
  • 2nd Edition of Omnidirectional Media Format (OMAF) has reached the first milestone
  • Genomic Information Representation – Part 4 Reference Software and Part 5 Conformance Promoted to Draft International Standard

The corresponding press release of the 128th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/128. In this report we will focus on video coding aspects (i.e., LCEVC) and immersive media applications (i.e., OMAF). At the end, we will provide an update related to adaptive streaming (i.e., DASH and CMAF).

Low Complexity Enhancement Video Coding

Low Complexity Enhancement Video Coding (LCEVC) has been promoted to committee draft (CD) which is the first milestone in the ISO/IEC standardization process. LCEVC is part two of MPEG-5 or ISO/IEC 23094-2 if you prefer the always easy-to-remember ISO codes. We introduced MPEG-5 already in previous posts and LCEVC is about a standardized video coding solution that leverages other video codecs in a manner that improves video compression efficiency while maintaining or lowering the overall encoding and decoding complexity.

The LCEVC standard uses a lightweight video codec to add up to two layers of encoded residuals. The aim of these layers is to correct artefacts produced by the base video codec and to add detail and sharpness to the final output video.
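The layering principle above can be made concrete with a one-dimensional toy example: decode the base layer at low resolution, upsample it, then add the enhancement-layer residuals to obtain the final picture. The numbers and the nearest-neighbour upsampler below are illustrative assumptions; real LCEVC specifies its own transforms, upsampling filters and residual coding.

```python
# Toy illustration of the LCEVC layering idea: a base codec yields a
# low-resolution picture, and a residual layer corrects the upsampled
# result. All values are made up; LCEVC defines the actual tools.

def upsample_2x(row):
    """Nearest-neighbour 2x upsampling of a 1-D 'picture' (hypothetical)."""
    out = []
    for v in row:
        out.extend([v, v])
    return out

base = [10, 20]              # decoded base-layer samples (low resolution)
residual = [0, 1, -2, 3]     # decoded enhancement-layer corrections

# Final output: upsampled base plus per-sample residual correction.
reconstructed = [p + r for p, r in zip(upsample_2x(base), residual)]
```

The appeal of this structure is that the residual layer is cheap to decode in software, so it can ride on top of an existing hardware decoder for the base codec.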

The target of this standard comprises software or hardware codecs with extra processing capabilities, e.g., mobile devices, set top boxes (STBs), and personal computer based decoders. Additional benefits are the reduction in implementation complexity or a corresponding expansion in spatial resolution.

LCEVC is based on existing codecs, which allows for backwards compatibility with existing deployments. Supporting LCEVC enables “softwareized” video coding, allowing release and deployment options known from software-based solutions. These options are well understood by software companies and thus open up new opportunities for improving and optimizing video-based services and applications.

Research aspects: in video coding, research efforts are mainly related to coding efficiency and complexity (as usual). However, as MPEG-5 basically adds a software layer on top of what is typically implemented in hardware, all kinds of aspects related to software engineering could become an active area of research.

Omnidirectional Media Format

The scope of the Omnidirectional Media Format (OMAF) is about 360° video, images, audio and associated timed text and specifies (i) a coordinate system, (ii) projection and rectangular region-wise packing methods, (iii) storage of omnidirectional media and the associated metadata using ISOBMFF, (iv) encapsulation, signaling and streaming of omnidirectional media in DASH and MMT, and (v) media profiles and presentation profiles.

At this meeting, the second edition of OMAF (ISO/IEC 23090-2) has been promoted to committee draft (CD) which includes

  • support of improved overlay of graphics or textual data on top of video,
  • efficient signaling of videos structured in multiple sub-parts,
  • enabling more than one viewpoint, and
  • new profiles supporting dynamic bitstream generation according to the viewport.

As with the first edition, OMAF includes encapsulation and signaling in ISOBMFF as well as streaming of omnidirectional media (DASH and MMT). It will reach its final milestone by the end of 2020.

360° video is certainly a vital use case towards a fully immersive media experience. Devices to capture and consume such content are becoming increasingly available and will probably contribute to the dissemination of this type of content. However, it is also understood that the complexity increases significantly, specifically with respect to large-scale, scalable deployments due to increased content volume/complexity, timing constraints (latency), and quality of experience issues.

Research aspects: understanding the increased complexity of 360° video, or immersive media in general, is certainly an important aspect to be addressed towards enabling applications and services in this domain. We may even start thinking that 360° video actually works (e.g., it is possible to capture it, upload it to YouTube, and consume it on many devices), but the devil is in the details when it comes to handling this complexity efficiently, in order to enable a seamless and high quality of experience.

DASH and CMAF

The 4th edition of DASH (ISO/IEC 23009-1) will be published soon and MPEG is currently working towards a first amendment which will be about (i) CMAF support and (ii) event processing model. An overview of all DASH standards is depicted in the figure below, notably part one of MPEG-DASH referred to as media presentation description and segment formats.

MPEG-DASH-standard-status

The 2nd edition of the CMAF standard (ISO/IEC 23000-19) will become available very soon and MPEG is currently reviewing additional tools in the so-called technologies under considerations document as well as conducting various explorations. A working draft for additional media profiles is also under preparation.

Research aspects: with CMAF, low-latency support is added to DASH-like applications and services. However, the implementation specifics are actually not defined in the standard and are subject to competition (e.g., here). Interestingly, the Bitmovin video developer reports from both 2018 and 2019 highlight the need for low-latency solutions in this domain.

At the ACM Multimedia Conference 2019 in Nice, France I gave a tutorial entitled “A Journey towards Fully Immersive Media Access” which includes updates related to DASH and CMAF. The slides are available here.

Outlook 2020

Finally, let me try giving an outlook for 2020, not so much content-wise but events planned for 2020 that are highly relevant for this column:

  • MPEG129, Jan 13-17, 2020, Brussels, Belgium
  • DCC 2020, Mar 24-27, 2020, Snowbird, UT, USA
  • MPEG130, Apr 20-24, 2020, Alpbach, Austria
  • NAB 2020, Apr 18-22, 2020, Las Vegas, NV, USA
  • ICASSP 2020, May 4-8, 2020, Barcelona, Spain
  • QoMEX 2020, May 26-28, 2020, Athlone, Ireland
  • MMSys 2020, Jun 8-11, 2020, Istanbul, Turkey
  • IMX 2020, June 17-19, 2020, Barcelona, Spain
  • MPEG131, Jun 29 – Jul 3, 2020, Geneva, Switzerland
  • NetSoft,QoE Mgmt Workshop, Jun 29 – Jul 3, 2020, Ghent, Belgium
  • ICME 2020, Jul 6-10, London, UK
  • ATHENA summer school, Jul 13-17, Klagenfurt, Austria
  • … and many more!

JPEG Column: 85th JPEG Meeting in San Jose, California, U.S.A.

The 85th JPEG meeting was held in San Jose, CA, USA.

The meeting was distinguished by the Prime Time Engineering Emmy Award from the Academy of Television Arts & Sciences (ATAS) for the longevity of the first JPEG standard. Furthermore, a very successful workshop on JPEG emerging technologies was held at Microsoft premises in Silicon Valley, with broad participation from several companies working in imaging technologies. The workshop ended with the celebration of two JPEG committee experts, Thomas Richter and Ogawa Shigetaka, recognized with ISO outstanding contribution awards for the key roles they played in the development of the JPEG XT standard.

The 85th JPEG meeting continued laying the groundwork for the continuous development of JPEG standards and exploration studies. In particular, progress was made on the new image coding standard JPEG XL and on the low-latency, low-complexity standard JPEG XS, the JPEG Systems interoperable 360 image standard was released, and exploration studies continued on image compression using machine learning and on the use of blockchain and distributed ledger technologies for media applications.

The 85th JPEG meeting had the following highlights:

  • Prime Time Engineering Emmy award,
  • JPEG Emerging Technologies Workshop,
  • JPEG XL progresses towards a final specification,
  • JPEG AI evaluates machine learning based coding solutions,
  • JPEG exploration on Media Blockchain,
  • JPEG Systems interoperable 360 image standards released,
  • JPEG XS announces significant improvements of Bayer image sensor data compression.

JPEG Emerging Technologies Workshop.

Prime Time Engineering Emmy

The JPEG committee is honored to be the recipient of a prestigious Prime Time Engineering Emmy Award, presented by the US Academy of Television Arts & Sciences at the 71st Engineering Emmy Awards ceremony on the 23rd of October 2019 in Los Angeles, CA, USA. The first JPEG standard is known as a popular format in digital photography, used by hundreds of millions of users everywhere, in a wide range of applications including the world wide web, social media, photographic apparatus and smart cameras. The first part of the standard was published in 1992, and the standard has since grown to seven parts, with the latest, defining the reference software, published in 2019. This is a unique example of longevity in the fast-moving information technologies, and the Emmy award acknowledges this longevity and continuing influence over nearly three decades.

This is a well-deserved recognition not only for the Joint Photographic Experts Group committee members who started this standard under the auspices of ITU, ISO and IEC, but also for all experts in the JPEG committee who have continued to extend and maintain it, hence guaranteeing its longevity.

JPEG convenor Touradj Ebrahimi during the Emmy acceptance speech.

According to Prof. Touradj Ebrahimi, Convenor of JPEG standardization committee, the longevity of JPEG is based on three very important factors: “The credibility by being developed under the auspices of three important standardization bodies, namely ITU, ISO and IEC, development by explicitly taking into account end users, and the choice of being royalty free”. Furthermore,  “JPEG defined not only a great technology but also it was a committee that first defined how standardization should take place in order to become successful”.

JPEG Emerging Technologies Workshop

At the 85th JPEG meeting in San Jose, CA, USA, JPEG organized the “JPEG Emerging Technologies Workshop” on the 5th of November 2019 to inform industry and academia active in the wider field of multimedia and in particular in imaging, about current JPEG Committee standardization activities and exploration studies. Leading JPEG experts shared highlights about some of the emerging JPEG technologies that could shape the future of imaging and multimedia, with the following program:

  • Welcome and Introduction (Touradj Ebrahimi);
  • JPEG XS – Lightweight compression; Transparent quality. (Antonin Descampe);
  • JPEG Pleno (Peter Schelkens);
  • JPEG XL – Next-generation Image Compression (Jan Wassenberg and Jon Sneyers);
  • High-Throughput JPEG 2000 – Big improvement to JPEG 2000 (Pierre-Anthony Lemieux);
  • JPEG Systems – The framework for future and legacy standards (Andy Kuzma);
  • JPEG Privacy and Security and Exploration on Media Blockchain Standardization Needs (Frederik Temmermans);
  • JPEG AI: Learning to Compress (João Ascenso)

This very successful workshop ended with a panel moderated by Fernando Pereira where different relevant media technology issues were discussed with a vibrant participation of the attendees.

Proceedings of the JPEG Emerging Technologies Workshop are available for download via the following link: https://jpeg.org/items/20191108_jpeg_emerging_technologies_workshop_proceedings.html

JPEG XL

The JPEG XL Image Coding System (ISO/IEC 18181) continues its progression towards a final specification. The Committee Draft of JPEG XL is being refined based on feedback received from experts from ISO/IEC national bodies. Experiments indicate the main two JPEG XL modes compare favorably with specialized responsive and lossless modes, enabling a simpler specification.

The JPEG committee has approved open-sourcing the JPEG XL software. JPEG XL will advance to the Draft International Standard stage in January 2020.

JPEG AI

JPEG AI carried out rigorous subjective and objective evaluations of a number of promising state-of-the-art learning-based image coding solutions, which show the potential of these codecs at different rate-quality tradeoffs in comparison to widely used anchors. Moreover, a wide set of objective metrics was evaluated for several types of image coding solutions.

JPEG exploration on Media Blockchain

Fake news, copyright violations, media forensics, privacy and security are emerging challenges in digital media. JPEG has determined that blockchain and distributed ledger technologies (DLT) have great potential as a technology component to address these challenges in transparent and trustable media transactions. However, blockchain and DLT need to be integrated closely with a widely adopted standard to ensure broad interoperability of protected images. Therefore, the JPEG committee has organized several workshops to engage with the industry and help to identify use cases and requirements that will drive the standardization process. During the San Jose meeting, the committee drafted a first version of the use cases and requirements document. On the 21st of January 2020, during its 86th JPEG Meeting to be held in Sydney, Australia, JPEG plans to organize an interactive discussion session with stakeholders. Practical and registration information is available on the JPEG website. To keep informed and to get involved in this activity, interested parties are invited to register to the ad hoc group’s mailing list. (http://jpeg-blockchain-list.jpeg.org).

JPEG Systems interoperable 360 image standards released.

The ISO/IEC 19566-5 JUMBF and ISO/IEC 19566-6 JPEG 360 were published in July 2019.  These two standards work together to define basics for interoperability and lay the groundwork for future capabilities for richer interactions with still images as we add functionality to JUMBF (Part 5), Privacy & Security (Part 4), JPEG 360 (Part 6), and JLINK (Part 7). 

JPEG XS announces significant improvements of Bayer image sensor data compression.

JPEG XS aims at the standardisation of a visually lossless, low-latency and lightweight compression scheme that can be used as a mezzanine codec in various markets. Work was done at the last meeting to enable the use of JPEG XS for Bayer image sensor compression. Among the targeted use cases for Bayer image sensor compression, one can cite video transport over professional video links, real-time video storage in and outside of cameras, and data compression onboard autonomous cars. The JPEG Committee also announces the final publication of JPEG XS Part 3 “Transport and Container Formats” as an International Standard. This part enables the storage of JPEG XS images in various formats. In addition, an effort to specify an RTP payload format for JPEG XS is in its final stage; this will enable the transport of JPEG XS within the SMPTE ST 2110 framework.

“The 2019 Prime Time Engineering Award by the Academy is a well-deserved recognition for the Joint Photographic Experts Group members who initiated standardization of the first JPEG standard and to all experts of the JPEG committee who since then have extended and maintained it, guaranteeing its longevity. JPEG defined not only a great technology but also it was the first committee that defined how standardization should take place in order to become successful” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JPEG, JPEG 2000, JPEG XR, JPSearch, JPEG XT and more recently, the JPEG XS, JPEG Systems, JPEG Pleno and JPEG XL families of imaging standards.

The JPEG Committee nominally meets four times a year, in different world locations. The 85th JPEG Meeting was held on 2-8 November 2019, in San Jose, California, U.S.A. The next 86th JPEG Meeting will be held on 18-24 January 2020, in Sydney, Australia.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

Future JPEG meetings are planned as follows:

  • No 86, Sydney, Australia, January 18 to 24, 2020
  • No 87, Erlangen, Germany, April 25 to 30, 2020

JPEG Column: 84th JPEG Meeting in Brussels, Belgium

The 84th JPEG meeting was held in Brussels, Belgium.

This meeting was characterised by significant progress in most JPEG projects as well as in exploratory studies. JPEG XL, the new image coding system, has issued the Committee Draft, giving shape to this new effective solution for the future of image coding. In JPEG Pleno, the standard for new imaging technologies, Part 1 (Framework) and Part 2 (Light field coding) have also reached Draft International Standard status.

Moreover, exploration studies are ongoing in the domain of media blockchain and on the application of learning solutions to image coding (JPEG AI). Both have triggered a number of activities providing new knowledge and opening new possibilities for the use of these technologies in future JPEG standards.

The 84th JPEG meeting had the following highlights:

  • JPEG XL issues the Committee Draft
  • JPEG Pleno Parts 1 and 2 reach Draft International Standard status
  • JPEG AI defines Common Test Conditions
  • JPEG exploration studies on Media Blockchain
  • JPEG Systems – JLINK working draft
  • JPEG XS

In the following, a short description of the most significant activities is presented.

 

JPEG XL

The JPEG XL Image Coding System (ISO/IEC 18181) has completed the Committee Draft of the standard. The new coding technique allows storage of high-quality images at one-third the size of the legacy JPEG format. Moreover, JPEG XL can losslessly transcode existing JPEG images to about 80% of their original size, simplifying interoperability and accelerating wider deployment.

The JPEG XL reference software, ready for mobile and desktop deployments, will be available in Q4 2019. The current contributors have committed to releasing it publicly under a royalty-free and open source license.

 

JPEG Pleno

A significant milestone was reached during this meeting: the Draft International Standards (DIS) for both JPEG Pleno Part 1 (Framework) and Part 2 (Light field coding) have been completed. A draft architecture of the Reference Software (Part 4) and development plans have also been discussed and defined.

In addition, JPEG has completed an in-depth analysis of existing point cloud coding solutions, and a new version of the use cases and requirements document has been released, reflecting the future role of JPEG Pleno in point cloud compression. A new set of Common Test Conditions has been released as a guideline for the testing and evaluation of point cloud coding solutions, with both a best-practice subjective testing protocol and a set of objective metrics.

JPEG Pleno holography activities made significant advances on the definition of use cases and requirements and the description of Common Test Conditions. New quality assessment methodologies for holographic data, defined in the framework of a collaboration between JPEG and Qualinet, were established. Moreover, JPEG Pleno continues collecting microscopic and tomographic holographic data.

 

JPEG AI

The JPEG Committee continues to carry out exploration studies with deep learning-based image compression solutions, typically with an auto-encoder architecture. The promise that these types of codecs hold, especially in terms of coding efficiency, will be evaluated in several studies. In this meeting, a Common Test Conditions document was produced, which includes a plan for subjective and objective quality assessment experiments as well as coding pipelines for anchor and learning-based codecs. Moreover, a JPEG AI dataset was proposed and discussed, and a double-stimulus impairment scale experiment (side-by-side) was performed with a mix of experts and non-experts in a controlled environment.

 

JPEG exploration on Media Blockchain

Fake news, copyright violation, media forensics, privacy and security are emerging challenges in digital media. JPEG has determined that blockchain and distributed ledger technologies (DLT) have great potential as a technology component to address these challenges in transparent and trustable media transactions. However, blockchain and DLT need to be integrated closely with a widely adopted standard to ensure broad interoperability of protected images. JPEG calls for industry participation to help define use cases and requirements that will drive the standardization process. In order to clearly identify the impact of blockchain and distributed ledger technologies on JPEG standards, the committee has organised several workshops to interact with stakeholders in the domain.

The 4th public workshop on media blockchain was organized in Brussels on Tuesday the 16th of July 2019 during the 84th ISO/IEC JTC 1/SC 29/WG1 (JPEG) Meeting. The presentations and program of the workshop are available on jpeg.org.

The JPEG Committee has issued an updated version of the white paper entitled “Towards a Standardized Framework for Media Blockchain” that elaborates on the initiative, exploring relevant standardization activities, industrial needs and use cases.

To keep informed and to get involved in this activity, interested parties are invited to register to the ad hoc group’s mailing list.

 

JPEG Systems – JLINK

At the 84th meeting, IS text reviews for ISO/IEC 19566-5 JUMBF and ISO/IEC 19566-6 JPEG 360 were completed; IS publication will be forthcoming. Work began on adding functionality to JUMBF, Privacy & Security, and JPEG 360, along with initial planning toward developing software implementations of these parts of the JPEG Systems specification. Work also began on the new ISO/IEC 19566-7 Linked media images (JLINK), with the development of a working draft.

 

JPEG XS

The JPEG Committee is pleased to announce new Core Experiments and Exploration Studies on compression of raw image sensor data. The JPEG XS project aims at the standardization of a visually lossless, low-latency and lightweight compression scheme that can be used as a mezzanine codec in various markets. Video transport over professional video links (SDI, IP, Ethernet), real-time video storage in and outside of cameras, memory buffers, machine vision systems, and data compression on board autonomous vehicles are among the targeted use cases for raw image sensor compression. This new work on raw sensor data will pave the way towards highly efficient close-to-sensor image compression workflows with JPEG XS.

 

Final Quote

“Completion of the Committee Draft of JPEG XL, the new standard for image coding, is an important milestone. It is hoped that JPEG XL can become an excellent replacement of the widely used JPEG format which has been in service for more than 25 years.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JPEG, JPEG 2000, JPEG XR, JPSearch, JPEG XT and more recently, the JPEG XS, JPEG Systems, JPEG Pleno and JPEG XL families of imaging standards.

More information about JPEG and its work is available at www.jpeg.org.

Future JPEG meetings are planned as follows:

  • No 85, San Jose, California, U.S.A., November 2 to 8, 2019
  • No 86, Sydney, Australia, January 18 to 24, 2020

MPEG Column: 127th MPEG Meeting in Gothenburg, Sweden

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

Plenary of the 127th MPEG Meeting in Gothenburg, Sweden.

The 127th MPEG meeting concluded on July 12, 2019 in Gothenburg, Sweden with the following topics:

  • Versatile Video Coding (VVC) enters formal approval stage, experts predict 35-60% improvement over HEVC
  • Essential Video Coding (EVC) promoted to Committee Draft
  • Common Media Application Format (CMAF) 2nd edition promoted to Final Draft International Standard
  • Dynamic Adaptive Streaming over HTTP (DASH) 4th edition promoted to Final Draft International Standard
  • Carriage of Point Cloud Data Progresses to Committee Draft
  • JPEG XS carriage in MPEG-2 TS promoted to Final Draft Amendment of ISO/IEC 13818-1 7th edition
  • Genomic information representation – WG11 issues a joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5
  • ISO/IEC 23005 (MPEG-V) 4th Edition – WG11 promotes the Fourth edition of two parts of “Media Context and Control” to the Final Draft International Standard (FDIS) stage

The corresponding press release of the 127th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/127

Versatile Video Coding (VVC)

The Moving Picture Experts Group (MPEG) is pleased to announce that Versatile Video Coding (VVC) progresses to Committee Draft, experts predict 35-60% improvement over HEVC.

The development of the next major generation of video coding standard has achieved excellent progress, such that MPEG has approved the Committee Draft (CD, i.e., the text for formal balloting in the ISO/IEC approval process).

The new VVC standard will be applicable to a very broad range of applications and will also provide additional functionalities. VVC is expected to deliver a substantial improvement in coding efficiency relative to existing standards, e.g., in the range of 35-60% bit rate reduction relative to HEVC, although this has not yet been formally measured. This comparison assumes equivalent subjective video quality at picture resolutions such as 1080p HD, 4K or 8K UHD, for either standard dynamic range video or high dynamic range and wide color gamut content, at levels of quality appropriate for use in consumer distribution services. The focus during the development of the standard has primarily been on 10-bit 4:2:0 content; the 4:4:4 chroma format will also be supported.
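To make the predicted range concrete, a quick back-of-the-envelope calculation shows what a 35-60% bit rate reduction would mean for a sample stream (the 15 Mbit/s HEVC figure is purely illustrative, not from the source):

```python
def vvc_bitrate(hevc_bitrate_mbps: float, reduction: float) -> float:
    """Bit rate needed by VVC for equivalent subjective quality,
    given a fractional bit rate reduction relative to HEVC."""
    return hevc_bitrate_mbps * (1.0 - reduction)

# A hypothetical 4K HEVC stream at 15 Mbit/s, under the predicted range:
for r in (0.35, 0.60):
    print(f"{r:.0%} reduction -> {vvc_bitrate(15, r):.2f} Mbit/s")
# 35% reduction -> 9.75 Mbit/s
# 60% reduction -> 6.00 Mbit/s
```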

The VVC standard is being developed in the Joint Video Experts Team (JVET), a group established jointly by MPEG and the Video Coding Experts Group (VCEG) of ITU-T Study Group 16. In addition to a text specification, the project also includes the development of reference software, a conformance testing suite, and a new standard ISO/IEC 23002-7 specifying supplemental enhancement information messages for coded video bitstreams. The approval process for ISO/IEC 23002-7 has also begun, with the issuance of a CD consideration ballot.

Research aspects: VVC represents the next-generation video codec to be deployed in 2020 and beyond, and basically the same research aspects apply as for previous generations, i.e., coding efficiency, performance/complexity, and objective/subjective evaluation. Luckily, JVET documents are freely available, including the actual standard (committee draft), software (and its description), and common test conditions. Thus, researchers utilizing these resources are able to conduct reproducible research when contributing their findings and code improvements back to the community at large.

Essential Video Coding (EVC)

MPEG-5 Essential Video Coding (EVC) promoted to Committee Draft

Interestingly, at the same meeting as VVC, MPEG promoted MPEG-5 Essential Video Coding (EVC) to Committee Draft (CD). The goal of MPEG-5 EVC is to provide a standardized video coding solution to address business needs in some use cases, such as video streaming, where existing ISO video coding standards have not been as widely adopted as might be expected from their purely technical characteristics.

The MPEG-5 EVC standard includes a baseline profile that contains only technologies that are over 20 years old or are otherwise expected to be royalty-free. Additionally, a main profile adds a small number of additional tools, each providing a significant performance gain. All main profile tools can be individually switched off or individually switched over to a corresponding baseline tool. Organizations making proposals for the main profile have agreed to publish applicable licensing terms within two years of the FDIS stage, either individually or as part of a patent pool.

Research aspects: Similar research aspects apply to EVC, and from a software engineering perspective it could also be interesting to further investigate this switching mechanism of individual tools and/or the fallback option to baseline tools. Naturally, a comparison with next-generation codecs such as VVC is interesting per se. The licensing aspects themselves are probably interesting for other disciplines, but that is another story…

Common Media Application Format (CMAF)

MPEG ratified the 2nd edition of the Common Media Application Format (CMAF)

The Common Media Application Format (CMAF) enables efficient encoding, storage, and delivery of digital media content (including audio, video, and subtitles, among others), which is key to scaling operations to support the rapid growth of video streaming over the internet. The CMAF standard is the result of widespread industry adoption of an application of MPEG technologies for adaptive video streaming over the Internet, and of widespread industry participation in the MPEG process to standardize best practices within CMAF.

The 2nd edition of CMAF adds support for a number of specifications that were a result of significant industry interest. Those include

  • Advanced Audio Coding (AAC) multi-channel;
  • MPEG-H 3D Audio;
  • MPEG-D Unified Speech and Audio Coding (USAC);
  • Scalable High Efficiency Video Coding (SHVC);
  • IMSC 1.1 (Timed Text Markup Language Profiles for Internet Media Subtitles and Captions); and
  • additional HEVC video CMAF profiles and brands.

This edition also introduces CMAF supplemental data handling as well as new structural brands for CMAF, reflecting the common practice established by the significant deployment of CMAF in industry. Companies adopting CMAF technology will find the specifications introduced in the 2nd edition particularly useful for further adoption and proliferation of CMAF in the market.

Research aspects: see below (DASH).

Dynamic Adaptive Streaming over HTTP (DASH)

MPEG approves the 4th edition of Dynamic Adaptive Streaming over HTTP (DASH)

The 4th edition of MPEG-DASH comprises the following features:

  • a service description that conveys how the service provider intends the service to be consumed;
  • a method to indicate the times corresponding to the production of associated media;
  • a mechanism to signal DASH profiles and features, employed codec and format profiles; and
  • supported protection schemes present in the Media Presentation Description (MPD).

It is expected that this edition will be published later this year. 
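As an illustration of the service description feature, a fragment of an MPD might advise players on the intended live latency. The element and attribute names below follow common low-latency deployments of the 4th edition, but the values and the exact attribute set shown are illustrative, not normative:

```xml
<!-- Illustrative only: advise players to target a 3.5 s live latency,
     staying within 3-4 s, with mild playback-rate adjustment allowed. -->
<ServiceDescription id="0">
  <Latency target="3500" min="3000" max="4000" referenceId="0"/>
  <PlaybackRate min="0.96" max="1.04"/>
</ServiceDescription>
```

A player honoring this hint would speed up or slow down playback slightly (within the stated rate bounds) to hold its live edge near the target latency.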

Research aspects: The CMAF 2nd and DASH 4th editions come with a rich feature set enabling a plethora of use cases. The underlying principles are still the same, and research issues arise from updated application and service requirements with respect to content complexity, time aspects (mainly delay/latency), and quality of experience (QoE). The DASH-IF presents an Excellence in DASH Award at the ACM Multimedia Systems conference, and an overview of its academic efforts can be found here.

Carriage of Point Cloud Data

MPEG progresses the Carriage of Point Cloud Data to Committee Draft

At its 127th meeting, MPEG promoted the carriage of point cloud data to the Committee Draft stage, the first milestone of the ISO standard development process. This standard is the first to introduce support for volumetric media in the widely used ISO base media file format family of standards.

This standard supports the carriage of point cloud data comprising individually encoded video bitstreams within multiple file format tracks in order to support the intrinsic nature of the video-based point cloud compression (V-PCC). Additionally, it also allows the carriage of point cloud data in one file format track for applications requiring multiplexed content (i.e., the video bitstream of multiple components is interleaved into one bitstream).

This standard is expected to support efficient access to, and delivery of, portions of a point cloud object, considering that in many cases the entire point cloud object may not be visible to the user, depending on the viewing direction or the location of the point cloud object relative to other objects. It is currently expected that the standard will reach its final milestone by the end of 2020.

Research aspects: MPEG’s Point Cloud Compression (PCC) comes in two flavors, video-based and geometry-based, but both still require packaging into file and delivery formats. MPEG’s choice here is the ISO base media file format, and the efficient carriage of point cloud data is characterized by both functionality (i.e., enabling the required use cases) and performance (such as low overhead).

MPEG 2 Systems/Transport Stream

JPEG XS carriage in MPEG-2 TS promoted to Final Draft Amendment of ISO/IEC 13818-1 7th edition

At its 127th meeting, WG11 (MPEG) has extended ISO/IEC 13818-1 (MPEG-2 Systems) – in collaboration with WG1 (JPEG) – to support ISO/IEC 21122 (JPEG XS) in order to support industries using still image compression technologies for broadcasting infrastructures. The specification defines a JPEG XS elementary stream header and specifies how the JPEG XS video access unit (specified in ISO/IEC 21122-1) is put into a Packetized Elementary Stream (PES). Additionally, the specification also defines how the System Target Decoder (STD) model can be extended to support JPEG XS video elementary streams.

Genomic information representation

WG11 issues a joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5

The introduction of high-throughput DNA sequencing has led to the generation of large quantities of genomic sequencing data that have to be stored, transferred and analyzed. So far, WG 11 (MPEG) and ISO TC 276/WG 5 have addressed the representation, compression and transport of genome sequencing data by developing the ISO/IEC 23092 standard series, also known as MPEG-G, which provides a file and transport format, compression technology, metadata specifications, protection support, and standard APIs for access to sequencing data in the native compressed format.

An important element in the effective usage of sequencing data is the association of the data with the results of the analysis and the annotations generated by processing pipelines and analysts. At the moment, such association happens as a separate step; standard and effective ways of linking data and meta-information derived from sequencing data are not available.

At its 127th meeting, MPEG and ISO TC 276/WG 5 issued a joint Call for Proposals (CfP) addressing this problem. The call seeks submissions of technologies that can provide efficient representation and compression solutions for the processing of genomic annotation data.

Companies and organizations are invited to submit proposals in response to this call. Responses are expected to be submitted by the 8th January 2020 and will be evaluated during the 129th WG 11 (MPEG) meeting. Detailed information, including how to respond to the call for proposals, the requirements that have to be considered, and the test data to be used, is reported in the documents N18648, N18647, and N18649 available at the 127th meeting website (http://mpeg.chiariglione.org/meetings/127). For any further question about the call, test conditions, required software or test sequences please contact: Joern Ostermann, MPEG Requirements Group Chair (ostermann@tnt.uni-hannover.de) or Martin Golebiewski, Convenor ISO TC 276/WG 5 (martin.golebiewski@h-its.org).

ISO/IEC 23005 (MPEG-V) 4th Edition

WG11 promotes the Fourth edition of two parts of “Media Context and Control” to the Final Draft International Standard (FDIS) stage

At its 127th meeting, WG11 (MPEG) promoted the 4th edition of two parts of the ISO/IEC 23005 (MPEG-V; Media Context and Control) standards to Final Draft International Standard (FDIS). The new edition of ISO/IEC 23005-1 (architecture) enables ten new use cases, which can be grouped into four categories: 3D printing, olfactory information in virtual worlds, virtual panoramic vision in cars, and adaptive sound handling. The new edition of ISO/IEC 23005-7 (conformance and reference software) is updated to reflect the changes made by the introduction of new tools defined in other parts of ISO/IEC 23005. More information on MPEG-V and its parts 1-7 can be found at https://mpeg.chiariglione.org/standards/mpeg-v.


Finally, we certainly found the unofficial highlight of the 127th MPEG meeting while scanning the scene in Gothenburg on Tuesday night…


JPEG Column: 83rd JPEG Meeting in Geneva, Switzerland

The 83rd JPEG meeting was held in Geneva, Switzerland.

The meeting was very dense due to the multiple activities taking place. Beyond the multiple standardization activities, such as the new JPEG XL, JPEG Pleno, JPEG XS, HTJ2K and JPEG Systems, the 83rd JPEG meeting featured the report and discussion of a new exploration study on the use of learning-based methods applied to image coding, as well as two successful workshops: one on digital holography applications and systems, and the third in a series on media blockchain technology.

The new exploration study on the use of learning-based methods applied to image coding was initiated at the previous 82nd JPEG meeting in Lisbon, Portugal. The initial approach provided very promising results and might establish a new alternative for future image representations.

The workshop on digital holography applications and systems revealed the state of the art in industry applications and current technical solutions. It covered applications such as holographic microscopy, tomography, printing and display. Moreover, insights were provided on state-of-the-art holographic coding technologies and quality assessment procedures. The workshop allowed a very fruitful exchange of ideas between the different invited parties and JPEG experts.

The 3rd workshop in a series organized around media blockchain technology featured several talks where academia and industry shared their views on this emerging technology. The workshop ended with a panel where multiple questions were further elaborated by the different panelists, providing the ground for a better understanding of the possible role of blockchain in media technology in the near future.

Two new logos, for JPEG Pleno and JPEG XL, were approved and released during the Geneva meeting.


The two new logos, for JPEG Pleno and JPEG XL.

The 83rd JPEG meeting had the following highlights:

  • New explorations studies of JPEG AI
  • The new Image Coding System JPEG XL
  • JPEG Pleno
  • JPEG XS
  • HTJ2K
  • JPEG Media Blockchain Technology
  • JPEG Systems – Privacy, Security & IPR, JPSearch and JPEG in HEIF

In the following, a short summary of the most relevant achievements of the 83rd meeting in Geneva, Switzerland, is presented.

 

JPEG AI

The JPEG Committee is pleased to announce that it has started exploration studies on the use of learning-based solutions for its standards.

In the last few years, several efficient learning-based image coding solutions have been proposed, mainly with improved neural network models. These advances exploit the availability of large image datasets and specialized hardware, such as highly parallelizable graphics processing units (GPUs). Recognizing that this area has received many contributions recently and is considered critical for the future of a rich multimedia ecosystem, JPEG has created the JPEG AI AhG group to study promising learning-based image codecs with a precise and well-defined quality evaluation methodology.

In this meeting, a taxonomy was proposed and available solutions from the literature were organized along different dimensions. In addition, a list of promising learning-based image compression implementations and potential datasets for future use was gathered.

JPEG XL

The JPEG Committee continues to develop the JPEG XL Image Coding System, a standard for image coding that offers substantially better compression efficiency than relevant alternative image formats, along with features desirable for web distribution and efficient compression of high quality images.

Software for the JPEG XL verification model has been implemented. A series of experiments showed promising results for lossy, lossless and progressive coding. In particular, photos can be stored with significant savings in size compared to equivalent-quality JPEG files. Additionally, existing JPEG files can also be considerably reduced in size (for faster download) while retaining the ability to later reproduce the exact JPEG file. Moreover, lossless storage of images is possible with major savings in size compared to PNG. Further refinements to the software and experiments (including enhancement of existing JPEG files, and animations) will follow.

JPEG Pleno

The JPEG Committee has three activities in JPEG Pleno: light field, point cloud, and holographic image coding. A generic box-based syntax has been defined that allows signaling of these modalities, either independently or in combination, composing a plenoptic scene represented by different modalities. The JPEG Pleno system also includes a reference grid system that supports the positioning of the respective modalities. The generic file format and reference grid system are defined in Part 1 of the standard, which is currently under development. Part 2 of the standard covers light field coding and supports two encoding mechanisms. The launch of specifications for point cloud and holographic content is under study by the JPEG Committee.

JPEG XS

The JPEG committee is pleased to announce the creation of an Amendment to the JPEG XS Core Coding System defining the use of the codec for raw image sensor data. The JPEG XS project aims at the standardization of a visually lossless, low-latency and lightweight compression scheme that can be used as a mezzanine codec in various markets. Among the targeted use cases for raw image sensor compression, one can cite video transport over professional video links (SDI, IP, Ethernet), real-time video storage in and outside of cameras, memory buffers, machine vision systems, and data compression on board autonomous cars. One of the most important benefits of the JPEG XS codec is an end-to-end latency ranging from less than one line to a few lines of the image.
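To put "a few lines of latency" in perspective, the duration of one video line can be approximated from the frame rate and line count. The sketch below ignores blanking intervals for simplicity, and the formats and the 8-line figure are illustrative choices, not from the source:

```python
def line_time_us(active_lines: int, fps: float) -> float:
    """Approximate duration of one video line in microseconds,
    ignoring blanking intervals for simplicity."""
    return 1e6 / (fps * active_lines)

# What a few lines of end-to-end latency means for common formats:
for name, lines, fps in [("1080p60", 1080, 60), ("2160p60", 2160, 60)]:
    t = line_time_us(lines, fps)
    print(f"{name}: one line = {t:.1f} us; 8 lines = {8 * t / 1000:.3f} ms")
# 1080p60: one line = 15.4 us; 8 lines = 0.123 ms
# 2160p60: one line = 7.7 us; 8 lines = 0.062 ms
```

Even a latency budget of several lines therefore stays well under a millisecond, which is what makes JPEG XS attractive for live production and close-to-sensor links.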

HTJ2K

The JPEG committee is pleased to announce a significant milestone, with ISO/IEC 15444-15 High-Throughput JPEG 2000 (HTJ2K) submitted to ISO for immediate publication as International Standard. HTJ2K opens the door to higher encoding and decoding throughput for applications where JPEG 2000 is used today.

The HTJ2K algorithm has demonstrated an average tenfold increase in encoding and decoding throughput compared to the algorithm currently defined by JPEG 2000 Part 1. This increase in throughput results in an average coding efficiency loss of 10% or less in comparison to the most efficient modes of the block coding algorithm in JPEG 2000 Part 1 and enables mathematically lossless transcoding to and from JPEG 2000 Part 1 codestreams.
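A rough illustration of that throughput/efficiency trade-off, where all the input numbers (encode time, file size) are hypothetical and only the tenfold speedup and 10% size penalty come from the text:

```python
def htj2k_tradeoff(encode_s: float, size_mb: float,
                   speedup: float = 10.0, eff_loss: float = 0.10):
    """Illustrative trade-off of HTJ2K vs. the JPEG 2000 Part 1 block
    coder: roughly `speedup`x faster coding at the cost of up to
    `eff_loss` larger output for the most efficient Part 1 modes."""
    return encode_s / speedup, size_mb * (1.0 + eff_loss)

# A hypothetical image that takes 2.0 s and 40 MB with Part 1 coding:
t, s = htj2k_tradeoff(2.0, 40.0)
print(f"~{t:.1f} s to encode, up to {s:.1f} MB")  # ~0.2 s to encode, up to 44.0 MB
```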

JPEG Media Blockchain Technology

In order to clearly identify the impact of blockchain and distributed ledger technologies on JPEG standards, the committee has organized several workshops to interact with stakeholders in the domain. The programs and proceedings of these workshops are accessible on the JPEG website:

  1. 1st JPEG Workshop on Media Blockchain Proceedings, ISO/IEC JTC1/SC29/WG1, Vancouver, Canada, October 16th, 2018
  2. 2nd JPEG Workshop on Media Blockchain Proceedings, ISO/IEC JTC1/SC29/WG1, Lisbon, Portugal, January 22nd, 2019
  3. 3rd JPEG Workshop on Media Blockchain Proceedings, ISO/IEC JTC1/SC29/WG1, Geneva, Switzerland, March 20th, 2019

A 4th workshop is planned during the 84th JPEG meeting, to be held in Brussels, Belgium, on July 16th, 2019. The JPEG Committee invites experts to participate in this upcoming workshop.

JPEG Systems – Privacy, Security & IPR, JPSearch, and JPEG-in-HEIF.

At the 83rd meeting, JPEG Systems made significant progress towards improving users’ privacy with the completion of the DIS text of ISO/IEC 19566-4 “Privacy, Security, and IPR Features”, which will be released for ballot. JPEG Systems continued to progress on image search and retrieval with the FDIS text release of JPSearch ISO/IEC 24800 Part 2, 2nd edition. Finally, support for JPEG 2000, JPEG XR, and JPEG XS images encapsulated in ISO/IEC 15444-12 is progressing towards IS stage; this enables these JPEG images to be encapsulated in ISO base media file formats, such as the ISO/IEC 23008-12 High Efficiency Image File Format (HEIF).

Final Quote

“Intelligent codecs might redesign the future of media compression. JPEG can accelerate this trend by producing the first AI based image coding standard.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JPEG, JPEG 2000, JPEG XR, JPSearch, JPEG XT and more recently, the JPEG XS, JPEG Systems, JPEG Pleno and JPEG XL families of imaging standards.

The JPEG Committee nominally meets four times a year, in different world locations. The 82nd JPEG Meeting was held on 19-25 January 2019, in Lisbon, Portugal. The next 84th JPEG Meeting will be held on 13-19 July 2019, in Brussels, Belgium.

More information about JPEG and its work is available at jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list.

Future JPEG meetings are planned as follows:

  • No 84, Brussels, Belgium, July 13 to 19, 2019
  • No 85, San Jose, California, U.S.A., November 2 to 8, 2019
  • No 86, Sydney, Australia, January 18 to 24, 2020

MPEG Column: 126th MPEG Meeting in Geneva, Switzerland

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The 126th MPEG meeting concluded on March 29, 2019 in Geneva, Switzerland with the following topics:

  • Three Degrees of Freedom Plus (3DoF+) – MPEG evaluates responses to the Call for Proposal and starts a new project on Metadata for Immersive Video
  • Neural Network Compression for Multimedia Applications – MPEG evaluates responses to the Call for Proposal and kicks off its technical work
  • Low Complexity Enhancement Video Coding – MPEG evaluates responses to the Call for Proposal and selects a Test Model for further development
  • Point Cloud Compression – MPEG promotes its Geometry-based Point Cloud Compression (G-PCC) technology to the Committee Draft (CD) stage
  • MPEG Media Transport (MMT) – MPEG approves 3rd Edition of Final Draft International Standard
  • MPEG-G – MPEG-G standards reach Draft International Standard for Application Program Interfaces (APIs) and Metadata technologies

The corresponding press release of the 126th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/126

Three Degrees of Freedom Plus (3DoF+)

MPEG evaluates responses to the Call for Proposal and starts a new project on Metadata for Immersive Video

MPEG’s support for 360-degree video — also referred to as omnidirectional video — is achieved using the Omnidirectional Media Format (OMAF) and Supplemental Enhancement Information (SEI) messages for High Efficiency Video Coding (HEVC). It basically enables the utilization of the tiling feature of HEVC to implement 3DoF applications and services, e.g., users consuming 360-degree content using a head mounted display (HMD). However, rendering flat 360-degree video may generate visual discomfort when objects close to the viewer are rendered. The interactive parallax feature of Three Degrees of Freedom Plus (3DoF+) will provide viewers with visual content that more closely mimics natural vision, but within a limited range of viewer motion.

At its 126th meeting, MPEG received five responses to the Call for Proposals (CfP) on 3DoF+ Visual. Subjective evaluations showed that adding the interactive motion parallax to 360-degree video will be possible. Based on the subjective and objective evaluation, a new project was launched, which will be named Metadata for Immersive Video. A first version of a Working Draft (WD) and corresponding Test Model (TM) were designed to combine technical aspects from multiple responses to the call. The current schedule for the project anticipates Final Draft International Standard (FDIS) in July 2020.

Research aspects: Subjective evaluations in the context of 3DoF+, but also immersive media services in general, are actively researched within the multimedia research community (e.g., ACM SIGMM/SIGCHI, QoMEX), resulting in a plethora of research papers. One apparent open issue is the gap between scientific/fundamental research and standards developing organizations (SDOs) and industry fora, which often address the same problem space but sometimes adopt different methodologies, approaches, tools, etc. However, MPEG (and also other SDOs) often organizes public workshops, and there will be one during the next meeting, on July 10, 2019 in Gothenburg, Sweden, about “Coding Technologies for Immersive Audio/Visual Experiences”. Further details are available here.

Neural Network Compression for Multimedia Applications

MPEG evaluates responses to the Call for Proposal and kicks off its technical work

Artificial neural networks have been adopted for a broad range of tasks in multimedia analysis and processing, such as visual and acoustic classification, extraction of multimedia descriptors, or image and video coding. The trained neural networks for these applications contain a large number of parameters (i.e., weights), resulting in a considerable size. Thus, transferring them to the many clients that use them in applications (e.g., mobile phones, smart cameras) requires a compressed representation of neural networks.

At its 126th meeting, MPEG analyzed nine technologies submitted by industry leaders as responses to the Call for Proposals (CfP) for Neural Network Compression. These technologies address the compression of neural network parameters in order to reduce their size for transmission and improve the efficiency of their use, while not or only moderately reducing their performance in specific multimedia applications.

After a formal evaluation of submissions, MPEG identified three main technology components in the compression pipeline, which will be further studied in the development of the standard. A key conclusion is that with the proposed technologies, a compression to 10% or less of the original size can be achieved with no or negligible performance loss, where this performance is measured as classification accuracy in image and audio classification, matching rate in visual descriptor matching, and PSNR reduction in image coding. Some of these technologies also result in the reduction of the computational complexity of using the neural network or can benefit from specific capabilities of the target hardware (e.g., support for fixed point operations).
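To make the size numbers above concrete, here is a minimal sketch of one basic building block that such compression pipelines typically include: uniform quantization of trained weights to fewer bits. This is an illustrative example, not one of the submitted MPEG technologies; 8-bit storage alone yields 25% of the float32 size, and entropy coding on top of it is what pushes results towards the 10% figure reported.

```python
import numpy as np

# Conceptual sketch (not an MPEG submission): uniform 8-bit quantization
# of neural-network weights with a per-tensor scale factor.

def quantize(weights, num_bits=8):
    """Map float32 weights to signed integers with a per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax    # per-tensor scale factor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from integers."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

q, scale = quantize(w)
w_hat = dequantize(q, scale)

# int8 storage is 25% of float32, before any entropy coding.
print("size ratio:", q.nbytes / w.nbytes)                    # 0.25
print("max abs error:", float(np.max(np.abs(w - w_hat))))
```

The quantization error per weight is bounded by half the scale step, which is why classification accuracy often survives such coarsening with little or no loss.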

Research aspects: This topic has been addressed already in previous articles here and here. An interesting observation after this meeting is that apparently the compression efficiency is remarkable, specifically as the performance loss is negligible for specific application domains. However, results are based on certain applications and, thus, general conclusions regarding the compression of neural networks as well as how to evaluate its performance are still subject to future work. Nevertheless, MPEG is certainly leading this activity which could become more and more important as more applications and services rely on AI-based techniques.

Low Complexity Enhancement Video Coding

MPEG evaluates responses to the Call for Proposal and selects a Test Model for further development

MPEG started a new work item referred to as Low Complexity Enhancement Video Coding (LCEVC), which will be added as part 2 of the MPEG-5 suite of codecs. The new standard is aimed at bridging the gap between two successive generations of codecs by providing a codec-agnostic extension to existing video codecs that improves coding efficiency and can be readily deployed via software upgrade, with sustainable power consumption.

The target is to achieve:

  • coding efficiency close to High Efficiency Video Coding (HEVC) Main 10 by leveraging Advanced Video Coding (AVC) Main Profile and
  • coding efficiency close to upcoming next generation video codecs by leveraging HEVC Main 10.

This coding efficiency should be achieved while maintaining overall encoding and decoding complexity lower than that of the leveraged codecs (i.e., AVC and HEVC, respectively) when used in isolation at full resolution. This target has been met, and one of the responses to the CfP will serve as starting point and test model for the standard. The new standard is expected to become part of the MPEG-5 suite of codecs and its development is expected to be completed in 2020.
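The layering idea behind such an enhancement codec can be sketched as follows. This is a conceptual illustration only, assuming a stand-in "base codec" (coarse quantization) rather than LCEVC's actual coding tools: a base layer runs at reduced resolution, and a lightweight enhancement layer carries the residual needed to restore full resolution.

```python
import numpy as np

# Conceptual sketch of base-plus-enhancement layering (not the actual
# LCEVC toolset): encode a downscaled base, upscale it, and send the
# full-resolution residual as the enhancement layer.

def downsample(img):
    """2x2 average pooling as a stand-in for the downscaler."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour upscaling as a stand-in for the upscaler."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fake_base_codec(img, step=8.0):
    """Stand-in for an existing codec (e.g. AVC): coarse quantization."""
    return np.round(img / step) * step

rng = np.random.default_rng(1)
frame = rng.uniform(0, 255, (64, 64))

base = fake_base_codec(downsample(frame))   # low-resolution base layer
prediction = upsample(base)                 # decoder's upscaled base
residual = frame - prediction               # enhancement-layer payload
reconstruction = prediction + residual      # decoder output

assert np.allclose(reconstruction, frame)   # lossless in this sketch
```

In practice the residual is itself compressed lossily, which is where the trade-off between enhancement bitrate and final quality arises; the appeal is that the enhancement processing is simple enough to run in software on top of an existing hardware decoder.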

Research aspects: In addition to VVC and EVC, LCEVC is now the third video coding project within MPEG addressing requirements and needs beyond HEVC. As usual, research mainly focuses on compression efficiency, but a general trend in video coding is observable that favors software-based solutions over pure hardware coding tools. As such, complexity at both encoder and decoder is becoming important, as is power efficiency; these are additional factors to be taken into account. Other issues are related to business aspects, which are typically discussed elsewhere, e.g., here.

Point Cloud Compression

MPEG promotes its Geometry-based Point Cloud Compression (G-PCC) technology to the Committee Draft (CD) stage

MPEG’s Geometry-based Point Cloud Compression (G-PCC) standard addresses lossless and lossy coding of time-varying 3D point clouds with associated attributes such as color and material properties. This technology is appropriate especially for sparse point clouds.

MPEG’s Video-based Point Cloud Compression (V-PCC) addresses the same problem but for dense point clouds, by projecting the (typically dense) 3D point clouds onto planes, and then processing the resulting sequences of 2D images with video compression techniques.

G-PCC provides a generalized approach, which directly codes the 3D geometry to exploit any redundancy found in the point cloud itself and is complementary to V-PCC and particularly useful for sparse point clouds representing large environments.

Point clouds are typically represented by extremely large amounts of data, which is a significant barrier for mass market applications. However, the relative ease of capturing and rendering spatial information, compared to other volumetric video representations, makes point clouds increasingly popular for presenting immersive volumetric data. The current implementation of a lossless, intra-frame G-PCC encoder provides a compression ratio of up to 10:1, and lossy coding with acceptable quality at ratios of up to 35:1.
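The core geometry-coding idea can be illustrated with a small octree sketch: the bounding cube is recursively split into eight octants, and each non-empty node is signalled with a single occupancy byte. This is only the skeleton of the approach, under the assumption of unit-cube input; the actual G-PCC standard adds prediction, attribute coding, and entropy coding on top.

```python
import numpy as np

# Conceptual sketch of octree occupancy coding, the basic idea behind
# geometry coding for sparse point clouds (heavily simplified vs. G-PCC).

def octree_occupancy(points, origin, size, depth):
    """Recursively emit one occupancy byte per non-empty octree node."""
    if depth == 0 or len(points) == 0:
        return []
    half = size / 2.0
    occupancy = 0
    children = []
    for i in range(8):
        # Bits of i select the x/y/z half of the current cube.
        offset = np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1]) * half
        lo, hi = origin + offset, origin + offset + half
        mask = np.all((points >= lo) & (points < hi), axis=1)
        child = points[mask]
        if len(child):
            occupancy |= 1 << i
            children.append((child, lo))
    stream = [occupancy]
    for child, lo in children:
        stream += octree_occupancy(child, lo, half, depth - 1)
    return stream

rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, (1000, 3))   # synthetic point cloud in unit cube
stream = octree_occupancy(pts, np.array([0.0, 0.0, 0.0]), 1.0, depth=4)

# One byte per occupied node, versus 12 bytes (3 x float32) per raw point.
print(len(stream), "occupancy bytes for", len(pts), "points")
```

The sparser the cloud, the fewer nodes are occupied and the shorter the occupancy stream, which matches the observation above that G-PCC is particularly effective for sparse point clouds representing large environments.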

Research aspects: After V-PCC, MPEG has now promoted G-PCC to CD but, in principle, the same research aspects as discussed here are relevant. Thus, coding efficiency is the number one performance metric, but coding complexity and power consumption also need to be considered to enable industry adoption. Systems technologies and adaptive streaming are actively researched within the multimedia research community, specifically at ACM MM and ACM MMSys.

MPEG Media Transport (MMT)

MPEG approves 3rd Edition of Final Draft International Standard

MMT 3rd edition will introduce two aspects:

  • enhancements for mobile environments and
  • support of Contents Delivery Networks (CDNs).

The support for multipath delivery will enable delivery of services over more than one network connection concurrently, which is specifically useful for mobile devices that can support more than one connection at a time.

Additionally, support for intelligent network entities involved in media services (i.e., Media Aware Network Entities (MANEs)) will allow MMT-based services to adapt to changes in the mobile network faster and better. Understanding that support for load balancing is an important feature of CDN-based content delivery, messages for DNS management, media resource update, and media request are being added in this edition.

Ongoing developments within MMT will add support for the usage of MMT over QUIC (Quick UDP Internet Connections) and support of FCAST in the context of MMT.

Research aspects: Multimedia delivery/transport is still an important issue, specifically as multimedia data on the internet is increasing much faster than network bandwidth. In particular, the multimedia research community (i.e., ACM MM and ACM MMSys) is looking into novel approaches and tools utilizing existing/emerging protocols/techniques like HTTP/2, HTTP/3 (QUIC), WebRTC, and Information-Centric Networking (ICN). One question, however, remains: what is the next big thing in multimedia delivery/transport? Tools like adaptive HTTP streaming (HAS) have certainly reached maturity, and the multimedia research community is eager to work on new topics in this domain.

MPEG Column: 125th MPEG Meeting in Marrakesh, Morocco

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The 125th MPEG meeting concluded on January 18, 2019 in Marrakesh, Morocco with the following topics:

  • Network-Based Media Processing (NBMP) – MPEG promotes NBMP to Committee Draft stage
  • 3DoF+ Visual – MPEG issues Call for Proposals on Immersive 3DoF+ Video Coding Technology
  • MPEG-5 Essential Video Coding (EVC) – MPEG starts work on MPEG-5 Essential Video Coding
  • ISOBMFF – MPEG issues Final Draft International Standard of Conformance and Reference software for formats based on the ISO Base Media File Format (ISOBMFF)
  • MPEG-21 User Description – MPEG finalizes 2nd edition of the MPEG-21 User Description

The corresponding press release of the 125th MPEG meeting can be found here. In this blog post I’d like to focus on those topics potentially relevant for over-the-top (OTT), namely NBMP, EVC, and ISOBMFF.

Network-Based Media Processing (NBMP)

The NBMP standard addresses the increasing complexity and sophistication of media services, specifically as the required media processing calls for offloading complex operations to the cloud/network to keep receiver hardware simple and power consumption low. Therefore, the NBMP standard provides a standardized framework that allows content and service providers to describe, deploy, and control media processing for their content in the cloud. It comes with two main functions: (i) an abstraction layer to be deployed on top of existing cloud platforms (plus support for 5G core and edge computing) and (ii) a workflow manager to enable the composition of multiple media processing tasks (i.e., to process incoming media and metadata from a media source and produce processed media streams and metadata that are ready for distribution to a media sink). The NBMP standard has now reached Committee Draft (CD) stage, and the final milestone is targeted for early 2020.

In particular, a standard like NBMP might become handy in the context of 5G in combination with mobile edge computing (MEC) which allows offloading certain tasks to a cloud environment in close proximity to the end user. For OTT, this could enable lower latency and more content being personalized towards the user’s context conditions and needs, hopefully leading to a better quality and user experience.
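The workflow-manager function described above boils down to composing media processing tasks into a source-to-sink pipeline. The following sketch illustrates only that idea; NBMP actually specifies JSON-based workflow and task descriptions, and all function and task names below are hypothetical, not part of the standard's API.

```python
# Illustrative sketch of the "compose media tasks in the network" idea
# behind NBMP's workflow manager. Task names (transcode, add_overlay)
# and the make_workflow helper are hypothetical, invented for this example.

from typing import Callable, List

Task = Callable[[bytes], bytes]

def make_workflow(tasks: List[Task]) -> Task:
    """Chain media processing tasks: source -> task1 -> ... -> sink."""
    def run(media: bytes) -> bytes:
        for task in tasks:
            media = task(media)
        return media
    return run

# Hypothetical tasks a workflow manager might deploy in the cloud/edge.
def transcode(media: bytes) -> bytes:
    return media.replace(b"raw", b"encoded")

def add_overlay(media: bytes) -> bytes:
    return media + b"+overlay"

workflow = make_workflow([transcode, add_overlay])
print(workflow(b"raw-frame"))   # b'encoded-frame+overlay'
```

In an actual deployment, each task would run as a service on a cloud or edge platform and the workflow manager would wire them together from their descriptions, rather than calling Python functions in-process.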

For further research aspects please see one of my previous posts.

MPEG-5 Essential Video Coding (EVC)

MPEG-5 EVC clearly targets the high demand for efficient and cost-effective video coding technologies. Therefore, MPEG commenced work on such a new video coding standard that should have two profiles: (i) royalty-free baseline profile and (ii) main profile, which adds a small number of additional tools, each of which is capable, on an individual basis, of being either cleanly switched off or else switched over to the corresponding baseline tool. Timely publication of licensing terms (if any) is obviously very important for the success of such a standard.

The target coding efficiency for responses to the call for proposals was to be at least as efficient as HEVC. This target was exceeded by approximately 24% and the development of the MPEG-5 EVC standard is expected to be completed in 2020.

As of today, there’s the need to support AVC, HEVC, VP9, and AV1; soon VVC will become important. In other words, we already have a multi-codec environment to support and one might argue one more codec is probably not a big issue. The main benefit of EVC will be a royalty-free baseline profile but with AV1 there’s already such a codec available and it will be interesting to see how the royalty-free baseline profile of EVC compares to AV1.

For a new video coding format we will witness a plethora of evaluations and comparisons with existing formats (i.e., AVC, HEVC, VP9, AV1, VVC). These evaluations will be mainly based on objective metrics such as PSNR, SSIM, and VMAF. It will be also interesting to see subjective evaluations, specifically targeting OTT use cases (e.g., live and on demand).
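Of the metrics just mentioned, PSNR is the simplest to state precisely; a minimal implementation for 8-bit images follows (SSIM and VMAF are more involved and ship in dedicated libraries). The sample values here are made up for illustration.

```python
import numpy as np

# PSNR for 8-bit images: 10 * log10(MAX^2 / MSE), with MAX = 255.

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 128, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 138                      # single pixel off by 10

print(round(psnr(ref, noisy), 2))      # 46.19
```

One reason subjective evaluations remain important is visible even in this toy case: PSNR weights every pixel error equally, whereas perceived quality depends heavily on where and how errors appear.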

ISO Base Media File Format (ISOBMFF)

The ISOBMFF (ISO/IEC 14496-12) is used as basis for many file (e.g., MP4) and streaming formats (e.g., DASH, CMAF) and as such received widespread adoption in both industry and academia. An overview of ISOBMFF is available here. The reference software is now available on GitHub and a plethora of conformance files are available here. In this context, the open source project GPAC is probably the most interesting aspect from a research point of view.

JPEG Column: 82nd JPEG Meeting in Lisbon, Portugal

The 82nd JPEG meeting was held in Lisbon, Portugal. Highlights of the meeting are progress on JPEG XL, JPEG XS, HTJ2K, JPEG Pleno, JPEG Systems and JPEG reference software.

JPEG has been the most common representation format of digital images for more than 25 years. Other image representation formats have been standardised by the JPEG committee, like JPEG 2000 or, more recently, JPEG XS. Furthermore, JPEG has been extended with new functionalities like HDR or alpha plane coding with the JPEG XT standard, and more recently with a reference software. Other solutions have also been proposed by different players, with limited success. The JPEG committee decided it is time to create a new work item, named JPEG XL, that aims to develop an image coding standard with increased quality and flexibility combined with better compression efficiency. The evaluation of the responses to the call for proposals has already confirmed industry interest, and the development of core experiments has now begun. Several functionalities will be considered, like support for lossless transcoding of images represented with the JPEG standard.

A 2nd workshop on media blockchain technologies was held in Lisbon, collocated with the JPEG meeting. Touradj Ebrahimi and Frederik Temmermans opened the workshop with presentations on relevant JPEG activities such as JPEG Privacy and Security. Thereafter, Zekeriya Erkin made a presentation on blockchain, distributed trust and privacy, and Carlos Serrão presented an overview of the ISO/TC 307 standardization work on blockchain and distributed ledger technologies. The workshop concluded with a panel discussion chaired by Fernando Pereira where the interoperability of blockchain and media technologies was discussed. A 3rd workshop is planned during the 83rd meeting to be held in Geneva, Switzerland on March 20th, 2019.

The 82nd JPEG meeting had the following highlights:

  • The new working item JPEG XL
  • JPEG Pleno
  • JPEG XS
  • HTJ2K
  • JPEG Systems – JUMBF & JPEG 360
  • JPEG reference software


The following summarizes various highlights during JPEG’s Lisbon meeting. As always, JPEG welcomes participation from industry and academia in all its standards activities.

JPEG XL

The JPEG Committee launched JPEG XL with the aim of developing a standard for image coding that offers substantially better compression efficiency when compared to existing image formats, along with features desirable for web distribution and efficient compression of high quality images. Subjective tests conducted by two independent research laboratories were presented at the 82nd meeting in Lisbon and indicate promising results that compare favorably with state of the art codecs.

Development software for the JPEG XL verification model is currently being implemented. A series of experiments has also been defined for improving the above model; these experiments address new functionalities such as lossless coding and progressive decoding.

JPEG Pleno

The JPEG Committee has three activities in JPEG Pleno: Light Field, Point Cloud, and Holographic image coding.

At the Lisbon meeting, Part 2 of JPEG Pleno Light Field was refined and a Committee Draft (CD) text was prepared. A new round of core experiments targets improved subaperture image prediction quality and scalability functionality.

JPEG Pleno Holography will host a workshop on March 19th, 2019 during the 83rd JPEG meeting in Geneva. The purpose of this workshop is to provide insights into the status of holographic applications such as holographic microscopy and tomography, displays and printing, and to assess their impact on the planned standardization specification. This workshop invites participation from both industry and academia experts. Information on the workshop can be found at https://jpeg.org/items/20190228_pleno_holography_workshop_geneva_announcement.html

JPEG XS

The JPEG Committee is pleased to announce a new milestone of the JPEG XS project, with the Profiles and Buffer Models (JPEG XS ISO/IEC 21122 Part 2) submitted to ISO for immediate publication as International Standard.

This project aims at standardization of a visually lossless low-latency and lightweight compression scheme that can be used as a mezzanine codec within any AV market. Among the targeted use cases are video transport over professional video links (SDI, IP, Ethernet), real-time video storage, memory buffers, omnidirectional video capture and rendering, and sensor compression (for example in cameras and in the automotive industry). The Core Coding System allows for visually lossless quality at moderate compression rates, scalable end-to-end latency ranging from less than a line to a few lines of the image, and low complexity real time implementations in ASIC, FPGA, CPU and GPU. The new part “Profiles and Buffer Models” defines different coding tools subsets addressing specific application fields and use cases. For more information, interested parties are invited to read the JPEG White paper on JPEG XS that has been recently published on the JPEG website (https://jpeg.org).

HTJ2K

The JPEG Committee continues its work on ISO/IEC 15444-15 High-Throughput JPEG 2000 (HTJ2K) with the development of conformance codestreams and reference software, improving interoperability and reducing obstacles to implementation.

The HTJ2K block coding algorithm has demonstrated an average tenfold increase in encoding and decoding throughput compared to the block coding algorithm currently defined by JPEG 2000 Part 1. This increase in throughput results in an average coding efficiency loss of 10% or less in comparison to the most efficient modes of the block coding algorithm in JPEG 2000 Part 1, and enables mathematically lossless transcoding to-and-from JPEG 2000 Part 1 codestreams.

JPEG Systems – JUMBF & JPEG 360

At the 82nd JPEG meeting, the DIS ballots were completed, comments were reviewed, and the standard progressed towards FDIS text for upcoming ballots on “JPEG Universal Metadata Box Format (JUMBF)” as ISO/IEC 19566-5, and “JPEG 360” as ISO/IEC 19566-6. Investigations continued on generalizing the framework to other applications relying on JPEG (ISO/IEC 10918 | ITU-T.81) and on JPEG Pleno Light Field.

JPEG reference software

With the JPEG Reference Software reaching FDIS stage, the JPEG Committee reaches an important milestone by extending its specifications with a new part containing reference software. With its FDIS release, two implementations will become official references to the most successful standard of the JPEG Committee: the fast and widely deployed libjpeg-turbo code, along with a complete implementation of JPEG coming from the Committee itself that also covers coding modes previously known only to a few experts.


Final Quote

“One of the strengths of the JPEG Committee has been in its ability to identify important trends in imaging technologies and their impact on products and services. I am delighted to see that this effort still continues and the Committee remains attentive to future.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG Committee nominally meets four times a year, in different world locations. The 82nd JPEG Meeting was held on 19-25 October 2018, in Lisbon, Portugal. The next 83rd JPEG Meeting will be held on 16-22 March 2019, in Geneva, Switzerland.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

Future JPEG meetings are planned as follows:

  • No 83, Geneva, Switzerland, March 16 to 22, 2019
  • No 84, Brussels, Belgium, July 13 to 19, 2019