The Software-Defined Networking (SDN) paradigm offers flexibility and programmability in the deployment and management of network services by separating the control plane from the data plane. Because it builds on network abstractions and virtualization techniques, SDN simplifies the implementation of traffic engineering techniques as well as the communication among different service providers, including Internet Service Providers (ISPs) and Over The Top (OTT) providers. For these reasons, SDN architectures have been widely used in recent years for the QoE-aware management of multimedia services.
The paper [1] presents Timber, an open-source SDN-based emulation platform that provides the research community with a tool for experimenting with new QoE management approaches and algorithms, which may also rely on information exchange between the ISP and the OTT. We believe that this exchange of information between the OTT and the ISP is extremely important because:
- QoE models depend on different influence factors, i.e., network, application, system and context factors [2];
- OTT and ISP have different information in their hands, i.e., network state and application Key Quality Indicators (KQIs), respectively;
- End-to-end encryption of OTT services makes it difficult for the ISP to access application KQIs and perform QoE-aware network management.
In the following we briefly describe Timber and the impact of collaborative QoE management.
Figure 1 represents the reference architecture, which is composed of four planes. The Service Management Plane is a cloud space owned by the OTT provider, which includes: a QoE Monitoring module to estimate the user’s QoE on the basis of service parameters acquired at the client side; a DB where QoE measurements are stored and can be shared with third parties; a Content Distribution service to deliver multimedia contents. Through the RESTful APIs, the OTTs give access to part of the information stored in the DB to the ISP, on the basis of appropriate agreements.
The Network Data Plane, Network Control Plane, and Network Management Plane are those in the hands of the ISP. The Network Data Plane includes all the SDN-enabled data forwarding network devices; the Network Control Plane consists of the SDN controller, which manages the network devices through Southbound APIs; and the Network Management Plane is the application layer of the SDN architecture, controlled by the ISP to perform network-wide control operations, which communicates with the OTT via RESTful APIs. The SDN application includes a QoS Monitoring module to monitor the performance of the network, a Management Policy module to take into account Service Level Agreements (SLAs), and a Control Actions module that decides on the network control actions to be implemented by the SDN controller to optimize the network resources and improve service quality.
Timber implements this architecture on top of the Mininet SDN emulator and the Ryu SDN controller, which provides the major traffic engineering functionalities. In the depicted scenario, the OTT has the potential to monitor the level of QoE for the provided services, as it has access to the needed application- and network-level KQIs. On the other hand, the ISP has the potential to control the network-level quality by changing the allocated resources. This scenario is implemented in Timber and allows for setting up the emulated network and application configuration needed to test QoE-aware service management algorithms.
Specifically, the OTT performs QoE monitoring of the delivered service by acquiring service information from the client side, based on passive measurements of service-related KQIs obtained through probes installed in the users' devices. Based on these measurements, specific QoE models can be used to predict the user experience. The QoE measurements of active clients' sessions are also stored in the OTT DB, which can be accessed by the ISP through the aforementioned RESTful APIs. The ISP's SDN application periodically checks the OTT-reported QoE and, in case of observed QoE degradations, implements network-wide policies by communicating with the SDN controller through the Northbound APIs. Accordingly, the SDN controller performs network management operations such as link aggregation, addition of new flows, and network slicing, by controlling the network devices through Southbound APIs.
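The ISP-side control loop described above can be sketched as follows. The report fields (`session_id`, `mos`) and the MOS threshold are illustrative assumptions for the example, not part of Timber's actual API:

```python
# Illustrative sketch of the ISP's SDN application polling OTT-reported
# QoE and reacting to degradations. The report fields and the threshold
# are assumptions for the example, not Timber's real interface.

MOS_THRESHOLD = 3.5  # assumed minimum acceptable predicted MOS

def control_step(fetch_qoe, apply_slicing):
    """One polling iteration: read QoE reports from the OTT DB (e.g. via
    a RESTful GET) and ask the SDN controller, through the Northbound
    API, to enforce the network slice for degraded sessions."""
    reports = fetch_qoe()
    degraded = [r["session_id"] for r in reports if r["mos"] < MOS_THRESHOLD]
    if degraded:
        apply_slicing(degraded)
    return len(degraded)
```

In a real deployment `fetch_qoe` would wrap the RESTful call to the OTT DB and `apply_slicing` would translate into controller actions such as the network slicing described below; here they are injected so the loop's logic is self-contained.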
QoE management based on information exchange: video service use-case
The previously described scenario, implemented by Timber, portrays a collaborative scenario between the ISP and the OTT, where the OTT provides QoE-related data and the ISP takes care of controlling the resources allocated to the deployed services. Ahmad et al. [3] make use of Timber to conduct experiments investigating how the frequency of information exchange between an OTT providing a video streaming service and the ISP affects the end-user QoE.
Figure 2 shows the experiment topology. Mininet in Timber is used to create the network topology, which in this case concerns the streaming of video sequences from the media server to User1 (U1) while web traffic is also transmitted on the same network towards User2 (U2). U1 and U2 are two virtual hosts sharing the same access network and acting as the clients. U1 runs the client-side video player, and the Apache server provides both the web and HAS (HTTP Adaptive Streaming) video services.
In the considered collaboration scenario, QoE-related KQIs are extracted from the client side and sent to the MongoDB database (managed by the OTT), as depicted by the red dashed arrows. This information is then retrieved by the SDN controller of the ISP at frequency f (see the green dashed arrow). The aim is to provide different network-level resources to video streaming and normal web traffic when QoE degradation is observed for the video service. These control actions on the network are needed because TCP-based web traffic sessions of 4 Mbps start randomly towards U2 during the HD video streaming sessions, causing time-varying bottlenecks in the S1–S2 link. In these cases, the SDN controller implements virtual network slicing at the S1 and S2 OVS switches, which provides a minimum guaranteed throughput of 2.5 Mbps and 1 Mbps to video streaming and web traffic, respectively. The SDN controller application utilizes flow matching criteria to assign flows to the virtual slice. The objective of these emulations is to show the impact of f on the resulting QoE.
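On Open vSwitch, a guaranteed-throughput slice like the one above is typically realised with an HTB QoS and per-queue minimum rates. The sketch below only builds the corresponding `ovs-vsctl` command; the port name is a placeholder, and in a live Mininet/OVS setup the command would actually be executed on the switch:

```python
# Build (but do not execute) the ovs-vsctl command that would attach an
# HTB QoS with two minimum-rate queues to a switch port: queue 0 for the
# video slice (2.5 Mbps guaranteed) and queue 1 for web traffic (1 Mbps).
# The port name "s1-eth2" is a placeholder for the bottleneck interface.

def slice_command(port="s1-eth2", video_min_bps=2_500_000, web_min_bps=1_000_000):
    return (
        "ovs-vsctl -- set port {p} qos=@q"
        " -- --id=@q create qos type=linux-htb queues:0=@video,1=@web"
        " -- --id=@video create queue other-config:min-rate={v}"
        " -- --id=@web create queue other-config:min-rate={w}"
    ).format(p=port, v=video_min_bps, w=web_min_bps)
```

Flows matched as video traffic would then be steered to queue 0 (e.g. via an OpenFlow set-queue action), which is how flow matching criteria map flows to the virtual slice.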
The 60-second Big Buck Bunny video sequence in 1280 × 720 was streamed from the server to U1, considering 5 different sampling intervals T for the information exchange between OTT and ISP, i.e., 2s, 4s, 8s, 16s, and 32s. The information exchanged consisted of the average stalling duration and the number of stalling events measured by the probe at the client video player. Accordingly, the QoE for the video streaming service was measured in terms of predicted MOS using the QoE model for HTTP video streaming defined in [4], as follows:
MOSp = α · exp(−β(L) · N) + γ
where L and N are the average stalling duration and the number of stalling events, respectively, whereas α = 3.5, γ = 1.5, and β(L) = 0.15L + 0.19.
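The model is straightforward to evaluate; a direct transcription with the parameters above:

```python
# Predicted MOS for HTTP video streaming as a function of the average
# stalling duration L (seconds) and the number of stalling events N,
# using the parameters reported in the text.
import math

ALPHA, GAMMA = 3.5, 1.5

def beta(L):
    return 0.15 * L + 0.19

def mos_p(L, N):
    return ALPHA * math.exp(-beta(L) * N) + GAMMA

# Sanity check: with no stalling (N = 0) the model gives the maximum
# score, mos_p(L, 0) = 3.5 + 1.5 = 5.0, and the predicted MOS decreases
# as stalling events accumulate or grow longer.
```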
Figure 3.a shows the average predicted MOS when information is exchanged at different sampling intervals (the inverse of f). The greatest MOSp is 4.34, obtained for T = 2s and T = 4s. An exponential decay in MOSp is observed as the frequency of information exchange decreases. The lowest MOSp is 3.07, obtained for T = 32s. This result shows that a greater frequency of information exchange leads to lower latency in the controller's response to QoE degradation. The reason is that, for higher T, the buffer at the client player starves for longer durations, resulting in longer stalling until the SDN controller is triggered to provide the guaranteed network resources to support the video streaming service.
Figure 3.b shows the video initial loading time, the average stalling duration, and the latency in the controller's response to quality degradation w.r.t. different sampling intervals. The latency in the controller's response to QoE degradation increases linearly as the frequency of information exchange decreases, while the stalling duration grows exponentially as the frequency decreases. The initial loading time does not appear to be significantly affected by the different sampling intervals.
Experiments were conducted in an SDN emulation environment to investigate the impact of the frequency of information exchange between OTT and ISP when a collaborative network management approach is considered. The QoE for a video streaming service was measured considering 5 different sampling intervals for the information exchange between OTT and ISP, i.e., 2s, 4s, 8s, 16s, and 32s. The information exchanged consisted of the average stalling duration of the video and the number of stalling events.
The experiment results showed that a higher frequency of information exchange results in greater delivered QoE, but a sampling interval lower than 4s (frequency > 1/4 Hz) may not further improve the delivered QoE. Clearly, this threshold depends on the variability of the network conditions. Further studies are needed to understand how frequently the ISP and OTT should collaboratively share data to obtain observable QoE benefits under varying network status and deployed services.
[1] A. Ahmad, A. Floris and L. Atzori, "Timber: An SDN based emulation platform for QoE Management Experimental Research," 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX), Cagliari, Italy, 2018, pp. 1-6.
[2] P. Le Callet, S. Möller, A. Perkis et al., "Qualinet White Paper on Definitions of Quality of Experience (2012)," European Network on Quality of Experience in Multimedia Systems and Services (COST Action IC 1003), Lausanne, Switzerland, Version 1.2, March 2013.
[3] A. Ahmad, A. Floris and L. Atzori, "Towards Information-centric Collaborative QoE Management using SDN," 2019 IEEE Wireless Communications and Networking Conference (WCNC), Marrakesh, Morocco, 2019, pp. 1-6.
[4] T. Hoßfeld, C. Moldovan and C. Schwartz, "To each according to his needs: Dimensioning video buffer for specific user profiles and behavior," IFIP/IEEE International Symposium on Integrated Network Management (IM), 2015, pp. 1249-1254.
In this note I provide an update on some recent SIGMM funding initiatives we are putting in place in 2020. These come about based on feedback from you, our members, on what you believe to be important and what you would like your SIGMM Executive Committee to work on. The specific topics covered here are the new SIGMM Test of Time Paper Award, the various projects funded as a result of our call for special-initiative funding applications (some of which provide further support for student travel), and the SIGMM sponsorship of conference fee reduction.
The SIGMM Test of Time Paper Award
A new award has just been formally approved by the ACM Awards Committee called the SIGMM Test of Time Paper Award with details available here. To have an award formally approved by ACM the proposal has to be approved by a SIG Executive Committee, then approved by ACM headquarters, then approved by the ACM SIG Governing Board and then approved by the ACM Awards Committee. This ensures that ACM-approved awards are highly prestigious and rigorous in the way they select their winners.
SIGMM has been operational for 26 years and in that time has sponsored or co-sponsored more than 100 conferences and workshops, which have collectively published more than 15,574 individual papers. 5,742 of those papers were published 10 or more years ago, and the SIGMM Executive believes it is time to recognise the most significant and impactful among them.
The new award will be presented every year, starting this year, to the authors of a paper published 10, 11 or 12 years previously at a SIGMM-sponsored or co-sponsored conference. Thus the 2020 award will be for papers presented at a 2008, 2009 or 2010 SIGMM conference or workshop and will recognise the paper that has had the most impact and influence on the field of Multimedia in terms of research, development, product or ideas. The paper may include theoretical advances, techniques and/or software tools that have been widely used, and/or innovative applications that have had impact on multimedia computing.
The award-winning paper will be selected by a 5-person selection committee consisting of 2 members of the organising committee for the MULTIMEDIA Conference in that year plus 3 established and respected members of our community who have no conflict of interest with the nominated papers. The nominated papers are those top-ranked based on citation count from the ACM Digital Library, though the selection committee can add others if they wish.
Faced with the issue of recognising papers published prior to the 10-to-12-year window of consideration, in this inaugural year, when we announce the winner from 2008/2009/2010, we will also announce a set of up to 14 papers published at SIGMM conferences prior to 2008 as "honourable mentions": papers that would have been strong candidates in their year of publication, had an award existed for that year. The first SIGMM MULTIMEDIA Conference was held in 1993 but was not sponsored by SIGMM, as SIGMM was formed only in 1994, so these up to 14 honourable mentions will cover the years 1994 to 2007 inclusive.
Selecting these papers from among all these candidates will be a challenging task for the selection committee and we wish them well in their deliberations and look forward to the award announcements at the MULTIMEDIA Conference in Seattle later this year.
SIGMM Funding for Special Initiatives 2020
For the last three years in a row, the SIGMM Executive committee has issued an invitation for applications for funding for new initiatives, which are submitted by SIGMM members. The assessment criteria for these initiatives were that they focus on one, or more, of the following:
– building on SIGMM’s excellence and strengths;
– nurturing new talent in the SIGMM community;
– addressing weakness(es) in the SIGMM community and in SIGMM activities.
In late 2019 we issued our third call for funding and we received our strongest yet response from the SIGMM community. Submissions were evaluated and assessed by the SIGMM Executive and discussed at an Executive Committee meeting and in this short note I outline the funding awards which were made.
Before looking at the awards, it is worth reminding the reader that, starting this year, SIGMM is centralising our support for student travel to our SIGMM-supported events, namely ICMR (in Dublin), MMSys (in Istanbul), IMX (in Barcelona), IH&MMSec (in Denver), MULTIMEDIA (in Seattle) and MM Asia (in Singapore). As part of this scheme, any student member of SIGMM is eligible to apply; however, students who are the first author of an accepted paper are particularly encouraged. The value of the award will depend on the travel distance, with up to US$2000 for long-haul travel and up to US$1000 for short-haul travel, defined based on the location of the conference. Details of this scheme and the link for submitting applications have already started to appear on the websites of some of these conferences.
With the SIGMM scheme supporting travel for student authors as a priority, some of these conferences applied for and have been approved for further funding to support other conference attendees and the IMX Conference in Barcelona, in June 2020 was awarded travel support for under-represented minorities while the MMSys conference in Istanbul in June 2020 was awarded travel support for non-student minorities. In both these cases the conferences themselves will administer selection and awarding of the funding. Student travel support was also awarded to the African Winter School in Multimedia, in Stellenbosch, South Africa in July 2020, an event which SIGMM also sponsors.
A number of other events which are not sponsored by SIGMM but which are closely related to our area also applied for funding to support student travel and the following have also been awarded funding for supporting student travel:
– the Adaptive Streaming Summer School, in Klagenfurt, Austria, July;
– the Content Based Multimedia Information (CBMI) Conference, in Lille, France, September;
– the International Conference on Quality of Multimedia Experience (QoMEX), in Athlone, Ireland, May, for female and under-represented minority students;
– the MediaEval Benchmarking Initiative for Multimedia Evaluation, workshop, late 2020.
All this funding, both the centralised scheme and the special awards above, will help many students to travel to events in multimedia during 2020. In addition to travel support, SIGMM will fund a number of events at some of our conferences. These include a women and diversity lunch at CBMI in Lille, a diversity lunch and childcare support at the Information Hiding and Multimedia Security Workshop (IH&MMSec) in Denver, childcare support and a diversity and inclusion panel discussion at IMX, a multimedia evaluation methodology workshop at the MediaEval workshop, and childcare support and an N2Women meeting at MMSys.
We are also delighted to announce that SIGMM will support some other activities besides travel and events, one of which is the cost of software development and presentation for Conflow at ACM Multimedia in Seattle. Conflow, and its predecessor ConfLab, is a unique initiative from Hayley Hung and colleagues at TU Delft which encourages people with similar or complementary research interests to find each other at a conference and ultimately helps them connect with potential collaborators. It does this by instrumenting a physical space at an event with environmental sensors and distributing wearable sensors to participants who sign up and agree to have data about their interactions with others captured, anonymised and used as a dataset for analysis. A pilot version called ConfLab ran at ACM Multimedia in Nice in 2019 with several dozen participants, built around the notion of meeting the conference Chairs, and this will be extended in 2020.
The final element of the SIGMM funding awarded recently was to the ICMR conference in Dublin in June, which will be the testbed for calculating a conference's carbon footprint. ACM already has some initiatives in this area based on estimating the CO2e cost of attendees' air travel to/from the venue, and there are software tools to help with this. The SIGMM funding will cover this plus estimating the CO2e costs of local transport, food, accommodation, and more, and it will also raise awareness of individual carbon footprints among delegates. This will be done for ICMR in a way that allows the calculation process to be made available for other events.
SIGMM Sponsorship of Conference Fee Reduction
The third initiative which SIGMM is starting to sponsor in 2020 is a reduction in the registration fees for SIGMM-sponsored conferences, meaning ICMR, MMSys, IMX, IH&MMSec, MULTIMEDIA and MM Asia. This has been a particular bugbear for many of us, so it is good to be able to do something about it.
Starting in 2020, SIGMM will sponsor US$100 toward conference registration fees for SIGMM members only, for early-bird conference registrations. This will apply to students and non-students, and to ACM members and non-members. The conference registration choices may look a bit complicated, but basically: if you are an ACM member you get a certain reduction, if you are a SIGMM member you also get a reduction (from SIGMM), and if you are a student you get a further reduction. The US$100 reduction in the conference fee for being a SIGMM member is far more than the cost of joining SIGMM ($20, or $15 for a student), so it makes sense to join SIGMM and get the conference fee reduction; your SIGMM membership is also important to us.
The SIGMM Executive Committee believe this fee sponsorship is an appropriate way of giving back to the SIGMM community. Beyond 2020 we have not made a decision on sponsoring conference fee reductions, we will see how it works out in 2020 before deciding.
I’d also like to add one final note about attending our conferences and workshops. We have a commitment to addressing diversity in our 25 in 25 strategy and we also have “access all areas” policy for our conferences. This means that a single registration fee allows access to all events and activities at our conferences … lunches, refreshments, dinners, etc., all bundled into one fee. We also support those with special needs such as accessibility or dietary requirements and when these are brought to our attention, typically when an attendee registers, then we can put in place whatever support mechanisms are needed to maximise that attendee’s conference experience. Our events strive to be harassment-free and pleasant conference experiences for all participants. We do not tolerate harassment of conference attendees and that means all our attendees, speakers and organizers are bound by ACM’s Policy Against Harassment. Participants are asked to confirm their commitment to upholding the policy when registering.
Finally, thank you for your support of SIGMM and our events. If there is one thing you can do to help us to help you, it is joining SIGMM, not just for the reduced conference registration fee but to show your support for what we do. With a fixed rate of $20 or $15 for a student you’ll find details on the SIGMM Membership tab at http://sigmm.org/
The 86th JPEG meeting was held in Sydney, Australia.
Among the different activities that took place, the JPEG Committee issued a Call for Evidence on learning-based image coding solutions. This call results from the success of the explorations studies recently carried out by the JPEG Committee, and honours the pioneering work of JPEG issuing the first image coding standard more than 25 years ago.
In addition, a First Call for Evidence on Point Cloud Coding was issued in the framework of JPEG Pleno. Furthermore, an updated version of the JPEG Pleno reference software and a JPEG XL open source implementation have been released, while JPEG XS continues the development of raw-Bayer image sensor compression.
The 86th JPEG meeting had the following highlights:
- JPEG AI issues a call for evidence on machine learning based image coding solutions
- JPEG Pleno issues call for evidence on Point Cloud coding
- JPEG XL verification tests reveal competitive performance with commonly used image coding solutions
- JPEG Systems submitted final texts for Privacy & Security
- JPEG XS announces new coding tools optimised for compression of raw-Bayer image sensor data
The JPEG Committee launched a learning-based image coding activity, also referred to as JPEG AI, more than a year ago. This activity aims to find evidence for image coding technologies that offer substantially better compression efficiency, compared to conventional approaches, by relying on models that exploit a large image database.
A Call for Evidence (CfE) has been issued as an outcome of the 86th JPEG meeting in Sydney, Australia, as a first formal step to consider standardisation of such approaches in image compression. The CfE is organised in coordination with the IEEE MMSP 2020 Grand Challenge on Learning-based Image Coding and will use the same content, evaluation methodologies and deadlines.
JPEG Pleno is working toward the integration of various modalities of plenoptic content under a single framework and in a seamless manner. Efficient and powerful point cloud representation is a key feature within this vision. Point cloud data supports a wide range of applications including computer-aided manufacturing, entertainment, cultural heritage preservation, scientific research and advanced sensing and analysis. During the 86th JPEG Meeting, the JPEG Committee released a First Call for Evidence on JPEG Pleno Point Cloud Coding to be integrated in the JPEG Pleno framework. This Call for Evidence focuses specifically on point cloud coding solutions that support scalability and random access of decoded point clouds.
Furthermore, a Reference Software implementation of the JPEG Pleno file format (Part 1) and light field coding technology (Part 2) is made publicly available as open source on the JPEG Gitlab repository (https://gitlab.com/wg1). The JPEG Pleno Reference Software is planned to become an International Standard as Part 4 of JPEG Pleno by the end of 2020.
The JPEG XL Image Coding System (ISO/IEC 18181) has produced an open source reference implementation available on the JPEG Gitlab repository (https://gitlab.com/wg1/jpeg-xl). The software is available under Apache 2, which includes a royalty-free patent grant. Speed tests indicate that the multithreaded encoder and decoder outperform libjpeg-turbo.
Independent subjective and objective evaluation experiments have indicated competitive performance with commonly used image coding solutions while offering new functionalities such as lossless transcoding from legacy JPEG format to JPEG XL. The standardisation process has reached the Draft International Standard stage.
JPEG exploration into Media Blockchain
Fake news, copyright violations, media forensics, privacy and security are emerging challenges in digital media. JPEG has determined that blockchain and distributed ledger technologies (DLT) have great potential as a technology component to address these challenges in transparent and trustable media transactions. However, blockchain and DLT need to be integrated efficiently with a widely adopted standard to ensure broad interoperability of protected images. Therefore, the JPEG committee has organised several workshops to engage with the industry and help to identify use cases and requirements that will drive the standardisation process.
During its Sydney meeting, the committee organised an Open Discussion Session on Media Blockchain and invited local stakeholders to take part in an interactive discussion. The discussion focused on media blockchain and related application areas including, media and document provenance, smart contracts, governance, legal understanding and privacy. The presentations of this session are available on the JPEG website. To keep informed and to get involved in this activity, interested parties are invited to register to the ad hoc group’s mailing list.
JPEG Systems & Integration submitted final texts for ISO/IEC 19566-4 (Privacy & Security), ISO/IEC 24800-2 (JPSearch), and ISO/IEC 15444-16 2nd edition (JPEG 2000-in-HEIF) for publication. Amendments to add new capabilities for JUMBF and JPEG 360 reached Committee Draft stage and will be reviewed and balloted by national bodies.
The JPEG Privacy & Security release is timely, as consumers are increasingly aware of and concerned about the need to protect privacy in imaging applications. JPEG 2000-in-HEIF enables embedding JPEG 2000 images in the HEIF file format. The updated JUMBF provides a more generic means to embed images and other media within JPEG files to enable richer image experiences. The updated JPEG 360 adds stereoscopic 360 images and a method to accelerate the rendering of a region of interest within an image, in order to reduce the latency experienced by users. The JPEG Systems & Integration JLINK activity, which elaborates the relationships among the media embedded within a file, produced updated use cases to refine the requirements and continued technical discussions on implementation.
The JPEG committee is pleased to announce the specification of new coding tools optimised for compression of raw-Bayer image sensor data. The JPEG XS project aims at the standardisation of a visually lossless, low-latency and lightweight compression scheme that can be used as a mezzanine codec in various markets. Video transport over professional video links, real-time video storage in and outside of cameras, and data compression onboard autonomous cars are among the targeted use cases for raw-Bayer image sensor compression. The amendment of the Core Coding System, together with new profiles targeting raw-Bayer image applications, is ongoing and expected to be published by the end of 2020.
“The efforts to find new and improved solutions in image compression have led JPEG to explore new opportunities relying on machine learning for coding. After rigorous analysis in form of explorations during the last 12 months, JPEG believes that it is time to formally initiate a standardisation process, and consequently, has issued a call for evidence for image compression based on machine learning.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.
The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JPEG, JPEG 2000, JPEG XR, JPSearch, JPEG XT and more recently, the JPEG XS, JPEG Systems, JPEG Pleno and JPEG XL families of imaging standards.
More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (email@example.com) of the JPEG Communication Subgroup. If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.
Future JPEG meetings are planned as follows:
- No 87, Erlangen, Germany, April 25 to 30, 2020 (Cancelled because of Covid-19 outbreak; Replaced by online meetings.)
- No 88, Geneva, Switzerland, July 4 to 10, 2020
The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.
The 129th MPEG meeting concluded on January 17, 2020 in Brussels, Belgium with the following topics:
- Coded representation of immersive media – WG11 promotes Network-Based Media Processing (NBMP) to the final stage
- Coded representation of immersive media – Publication of the Technical Report on Architectures for Immersive Media
- Genomic information representation – WG11 receives answers to the joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5
- Open font format – WG11 promotes Amendment of Open Font Format to the final stage
- High efficiency coding and media delivery in heterogeneous environments – WG11 progresses Baseline Profile for MPEG-H 3D Audio
- Multimedia content description interface – Conformance and Reference Software for Compact Descriptors for Video Analysis promoted to the final stage
Additional Important Activities at the 129th WG 11 (MPEG) meeting
The 129th WG 11 (MPEG) meeting was attended by more than 500 experts from 25 countries working on important activities including (i) a scene description for MPEG media, (ii) the integration of Video-based Point Cloud Compression (V-PCC) and Immersive Video (MIV), (iii) Video Coding for Machines (VCM), and (iv) a draft call for proposals for MPEG-I Audio among others.
The corresponding press release of the 129th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/129. This report focused on network-based media processing (NBMP), architectures of immersive media, compact descriptors for video analysis (CDVA), and an update about adaptive streaming formats (i.e., DASH and CMAF).
Coded representation of immersive media – WG11 promotes Network-Based Media Processing (NBMP) to the final stage
At its 129th meeting, MPEG promoted ISO/IEC 23090-8, Network-Based Media Processing (NBMP), to Final Draft International Standard (FDIS). The FDIS stage is the final vote before a document is officially adopted as an International Standard (IS). During the FDIS ballot, national bodies are only allowed to place a Yes/No vote and can no longer request technical changes. However, project editors are able to fix typos and make other necessary editorial improvements.
What is NBMP? The NBMP standard defines a framework that allows content and service providers to describe, deploy, and control media processing for their content in the cloud by using libraries of pre-built 3rd party functions. The framework includes an abstraction layer to be deployed on top of existing commercial cloud platforms and is designed to be able to be integrated with 5G core and edge computing. The NBMP workflow manager is another essential part of the framework enabling the composition of multiple media processing tasks to process incoming media and metadata from a media source and to produce processed media streams and metadata that are ready for distribution to media sinks.
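The workflow-manager idea described above can be sketched in a few lines of Python. This is a purely conceptual illustration: the function names, the chunk representation, and the composition mechanism are hypothetical stand-ins, not the normative NBMP API.

```python
# Conceptual sketch (NOT the normative NBMP API): a workflow manager
# composes independent media processing tasks into a pipeline that
# consumes media plus metadata from a source and emits processed
# media to a sink.
from typing import Callable, Dict, List

MediaChunk = Dict  # hypothetical stand-in for media payload + metadata

def make_workflow(tasks: List[Callable[[MediaChunk], MediaChunk]]):
    """Compose tasks in order, as an NBMP workflow manager would."""
    def run(chunk: MediaChunk) -> MediaChunk:
        for task in tasks:
            chunk = task(chunk)
        return chunk
    return run

# Two illustrative "pre-built functions" from a function repository.
def transcode(chunk: MediaChunk) -> MediaChunk:
    out = dict(chunk)
    out["codec"] = "hevc"          # pretend re-encode to HEVC
    return out

def add_overlay(chunk: MediaChunk) -> MediaChunk:
    out = dict(chunk)
    out.setdefault("overlays", []).append("logo")
    return out

workflow = make_workflow([transcode, add_overlay])
result = workflow({"codec": "avc", "payload": b"..."})
```

The point of the sketch is only the composition step: the real standard additionally specifies how such functions are described, discovered, and deployed on cloud infrastructure.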
Why NBMP? With the increasing complexity and sophistication of media services and the media processing they require, offloading complex media processing operations to the cloud/network is becoming critically important in order to keep receiver hardware simple and power consumption low.
Research aspects: NBMP reminds me a bit about what has been done in the past in MPEG-21, specifically Digital Item Adaptation (DIA) and Digital Item Processing (DIP). The main difference is that MPEG now targets APIs rather than pure metadata formats, which is a step forward in the right direction as APIs can be implemented and used right away. NBMP will be particularly interesting in the context of new networking approaches including, but not limited to, software-defined networking (SDN), information-centric networking (ICN), mobile edge computing (MEC), fog computing, and related aspects in the context of 5G.
Coded representation of immersive media – Publication of the Technical Report on Architectures for Immersive Media
At its 129th meeting, WG11 (MPEG) published an updated version of its technical report on architectures for immersive media. This technical report, which is the first part of the ISO/IEC 23090 (MPEG-I) suite of standards, introduces the different phases of MPEG-I standardization and gives an overview of the parts of the MPEG-I suite. It also documents use cases and defines architectural views on the compression and coded representation of elements of immersive experiences. Furthermore, it describes the coded representation of immersive media and the delivery of a full, individualized immersive media experience. MPEG-I enables scalable and efficient individual delivery as well as mass distribution while adjusting to the rendering capabilities of consumption devices. Finally, this technical report breaks down the elements that contribute to a fully immersive media experience and assigns quality requirements as well as quality and design objectives for those elements.
Research aspects: This technical report provides a kind of reference architecture for immersive media, which may help identify research areas and research questions to be addressed in this context.
Multimedia content description interface – Conformance and Reference Software for Compact Descriptors for Video Analysis promoted to the final stage
Managing and organizing the quickly increasing volume of video content is a challenge for many industry sectors, such as media and entertainment or surveillance. One example task is scalable instance search, i.e., finding content containing a specific object instance or location in a very large video database. This requires video descriptors that can be efficiently extracted, stored, and matched. Standardization enables extracting interoperable descriptors on different devices and using software from different providers so that only the compact descriptors instead of the much larger source videos can be exchanged for matching or querying. ISO/IEC 15938-15:2019 – the MPEG Compact Descriptors for Video Analysis (CDVA) standard – defines such descriptors. CDVA includes highly efficient descriptor components using features resulting from a Deep Neural Network (DNN) and uses predictive coding over video segments. The standard is being adopted by the industry. At its 129th meeting, WG11 (MPEG) has finalized the conformance guidelines and reference software. The software provides the functionality to extract, match, and index CDVA descriptors. For easy deployment, the reference software is also provided as Docker containers.
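To illustrate why exchanging compact descriptors instead of source videos works, here is a minimal, hypothetical sketch in Python. The short fixed-length vectors and plain cosine matching are assumptions for illustration only; the normative CDVA descriptors and matching pipeline are far more sophisticated.

```python
# Conceptual sketch of descriptor-based instance search (NOT the
# normative CDVA bitstream or matching procedure): compact descriptors
# are extracted once per video segment, stored in a database, and
# matched by similarity, so only the small descriptors need to be
# exchanged for querying.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query, database, threshold=0.9):
    """Return ids of stored segments whose descriptor matches the query."""
    return [vid for vid, desc in database.items()
            if cosine(query, desc) >= threshold]

# Hypothetical pre-extracted descriptors for two indexed clips.
db = {"clip_a": [0.9, 0.1, 0.0], "clip_b": [0.0, 1.0, 0.0]}
matches = search([1.0, 0.0, 0.1], db)   # query descriptor
```

Interoperability is exactly what the standard adds on top of this idea: any compliant extractor produces descriptors that any compliant matcher can consume.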
Research aspects: The availability of reference software helps to conduct reproducible research (i.e., reference software is typically publicly available for free) and the Docker container even further contributes to this aspect.
DASH and CMAF
The 4th edition of DASH has already been published and is available as ISO/IEC 23009-1:2019. Similar to previous iterations, MPEG’s goal was to make the newest edition of DASH publicly available for free, with the goal of industry-wide adoption and adaptation. During the most recent MPEG meeting, we worked towards implementing the first amendment which will include additional (i) CMAF support and (ii) event processing models with minor updates; these amendments are currently in draft and will be finalized at the 130th MPEG meeting in Alpbach, Austria. An overview of all DASH standards and updates is depicted in the figure below:
ISO/IEC 23009-8 or “session-based DASH operations” is the newest variation of MPEG-DASH. The goal of this part of DASH is to allow customization during certain times of a DASH session while maintaining the underlying media presentation description (MPD) for all other sessions. Thus, MPDs should be cacheable within content distribution networks (CDNs) while additional information should be customizable on a per session basis within a newly added session-based description (SBD). It is understood that the SBD should have an efficient representation to avoid file size issues and it should not duplicate information typically found in the MPD.
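The intended split between a CDN-cacheable MPD and small per-session data can be sketched as follows. The template placeholders and the dictionary standing in for the SBD are assumptions for illustration; the normative SBD format is not reproduced here.

```python
# Illustration (assumed names, NOT the normative SBD schema): one MPD
# template is cached in the CDN for all clients, while a compact
# per-session description supplies only the values that differ
# between sessions, avoiding duplication of MPD information.
from string import Template

CACHED_MPD = Template(
    '<MPD><BaseURL>https://cdn.example.com/$cdn_path/</BaseURL>'
    '<!-- session token: $session_token --></MPD>'
)

def apply_session(sbd: dict) -> str:
    """Merge the per-session description into the cached MPD template."""
    return CACHED_MPD.substitute(sbd)

# Hypothetical per-session description delivered alongside the MPD.
mpd_for_user = apply_session(
    {"cdn_path": "edge-eu-1", "session_token": "abc123"}
)
```

The design pay-off is that the large MPD stays byte-identical and cacheable, while the per-session part remains a few bytes.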
The 2nd edition of the CMAF standard (ISO/IEC 23000-19) will be available soon (currently under FDIS ballot) and MPEG is currently reviewing additional tools in the so-called ‘technologies under considerations’ document. Therefore, amendments were drafted for additional HEVC media profiles and exploration activities on the storage and archiving of CMAF contents.
The next meeting will bring MPEG back to Austria (for the 4th time) and will be hosted in Alpbach, Tyrol. For more information about the upcoming 130th MPEG meeting click here.
Click here for more information about MPEG meetings and their developments
Can the multimedia community contribute to a better Quality of Life? Delivering a higher resolution and distortion-free media stream so you can enjoy the latest movie on Netflix or YouTube may provide instantaneous satisfaction, but does it make your long-term life better? Whilst the QoMEX conference series has traditionally considered the former, in more recent years and with a view to QoMEX 2020, research works that consider the latter are also welcome. In this context, rather than looking at what we do, reflecting on how we do it could offer opportunities for sustained rather than instantaneous impact in fields such as health, inclusive of assistive technologies (AT) and digital heritage among many others.
In this article, we ask if the concepts from the Quality of Experience (QoE) framework model can be applied, adapted and reimagined to inform and develop tools and systems that enhance our Quality of Life. The World Health Organisation (WHO) definition of health states that “[h]ealth is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity”. This is a definition that is well-aligned with the familiar yet ill-defined term, Quality of Life (QoL). Whilst QoL requires further work towards a concrete definition, the definition of QoE has been developed through work by the QUALINET EU COST Network. Using multimedia quality as a use case, a white paper resulted from this effort that describes the human, context, service and system factors that influence the quality of experience for multimedia systems.
The QoE formation process has been mapped to a conceptual model allowing systems and services to be evaluated and improved. Such a model has been developed and used in predicting QoE. Adapting and applying the methods to health-related QoL will allow predictive models for QoL to be developed.
In this context, the best paper award winner at QoMEX in 2017  proposed such a mapping for QoL in stroke prevention, care and rehabilitation (Fig. 1) along with examining practical challenges for modeling and applications. The process of identifying and categorizing factors and features was illustrated using stroke patient treatment as an example use case and this work has continued through the European Union Horizon 2020 research project PRECISE4Q . For medical practitioners, a QoL framework can assist in the development of decision support systems solutions, patient monitoring, and imaging systems.
At more of a “systems” level in e-health applications, the WHO defines assistive devices and technologies as “those whose primary purpose is to maintain or improve an individual’s functioning and independence to facilitate participation and to enhance overall well-being”. A proposed application of immersive technologies as an assistive technology (AT) training solution applied QoE as a mechanism to evaluate the usability and utility of the system. The assessment of immersive AT used a number of physiological signals: EEG, galvanic skin response/electrodermal activity (GSR/EDA), body surface temperature, accelerometry, heart rate (HR), and blood volume pulse (BVP). These allow objective analysis while the individual operates the wheelchair simulator. Performing such evaluations in an ecologically valid manner is a challenging task. However, the QoE framework provides a concrete mechanism to consider the human, context and system factors that influence the usability and utility of such a training simulator. In particular, the use of implicit and objective metrics can complement qualitative approaches to evaluations.
In the same vein, another work presented at QoMEX 2017 employed the use of Augmented Reality (AR) and Virtual Reality (VR) as a clinical aid for the diagnosis of speech and language difficulties, specifically aphasia (see Fig. 2). It is estimated that speech or language difficulties affect more than 12% of people internationally. Individuals who suffer a stroke or traumatic brain injury (TBI) often experience symptoms of aphasia as a result of damage to the left frontal lobe. Anomic aphasia is a mild form of aphasia in which patients experience word retrieval problems and semantic memory difficulties. Opportunities exist to digitalize well-accepted clinical approaches that can be augmented through QoE-based objective and implicit metrics. Understanding the user via advanced processing techniques is an area in dire need of further research, with significant opportunities to understand the user at the cognitive, interaction and performance levels, moving far beyond the binary pass/fail of traditional approaches.
Moving beyond health, the QoE concept can also be extended to other areas such as digital heritage. Organizations such as broadcasters and national archives that collect media recordings are digitizing their material because the analog storage media degrade over time. Archivists, restoration experts, content creators, and consumers are all stakeholders, but they have different perspectives when it comes to their expectations and needs. Hence their QoE for archive material can be very different, as discussed at QoMEX 2019. Viewing the quality of media archives through a QoE lens aids in understanding the issues and priorities of the stakeholders. Applying the QoE framework to explore the different stakeholders and the influencing factors that affect their QoE perceptions over time allows different kinds of models for QoE to be developed and used across the stages of the archived material lifecycle, from digitization through restoration and consumption.
The QoE framework’s simple yet comprehensive conceptual model for the quality formation process has had a major impact on multimedia quality. The examples presented here highlight how it can be used as a blueprint in other domains and to reconcile different perspectives and attitudes to quality. With an eye on the next and future editions of QoMEX, will we see other use cases and applications of QoE to domains and concepts beyond multimedia quality evaluations? The QoMEX conference series has evolved and adapted based on emerging application domains, industry engagement, and approaches to quality evaluations. It is clear that the scope of QoE research has broadened significantly over the last 11 years. Please take a look at the conference topics and special sessions that the organizing team for QoMEX 2020 in Athlone, Ireland hope will broaden the range of use cases that apply QoE towards QoL and other application domains in a spirit of inclusivity and diversity.
P. Le Callet, S. Möller, and A. Perkis, eds., “Qualinet White Paper on Definitions of Quality of Experience,” European Network on Quality of Experience in Multimedia Systems and Services (COST Action IC 1003), Lausanne, Switzerland, Version 1.2, March 2013.
World Health Organization, “Preamble to the Constitution of the World Health Organization,” 1946. [Online]. Available: http://apps.who.int/gb/bd/PDF/bd47/EN/constitution-en.pdf. [Accessed: 21-Jan-2020].
 A. Hines and J. D. Kelleher, “A framework for post-stroke quality of life prediction using structured prediction,” 9th International Conference on Quality of Multimedia Experience, QoMEX 2017, Erfurt, Germany, June 2017.
 D. Pereira Salgado, F. Roque Martins, T. Braga Rodrigues, C. Keighrey, R. Flynn, E. L. Martins Naves, and N. Murray, “A QoE assessment method based on EDA, heart rate and EEG of a virtual reality assistive technology system”, In Proceedings of the 9th ACM Multimedia Systems Conference (Demo Paper), pp. 517-520, 2018.
 C. Keighrey, R. Flynn, S. Murray, and N. Murray, “A QoE Evaluation of Immersive Augmented and Virtual Reality Speech & Language Assessment Applications”, 9th International Conference on Quality of Multimedia Experience, QoMEX 2017, Erfurt, Germany, June 2017.
 “Scope of Practice in Speech-Language Pathology,” 2016. [Online]. Available: http://www.asha.org/uploadedFiles/SP2016-00343.pdf. [Accessed: 21-Jan-2020].
 J. Reilly, “Semantic Memory and Language Processing in Aphasia and Dementia,” Seminars in Speech and Language, vol. 29, no. 1, pp. 3-4, 2008.
 A. Ragano, E. Benetos, and A. Hines, “Adapting the Quality of Experience Framework for Audio Archive Evaluation,” Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 2019.
The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.
The 128th MPEG meeting concluded on October 11, 2019 in Geneva, Switzerland with the following topics:
- Low Complexity Enhancement Video Coding (LCEVC) Promoted to Committee Draft
- 2nd Edition of Omnidirectional Media Format (OMAF) has reached the first milestone
- Genomic Information Representation – Part 4 Reference Software and Part 5 Conformance Promoted to Draft International Standard
The corresponding press release of the 128th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/128. In this report we will focus on video coding aspects (i.e., LCEVC) and immersive media applications (i.e., OMAF). At the end, we will provide an update related to adaptive streaming (i.e., DASH and CMAF).
Low Complexity Enhancement Video Coding
Low Complexity Enhancement Video Coding (LCEVC) has been promoted to committee draft (CD) which is the first milestone in the ISO/IEC standardization process. LCEVC is part two of MPEG-5 or ISO/IEC 23094-2 if you prefer the always easy-to-remember ISO codes. We introduced MPEG-5 already in previous posts and LCEVC is about a standardized video coding solution that leverages other video codecs in a manner that improves video compression efficiency while maintaining or lowering the overall encoding and decoding complexity.
The LCEVC standard uses a lightweight video codec to add up to two layers of encoded residuals. The aim of these layers is correcting artefacts produced by the base video codec and adding detail and sharpness for the final output video.
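The layering principle can be sketched with a toy 1-D signal. This illustrative Python models only the base-plus-residual idea; the downsampling, upsampling, and coding here are naive stand-ins and none of the actual LCEVC tools.

```python
# Conceptual sketch of LCEVC-style layering (illustrative only, NOT
# the normative toolset): a base codec operates at reduced resolution
# and an enhancement layer carries the residual that restores detail
# and sharpness in the final output.

def downsample(signal):
    """Stand-in for the downscale feeding the base-layer codec."""
    return signal[::2]

def upsample(signal):
    """Naive nearest-neighbour upscale back to full resolution."""
    out = []
    for s in signal:
        out.extend([s, s])
    return out

def encode(source):
    base = downsample(source)                 # handled by the base codec
    predicted = upsample(base)
    residual = [s - p for s, p in zip(source, predicted)]
    return base, residual                     # base layer + enhancement

def decode(base, residual):
    predicted = upsample(base)
    return [p + r for p, r in zip(predicted, residual)]

src = [10, 12, 20, 21, 30, 33, 40, 44]
base, residual = encode(src)
rec = decode(base, residual)                  # residual restores detail
```

In the toy model the residual makes reconstruction exact; in practice both layers are themselves lossily coded, trading residual size against quality.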
This standard targets software or hardware codecs with extra processing capabilities, e.g., mobile devices, set-top boxes (STBs), and personal-computer-based decoders. Additional benefits are a reduction in implementation complexity or a corresponding increase in spatial resolution.
LCEVC is based on existing codecs which allows for backwards-compatibility with existing deployments. Supporting LCEVC enables “softwareized” video coding allowing for release and deployment options known from software-based solutions which are well understood by software companies and, thus, opens new opportunities in improving and optimizing video-based services and applications.
Research aspects: in video coding, research efforts are mainly related to coding efficiency and complexity (as usual). However, as MPEG-5 basically adds a software layer on top of what is typically implemented in hardware, all kind of aspects related to software engineering could become an active area of research.
Omnidirectional Media Format
The scope of the Omnidirectional Media Format (OMAF) is about 360° video, images, audio and associated timed text and specifies (i) a coordinate system, (ii) projection and rectangular region-wise packing methods, (iii) storage of omnidirectional media and the associated metadata using ISOBMFF, (iv) encapsulation, signaling and streaming of omnidirectional media in DASH and MMT, and (v) media profiles and presentation profiles.
At this meeting, the second edition of OMAF (ISO/IEC 23090-2) has been promoted to committee draft (CD) which includes
- support of improved overlay of graphics or textual data on top of video,
- efficient signaling of videos structured in multiple sub parts,
- enabling more than one viewpoint, and
- new profiles supporting dynamic bitstream generation according to the viewport.
As with the first edition, OMAF includes encapsulation and signaling in ISOBMFF as well as streaming of omnidirectional media (DASH and MMT). It will reach its final milestone by the end of 2020.
360° video is certainly a vital use case towards a fully immersive media experience. Devices to capture and consume such content are becoming increasingly available and will probably contribute to the dissemination of this type of content. However, it is also understood that the complexity increases significantly, specifically with respect to large-scale, scalable deployments due to increased content volume/complexity, timing constraints (latency), and quality of experience issues.
Research aspects: understanding the increased complexity of 360° video, or immersive media in general, is certainly an important aspect to be addressed towards enabling applications and services in this domain. We may even start thinking that 360° video actually works (e.g., it’s possible to capture it, upload it to YouTube, and consume it on many devices), but the devil is in the details when it comes to handling this complexity efficiently to enable a seamless and high quality of experience.
DASH and CMAF
The 4th edition of DASH (ISO/IEC 23009-1) will be published soon and MPEG is currently working towards a first amendment which will be about (i) CMAF support and (ii) event processing model. An overview of all DASH standards is depicted in the figure below, notably part one of MPEG-DASH referred to as media presentation description and segment formats.
The 2nd edition of the CMAF standard (ISO/IEC 23000-19) will become available very soon and MPEG is currently reviewing additional tools in the so-called technologies under considerations document as well as conducting various explorations. A working draft for additional media profiles is also under preparation.
Research aspects: with CMAF, low-latency support is added to DASH-like applications and services. However, the implementation specifics are actually not defined in the standard and are subject to competition (e.g., here). Interestingly, the Bitmovin video developer reports from both 2018 and 2019 highlight the need for low-latency solutions in this domain.
At the ACM Multimedia Conference 2019 in Nice, France I gave a tutorial entitled “A Journey towards Fully Immersive Media Access” which includes updates related to DASH and CMAF. The slides are available here.
Finally, let me try giving an outlook for 2020, not so much content-wise but events planned for 2020 that are highly relevant for this column:
- MPEG129, Jan 13-17, 2020, Brussels, Belgium
- DCC 2020, Mar 24-27, 2020, Snowbird, UT, USA
- MPEG130, Apr 20-24, 2020, Alpbach, Austria
- NAB 2020, Apr 18-22, 2020, Las Vegas, NV, USA
- ICASSP 2020, May 4-8, 2020, Barcelona, Spain
- QoMEX 2020, May 26-28, 2020, Athlone, Ireland
- MMSys 2020, Jun 8-11, 2020, Istanbul, Turkey
- IMX 2020, June 17-19, 2020, Barcelona, Spain
- MPEG131, Jun 29 – Jul 3, 2020, Geneva, Switzerland
- NetSoft QoE Mgmt Workshop, Jun 29 – Jul 3, 2020, Ghent, Belgium
- ICME 2020, Jul 6-10, 2020, London, UK
- ATHENA summer school, Jul 13-17, 2020, Klagenfurt, Austria
- … and many more!
The 85th JPEG meeting was held in San Jose, CA, USA.
The meeting was distinguished by the Prime Time Engineering Emmy Award from the Academy of Television Arts & Sciences (ATAS) for the longevity of the first JPEG standard. Furthermore, a very successful workshop on JPEG emerging technologies was held at Microsoft premises in Silicon Valley, with broad participation from several companies working in imaging technologies. This workshop ended with the celebration of two JPEG committee experts, Thomas Richter and Ogawa Shigetaka, recognized with ISO outstanding contribution awards for the key roles they played in the development of the JPEG XT standard.
The 85th JPEG meeting continued laying the groundwork for the continuous development of JPEG standards and exploration studies. In particular, the committee progressed the new image coding standard JPEG XL and the low-latency, low-complexity standard JPEG XS, released the JPEG Systems interoperable 360 image standard, and continued its exploration studies on image compression using machine learning and on the use of blockchain and distributed ledger technologies for media applications.
The 85th JPEG meeting had the following highlights:
- Prime Time Engineering Emmy award,
- JPEG Emerging Technologies Workshop,
- JPEG XL progresses towards a final specification,
- JPEG AI evaluates machine learning based coding solutions,
- JPEG exploration on Media Blockchain,
- JPEG Systems interoperable 360 image standards released,
- JPEG XS announces significant improvements of Bayer image sensor data compression.
Prime Time Engineering Emmy
The JPEG committee is honored to be the recipient of a prestigious Prime Time Engineering Award in 2019 by the US Academy of Television Arts & Sciences at the 71st Engineering Emmy Awards ceremony on the 23rd of October 2019 in Los Angeles, CA, USA. The first JPEG standard is known as a popular format in digital photography, used by hundreds of millions of users everywhere, in a wide range of applications including the world wide web, social media, photographic apparatus and smart cameras. The first part of the standard was published in 1992 and has grown to seven parts, with the latest, defining the reference software, published in 2019. This is a unique example of longevity in the fast moving information technologies and the Emmy award acknowledges this longevity and continuing influence over nearly three decades.
This is a well-deserved recognition not only for the Joint Photographic Experts Group committee members who started this standard under the auspices of ITU, ISO, IEC but also to all experts in the JPEG committee who continued to extend and maintain it, hence guaranteeing such a longevity.
According to Prof. Touradj Ebrahimi, Convenor of JPEG standardization committee, the longevity of JPEG is based on three very important factors: “The credibility by being developed under the auspices of three important standardization bodies, namely ITU, ISO and IEC, development by explicitly taking into account end users, and the choice of being royalty free”. Furthermore, “JPEG defined not only a great technology but also it was a committee that first defined how standardization should take place in order to become successful”.
JPEG Emerging Technologies Workshop
At the 85th JPEG meeting in San Jose, CA, USA, JPEG organized the “JPEG Emerging Technologies Workshop” on the 5th of November 2019 to inform industry and academia active in the wider field of multimedia and in particular in imaging, about current JPEG Committee standardization activities and exploration studies. Leading JPEG experts shared highlights about some of the emerging JPEG technologies that could shape the future of imaging and multimedia, with the following program:
- Welcome and Introduction (Touradj Ebrahimi);
- JPEG XS – Lightweight compression; Transparent quality. (Antonin Descampe);
- JPEG Pleno (Peter Schelkens);
- JPEG XL – Next-generation Image Compression (Jan Wassenberg and Jon Sneyers);
- High-Throughput JPEG 2000 – Big improvement to JPEG 2000 (Pierre-Anthony Lemieux);
- JPEG Systems – The framework for future and legacy standards (Andy Kuzma);
- JPEG Privacy and Security and Exploration on Media Blockchain Standardization Needs (Frederik Temmermans);
- JPEG AI: Learning to Compress (João Ascenso)
This very successful workshop ended with a panel moderated by Fernando Pereira where different relevant media technology issues were discussed with a vibrant participation of the attendees.
Proceedings of the JPEG Emerging Technologies Workshop are available for download via the following link: https://jpeg.org/items/20191108_jpeg_emerging_technologies_workshop_proceedings.html
JPEG XL progresses towards a final specification
The JPEG XL Image Coding System (ISO/IEC 18181) continues its progression towards a final specification. The Committee Draft of JPEG XL is being refined based on feedback received from experts from ISO/IEC national bodies. Experiments indicate the main two JPEG XL modes compare favorably with specialized responsive and lossless modes, enabling a simpler specification.
The JPEG committee has approved open-sourcing the JPEG XL software. JPEG XL will advance to the Draft International Standard stage in 2020-01.
JPEG AI evaluates machine learning based coding solutions
JPEG AI carried out rigorous subjective and objective evaluations of a number of promising learning-based image coding solutions from the state of the art, which show the potential of these codecs for different rate-quality tradeoffs in comparison to widely used anchors. Moreover, a wide set of objective metrics was evaluated for several types of image coding solutions.
JPEG exploration on Media Blockchain
Fake news, copyright violations, media forensics, privacy and security are emerging challenges in digital media. JPEG has determined that blockchain and distributed ledger technologies (DLT) have great potential as a technology component to address these challenges in transparent and trustable media transactions. However, blockchain and DLT need to be integrated closely with a widely adopted standard to ensure broad interoperability of protected images. Therefore, the JPEG committee has organized several workshops to engage with the industry and help to identify use cases and requirements that will drive the standardization process. During the San Jose meeting, the committee drafted a first version of the use cases and requirements document. On the 21st of January 2020, during its 86th JPEG Meeting to be held in Sydney, Australia, JPEG plans to organize an interactive discussion session with stakeholders. Practical and registration information is available on the JPEG website. To keep informed and to get involved in this activity, interested parties are invited to register to the ad hoc group’s mailing list. (http://jpeg-blockchain-list.jpeg.org).
JPEG Systems interoperable 360 image standards released
The ISO/IEC 19566-5 JUMBF and ISO/IEC 19566-6 JPEG 360 were published in July 2019. These two standards work together to define basics for interoperability and lay the groundwork for future capabilities for richer interactions with still images as we add functionality to JUMBF (Part 5), Privacy & Security (Part 4), JPEG 360 (Part 6), and JLINK (Part 7).
JPEG XS announces significant improvements of Bayer image sensor data compression
JPEG XS aims at the standardization of a visually lossless, low-latency and lightweight compression scheme that can be used as a mezzanine codec in various markets. Work was done at the last meeting to enable JPEG XS for use in Bayer image sensor compression. Among the targeted use cases for Bayer image sensor compression, one can cite video transport over professional video links, real-time video storage in and outside of cameras, and data compression onboard autonomous cars. The JPEG Committee also announces the final publication of JPEG XS Part-3 “Transport and Container Formats” as an International Standard. This part enables storage of JPEG XS images in various formats. In addition, an effort to specify an RTP payload for JPEG XS is in its final stages; this will enable transport of JPEG XS within the SMPTE ST 2110 framework.
“The 2019 Prime Time Engineering Award by the Academy is a well-deserved recognition for the Joint Photographic Experts Group members who initiated standardization of the first JPEG standard and to all experts of the JPEG committee who since then have extended and maintained it, guaranteeing its longevity. JPEG defined not only a great technology but also it was the first committee that defined how standardization should take place in order to become successful” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.
The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JPEG, JPEG 2000, JPEG XR, JPSearch, JPEG XT and more recently, the JPEG XS, JPEG Systems, JPEG Pleno and JPEG XL families of imaging standards.
The JPEG Committee nominally meets four times a year, in different world locations. The 84th JPEG Meeting was held on 13-19 July 2019, in Brussels, Belgium. The next 86th JPEG Meeting will be held on 18-24 January 2020, in Sydney, Australia.
More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (firstname.lastname@example.org) of the JPEG Communication Subgroup.
If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.
Future JPEG meetings are planned as follows:
- No 86, Sydney, Australia, January 18 to 24, 2020
- No 87, Erlangen, Germany, April 25 to 30, 2020
“What does history mean to computer scientists?” – that was the first question that popped up in my mind when I was about to attend the ACM Heritage Workshop in Minneapolis a few months back. And needless to say, the follow-up question was “what does history mean for a multimedia systems researcher?” As a young graduate student, I had the joy of my life when my first research paper on multimedia authoring (a hot topic those days) was accepted for presentation at the first ACM Multimedia in 1993, and that conference was held alongside SIGGRAPH. Thinking about that, it gives multimedia systems researchers about 25 to 30 years of history. But what a flow of topics this area has seen: from authoring to streaming to content-based retrieval to social media and human-centered multimedia, the research area has been as hot as ever. So, is it the history of research topics or the researchers or both? Then, how about the venues hosting these conferences, the networking events, or the grueling TPC meetings that prepped the conference actions?
With only questions and no clear answers, I decided to attend the workshop with an open mind. Most SIGs (Special Interest Groups) in ACM were represented at the workshop, which was organized by the ACM History Committee. I learned that this committee, apart from the workshop, organizes several efforts to track, record, and preserve computing efforts across disciplines. This includes identifying distinguished persons (retired, but who made significant contributions to computing), coming up with a customized questionnaire for each person, training the interviewer, recording the conversations, and curating and archiving them and providing them for public consumption. Most SIGs' own efforts were website-based: they talked about how they try to preserve conference materials such as paper proceedings (from the era when only paper proceedings were published), meeting notes, pictures, and videos. For instance, some SIGs described how they tracked and preserved ACM's approval letter for the SIG!
It was very interesting – and touching – to see some attendees (senior professors) coming to the workshop with boxes of materials – papers, reports, books, etc. They were either downsizing their offices or clearing out, and did not feel like throwing the material into recycling bins! These materials were given to ACM and the Babbage Institute (at the University of Minnesota, Minneapolis) for possible curation and storage.
ACM History Committee members talked about how they can fund (at a small level) projects that target specific activities for preserving and archiving computing events and materials. The committee agreed that ACM should take more responsibility in providing technical support for web hosting – though, obviously, it is not yet clear whether anything tangible will result.
Over the two days of the workshop, I began getting answers to my questions. History can mean pictures and videos taken at earlier MM conferences, TPC meetings, and SIGMM-sponsored events and retreats. Perhaps the earlier paper proceedings that contain additional information beyond what is found in the corresponding ACM Digital Library version. Interviews with the different research leaders who built and promoted SIGMM.
It was clear that history means different things to different SIGs, and as the SIGMM community, we will have to arrive at our own interpretation, then collect and preserve accordingly. And that made me understand the most obvious and perhaps most important thing: today's events become tomorrow's history! No brainer, right? Preserving today's SIGMM events will give future generations a richer, more colorful, and more complete SIGMM history!
For the curious ones:
ACM Heritage Workshop website is at: https://acmsigheritage.dash.umn.edu
Some of the workshop presentation materials are available at: https://acmsigheritage.dash.umn.edu/uncategorized/class-material-posted/
The annual ACM Multimedia conference was held in Nice, France, during October 21st to 25th, 2019. Being the 27th in the series, it attracted approximately 800 participants from all over the world. Among them were the student volunteers who supported the smooth organization of the conference. In this article, I would like to introduce the reports and comments provided by each of them.
Reports from student volunteers
Hui Chen (Tsinghua University, China)
It was such an honor for me to be granted the student travel funding. During my stay in Nice, as a Ph.D. researcher, I read a lot of nice academic works which inspired me a lot. And I had wonderful conversations with authors from all over the world. Meanwhile, as a session volunteer, I was glad to help speakers and the audience during sessions. Their nice works and warm smiles impressed me a lot. What I valued most was the friendship with the other volunteers. We often discussed the attractive places and the delicious food in Nice, and cared for each other along the journey. I am deeply thankful for this wonderful experience in Nice. Some advice: (1) I think the beret was not necessary for the volunteers; the majority of us seemed to dislike it, because I did not see many volunteers wearing it. (2) Notifications about room changes for sessions should be made clear early. (3) The importance of being punctual could be emphasized at the ice-breaker meeting. (4) Reminders of volunteered sessions could be shown in the Whova app.
Shizhe Chen (Renmin University of China, China)
It was a great pleasure to attend ACM Multimedia this year. I have attended MM twice and the organization is getting better and better. One big change was the deployment of the Whova app, which really improved our experience at MM. On the one hand, it made connections among different attendees and organizers more convenient and efficient. On the other hand, it was nice to share photos of the conference in the app. The volunteers were very devoted to serving the conference and uploaded many good pictures. The conference banquet in Nice was also much improved. I really enjoyed the local food and magic shows. Even though there were so many people that night, the organization was very orderly and everyone was satisfied. I also liked the wonderful multimedia modern art pieces exhibited at the conference. The conference session I enjoyed most was the Multimedia Grand Challenge, which provided a great opportunity for us academics to get involved in real-life problems from industry. It would have been better if there were more off-line opportunities to communicate with industry people at the conference. In summary, thanks for all the effort the organizers have put into the conference. I am also proud to have been able to contribute a little as a volunteer this time.
Yang Chen (University of Science and Technology of China, China)
This was my first time attending an international conference, and I served as a session volunteer during it. It was also my first time abroad, so I felt a little nervous before leaving for the conference. Fortunately, everything went smoothly in the end. The MM conference has been held for many years, so the organizers have rich experience, and the scale is also large. The conference provided a lot of convenience for the participants: all conference schedules could be found at the venue, so attendees could easily find the sessions they needed to participate in or were interested in. In addition, this year the MM conference had many local characteristics of Nice, France. All attendees were given the famous local soap of Nice, and the French food provided at the venue was also very delicious. All in all, it was a very impressive MM conference experience.
Amanda Duarte (Universitat Politècnica de Catalunya, Spain)
ACM Multimedia 2019 was a different and great experience for me. This was the first time I attended this conference, and it was very different from what I am used to finding at a big conference. For the past four years I have been going to conferences focused on Computer Vision and Machine Learning, which nowadays have a large number of attendees, accepted papers, and parallel sessions, and all the stress of being in a large venue and needing to find the sessions that interest you across large rooms full of people.
ACM Multimedia, on the other hand, was held in a smaller venue with fewer attendees, yet with a very large number of high-quality researchers. Thus, I had the chance to talk more with great researchers who work in the areas I am interested in and who were also interested in my work. In addition to my great experience during the conference in general, I greatly enjoyed participating in the Doctoral Symposium. This event gave me the opportunity to present my work to great researchers who work on topics related to my doctoral thesis and who were able to give me great feedback and suggestions on how to improve my research.
Gelli Francesco (National University of Singapore, Singapore)
Although I am still a student, this edition of ACM Multimedia was my third. As on previous occasions, I met with the now more familiar community and divided my time between attending sessions, walking around the posters, and rehearsing my presentation. My observation is that this year there was a major focus on applications rather than on technical aspects. For example, the Best Paper session included works on zooming audio together with video, multi-modal dialogue systems, and privacy. The Brave New Ideas session, in which I presented, saw some more unusual and daring applications, such as the automatic creation of a sequence of images to match a short story. I had a great time presenting my paper on ranking images by subjective attributes, as I did my best to engage the audience with multiple questions. I learned from the senior organizers that their goal is to push the Multimedia community towards applications such as wellness and human-machine interaction, which naturally involve multimedia data. It was also inspiring to see so many engaged volunteers all dressed in blue running around with that very traditional beret. I am definitely looking forward to attending the next edition.
Trung-Hiếu Hoàng (University of Science, Vietnam National University Ho Chi Minh City, Vietnam)
I am excited to share my experience at ACMMM 2019, as a person who received the student travel grant. Living in Vietnam, I could hardly believe that I had such a great opportunity to travel thousands of kilometers and attend one of the top conferences in the world. On the first day, I met a lot of friends who received the same travel grant as me. We hung out together sharing different stories and experiences; all of us were enthusiastic and couldn't wait to become part of the volunteer team and contribute to the success of this year's conference. Over the last two years, I have developed a strong interest in medical image processing; specifically, my research focuses on abnormality detection in endoscopic images. Attending ACMMM 2019 gave me a wonderful chance to present my work and discuss it with experts in this field. I enjoyed the Healthcare Multimedia workshop, where I met the organizers of the BioMedia Grand Challenge track. I loved talking with them and discussing the future and their interests. In conclusion, I am so glad that the student grant brought me to Europe for the first time, opened up my mind, and showed me wonderful things that I had never seen before.
Chia-Wei Hsieh (National Chiao Tung University, Taiwan)
I attended ACM Multimedia 2019 in Nice, France, and listened to new AI approaches presented by experts and scholars from various countries. At this conference, I got the chance to learn about the latest results from world-renowned universities and research institutions, and about the latest developments in industry. These state-of-the-art tools broadened my view and made me realize the shortcomings that can be addressed in our future research. Furthermore, I appreciated serving as a volunteer at the conference. This pushed me to interact with people, and I made many good friends from all over the world. Attending MM'19 was really great, but a fly in the ointment was that attendance on the last two days was pretty low. With some special benefits to encourage people to stay, there could be more academic exchanges at the conference.
Michael Kerr (RMIT University, Australia)
I came to the conference this year hoping to learn about some very specific research being presented in my own field of employment, video surveillance. My expectations around these presentations were well met, but I also took away new insights into other areas that were previously not of great interest to me, mainly because I had not explored their application to my own field.
I particularly enjoyed the Tutorials on Multimedia Forensics and was interested to see the work done in areas that had been developed in recent years. I was very engaged by the application of CNN to solve forensic challenges and quickly found that the application of these systems was a major theme in the entire conference. So, whilst I enjoyed many of the practical applications such as the Tutorials, the System Demonstrations, and the Open Source Software Competition, I also learnt a great deal about the growth of CNN technologies within the multimedia discipline as a whole. This has had a positive effect by helping to develop my own research plans and in particular enabling the identification of new applications that may be of interest to those working in multimedia as well as my specific field of interest.
Saurabh Kumar (Indian Institute of Technology Bombay, India)
I had an enjoyable experience at ACM Multimedia and learned a lot, as this was my first big international conference. The papers covered diverse applications, and it was great talking to the speakers after the talks and at the posters. This allowed me to meet many amazing people from various backgrounds and talk about the exciting research they are doing. It was easy to approach anyone at the conference for casual or technical discussions. These days conferences are recorded and the proceedings are put up online, but that is just the tip of the iceberg. Attending a conference is a much broader experience, and I got an opportunity to experience this thanks to this travel grant. I made friends from many countries, thanks to the friendly atmosphere, and learned how my research fits in. I would like to highlight that being a volunteer was the primary reason all of this was possible. As a volunteer, it was so much easier to talk to people, and it was great helping them around. I would love to come and help out again anytime. The conference was just perfect, and I will remember my experience as a volunteer, which made it way more fun, and especially the people I interacted with. I am certainly submitting to the next MM and coming back again with more exciting research and to meet this fantastic community. Also, visiting Nice was a delight: it is a magnificent city, and the food was delicious.
Yadan Luo (University of Queensland, Australia)
It was a great experience attending ACM Multimedia 2019 in Nice this October, where I met many brilliant people working in the same field. The Invited Talks offered impressive ideas, inspiring visions of the future, and excellent coverage of many areas, like preserving audiovisual archives and data protection law. The most impressive part of the conference was the Art Exhibition, which showed the great power of installation art and interactive multimedia. Moreover, this great meeting brought me a lot of precious opportunities to meet researchers working in other subfields like video streaming, domain adaptation, and image generation. Chatting with them helped me quickly pick up plenty of new knowledge and opened a door to other research directions. In conclusion, I would like to sincerely express my thanks to the people who prepared the conference; I benefited a lot from this fantastic event.
Kwanyong Park (Korea Advanced Institute of Science and Technology, Korea)
ACM Multimedia 2019 was particularly special to me in terms of my own growth. Honestly speaking, the paper I presented at ACM Multimedia 2019 is my first international research accomplishment, so I really lacked experience and skills in presenting my work and communicating with other researchers. But after ACM Multimedia 2019, I am confident that I can keep doing better and better. The combination of Oral and Poster sessions was really impressive and effective for absorbing a lot of information in a short time. Every paper had at least a two-minute oral presentation, from which I could catch the core concept; based on that, I could easily decide whether a paper was closely related to my interests. I agree that this kind of configuration is a really efficient one. Through the conference, I saw which topics the students, who have a mostly academic perspective, are focusing on. Although this was a great stimulus to me, I think the practical perspective of various companies is also important for broadening one's horizons. However, research from companies was relatively hard to find at ACM Multimedia 2019. I think that having some interactive booths from companies would be helpful.
K. R. Prajawal (International Institute of Information Technology, India)
ACM Multimedia was not only my first top-tier conference, but my first conference overall. I was pleased to see a lot of interesting and impactful papers from people with various backgrounds and from various universities. I particularly liked the conference venue as well, as it was spacious and comfortable, encouraging healthy discussion. I personally feel the food and meals could have been better curated. For example, I am a vegetarian; I understand I have few items to eat, but the vegetarian items were not clearly labeled. This can be rectified in future editions of the conference. I also believe that most of the presentation rooms were well prepared and organized. During my oral presentation, however, I had an issue playing a demo video, which occurred because the organizers were not fully prepared to play a video during the presentation. That felt rather odd, given this is a top-tier multimedia conference, which means it will have lots of audio and visual content. But other than that, I had a very pleasant and fruitful time at the conference. I was able to connect and socialize with eminent researchers at ACM Multimedia, and I hope to attend the next edition as well.
Estêvão Bissoli Saleme (Federal University of Espírito Santo, Brazil)
ACM Multimedia 2019 in Nice was such a unique experience. I volunteered for six sessions and attended a couple more, including the Best Paper session, which I particularly liked the most. Not only because it brought original ideas, but also because I had the opportunity to witness an innovative presentation of the paper “Multimodal Dialog System: Generating Responses via Adaptive Decoders,” in which the speakers kept up a dialog between themselves to give their talk. Besides that, I enjoyed the poster presentation hall, where we could mingle with other participants, get to know other people's work better, and interact with them. One presentation that impressed me was entitled “Editing Text in the Wild.” In this work, the researchers proposed a method to replace any text in a picture while keeping the background intact. The outcome looked like a real figure. Just impressive! Technically, I was more interested in Quality of Experience and Interaction, but I thought the subjects of the papers in that session were spread out, which hindered interaction with the other presenters; it lacked a bit of work related to QoE itself. Finally, another aspect that deserves praise was the organization. Whova helped hugely, and we could post photos and interact with other people there. Moreover, Martha, Laurent, and Benoit were omnipresent and tireless. They were on fire and worked very well to deliver such a great conference!
David Semedo (Universidade NOVA de Lisboa, Portugal)
My experience at ACM MM 2019 was very positive. I presented two full papers: one as a full oral and one as a short presentation. As such, the whole event was quite intense for me but also very personally enriching. I could do a lot of networking with both students and senior researchers (the ConfLab contributed in this regard). As I am in my last Ph.D. year, I could talk with several researchers, from whom I got valuable advice on how to take the next steps towards pursuing a career in research. At the poster sessions, I had the opportunity to discuss my work in detail with several people, from whom I received constructive feedback. While I liked the fact that posters stayed up during the whole conference, some were hard to find or a bit hidden (e.g., the ones facing the wall). The conference program covered a wide range of topics in Multimedia. This allowed me to understand which techniques are being used for different tasks, and to identify common technical aspects across them. It not only helped me stay up to date with state-of-the-art approaches, but also helped me define potential future research directions.
Junbo Wang (Institute of Automation, Chinese Academy of Sciences, China)
From 21-25 October 2019, I attended the ACM Multimedia 2019 conference in Nice, France. This conference is a premier international conference in the area of multimedia within the field of computer science, and I am very proud to have attended it thanks to the ACM student travel grant. At this conference, I met many famous researchers in the area of multimedia, such as Tao Mei, Tat-Seng Chua, and Changsheng Xu. During the Poster and Oral sessions, I discussed many academic problems with these researchers, which really gave me new vision and insight. In addition to the many academic talks, I also enjoyed a lot of French food, such as macarons and foie gras. As a session volunteer, I was also very happy to help the attendees during some session talks. The interesting and professional talks inspired me and drew my interest to many different research areas. Moreover, the conference was held at the NICE ACROPOLIS Convention Center in Nice, a beautiful and peaceful city. The fresh air and pleasant sea breeze put us in a good mood every day and gave us an unforgettable experience in this city. Overall, I think this conference was very successful in reaching its fundamental objective: free communication. However, I also noticed that there were far fewer sponsors this year than last year, which will hopefully improve next year.
Xin Wang (Donghua University, China)
In my experience, MM'19 was very impressive and easy to follow. The arrangement of the conference was very reasonable, and the Whova app in particular helped me a lot whenever I wanted to figure out what was going on during the conference. One exception: in the first two days, some workshops had room numbers in the session volunteer schedule (a Google sheet) that differed from those in the app. That confused me for a while, but luckily Martha told us to use the app as the reference. I really loved the Demo session, and I think there must be other people who felt the same. I met and talked with many researchers from all over the world, from institutions such as NUS, DCU, Nagoya University, Shandong University, National Chiao Tung University, etc. I still keep in contact with some of them and exchange research ideas. Besides, the weather in Nice was very comfortable, and the food during the conference was rich and delicious. All of these reasons make me look forward to next year's MM conference.
Yitian Yuan (Tsinghua University, China)
It was very enjoyable to attend the ACM MM 2019 conference. As a volunteer, I could meet and communicate with peers from other countries and schools, which was of great benefit to my scientific research. I think the agenda of this ACM MM conference was compact and reasonably arranged, but there are still the following problems that I think need to be improved: (1) The entrance to the main conference hall was dimly lit and the signs were not obvious, so volunteers were needed to guide people; otherwise it was difficult for participants to find the place. (2) I wish the stage at the Banquet had had a bigger screen, so that everyone could see the names of the winners and the prize information. Finally, I wish ACM MM continued improvement and ever greater international influence.
Zhengyu Zhao (Radboud University, The Netherlands)
This was my second time attending ACM Multimedia, after my first in Korea in 2018. Overall, I felt this year's conference was a very successful edition, reflected in the perfect location, delicious food, well-designed program, and especially the efforts of the volunteers. Still, I have some suggestions for further improvement. From the poster presentation of my reproducibility paper, I realized that most people actually know nothing about this new reproducibility track. As a result, most of my time was spent explaining the general background of the track, leaving less time for my own research. I was happy to explain and to get more people involved in the track, but it would be better if the organization team could give the track more exposure beforehand. From my experience serving as one of the poster session chairs, I found that many people do not use the official communication app Whova, so instructions and important announcements could not reach all the participants in a timely manner. In my opinion, more offline solutions (e.g., a big screen on the spot) would help.
In general, the student volunteers seem to have enjoyed the event to the full, and some of them have offered constructive suggestions that the organizers of, and participants in, future editions of the conference could take into account to provide even better experiences!
All in all, the submitted reports show that giving young researchers, who may one day become leaders in our community, the chance to experience top-level research and to mix with a wide range of researchers at a top-level conference will surely benefit us all in the future.