MPEG Column: 144th MPEG Meeting in Hannover, Germany

The 144th MPEG meeting was held in Hannover, Germany! For those interested, the press release is available with all the details. It’s great to see progress being made in person (cf. also the group pictures below). The main outcomes of this meeting are as follows:

  • MPEG issues Call for Learning-Based Video Codecs for Study of Quality Assessment
  • MPEG evaluates Call for Proposals on Feature Compression for Video Coding for Machines
  • MPEG progresses ISOBMFF-related Standards for the Carriage of Network Abstraction Layer Video Data
  • MPEG enhances the Support of Energy-Efficient Media Consumption
  • MPEG ratifies the Support of Temporal Scalability for Geometry-based Point Cloud Compression
  • MPEG reaches the First Milestone for the Interchange of 3D Graphics Formats
  • MPEG announces Completion of Coding of Genomic Annotations

We have modified the press release to cater to the readers of ACM SIGMM Records and highlighted research on video technologies. This edition of the MPEG column focuses on MPEG Systems-related standards and visual quality assessment. As usual, the column will end with an update on MPEG-DASH.

Attendees of the 144th MPEG meeting in Hannover, Germany.

Visual Quality Assessment

MPEG does not create standards in the visual quality assessment domain. However, it conducts visual quality assessments for its standards during various stages of the standardization process. For instance, it evaluates responses to calls for proposals, conducts verification tests of its final standards, and so on. MPEG Visual Quality Assessment (AG 5) issued an open call to study quality assessment for learning-based video codecs. AG 5 has been conducting subjective quality evaluations for coded video content and studying their correlation with objective quality metrics. Most of these studies have focused on the High Efficiency Video Coding (HEVC) and Versatile Video Coding (VVC) standards. To facilitate the study of visual quality, MPEG maintains the Compressed Video for the study of Quality Metrics (CVQM) dataset.

With the recent advancements in learning-based video compression algorithms, MPEG is now studying compression using these codecs. It is expected that reconstructed videos compressed using learning-based codecs will have different types of distortion compared to those induced by traditional block-based motion-compensated video coding designs. To gain a deeper understanding of these distortions and their impact on visual quality, MPEG has issued a public call related to learning-based video codecs. MPEG is open to inputs in response to the call and will invite responses that meet the call’s requirements to submit compressed bitstreams for further study of their subjective quality and potential inclusion into the CVQM dataset.

Considering the rapid advancements in the development of learning-based video compression algorithms, MPEG will keep this call open and anticipates future updates to the call.

Interested parties are kindly requested to contact the MPEG AG 5 Convenor Mathias Wien (wien@lfb.rwth-aachen.de) and submit responses for review at the 145th MPEG meeting in January 2024. Further details are given in the call, issued as AG 5 document N 104 and available from the mpeg.org website.

Research aspects: Learning-based data compression (e.g., for image, audio, and video content) is a hot research topic. Research on this topic relies on datasets offering a set of common test sequences, and sometimes also common test conditions, that are publicly available and allow for comparison across different schemes. MPEG’s Compressed Video for the study of Quality Metrics (CVQM) dataset is such a dataset, available here, and can also be used by researchers and scientists outside of MPEG. The call mentioned above is open to everyone inside and outside of MPEG and allows researchers to participate in international standards efforts (note: to attend meetings, one must become a delegate of a national body).
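
To make the kind of study the CVQM dataset supports concrete, the sketch below correlates objective metric scores with subjective mean opinion scores (MOS), the core analysis behind assessing how well a metric predicts perceived quality. All numbers are made up for illustration; this is not AG 5’s actual evaluation code.

```python
# Minimal sketch: correlate an objective metric with subjective MOS.
# The MOS and PSNR values below are hypothetical.
from scipy.stats import pearsonr, spearmanr

mos = [4.2, 3.8, 3.1, 2.5, 1.9]          # hypothetical mean opinion scores
psnr = [42.1, 40.3, 37.8, 35.2, 31.9]    # hypothetical objective scores (dB)

plcc, _ = pearsonr(psnr, mos)    # linear correlation (PLCC)
srcc, _ = spearmanr(psnr, mos)   # rank-order correlation (SRCC)
print(f"PLCC={plcc:.3f}, SRCC={srcc:.3f}")
```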

MPEG Systems-related Standards

At the 144th MPEG meeting, MPEG Systems (WG 3) produced three newsworthy items as follows:

  • Progression of ISOBMFF-related standards for the carriage of Network Abstraction Layer (NAL) video data.
  • Enhancement of the support of energy-efficient media consumption.
  • Support of temporal scalability for geometry-based Point Cloud Compression (G-PCC).

ISO/IEC 14496-15, a part of the family of ISOBMFF-related standards, defines the carriage of Network Abstraction Layer (NAL) unit structured video data such as Advanced Video Coding (AVC), High Efficiency Video Coding (HEVC), Versatile Video Coding (VVC), Essential Video Coding (EVC), and Low Complexity Enhancement Video Coding (LCEVC). This standard has been further improved with the approval of the Final Draft Amendment (FDAM), which adds support for enhanced features such as Picture-in-Picture (PiP) use cases enabled by VVC.
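
For readers unfamiliar with the container, the following minimal Python sketch walks the top-level boxes of an ISOBMFF file (each box starts with a 32-bit size and a 4-character type). It is an illustration of the box structure only; real ISO/IEC 14496-15 parsing (nested boxes, sample entries, NAL unit framing) is considerably more involved.

```python
import struct

def walk_top_level_boxes(path):
    """Yield (type, size) for each top-level box of an ISOBMFF file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)          # 32-bit size + 4-character type
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            consumed = 8
            if size == 1:               # 64-bit "largesize" follows the type
                size = struct.unpack(">Q", f.read(8))[0]
                consumed = 16
            yield box_type.decode("ascii", "replace"), size
            if size == 0:               # box extends to the end of the file
                break
            f.seek(size - consumed, 1)  # skip over the box payload

# Typical top-level boxes of an MP4 file: ftyp, moov, mdat, ...
# for box, size in walk_top_level_boxes("clip.mp4"):
#     print(box, size)
```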

In addition to the improvements made to ISO/IEC 14496-15, separately developed amendments have been consolidated in the 7th edition of the standard. This edition has been promoted to Final Draft International Standard (FDIS), marking the final milestone of the formal standard development.

Another important standard in development is the 2nd edition of ISO/IEC 14496-32 (file format reference software and conformance). This standard, currently at the Committee Draft (CD) stage of development, is planned to be completed and reach the status of Final Draft International Standard (FDIS) by the beginning of 2025. This standard will be essential for industry professionals who require a reliable and standardized method of verifying the conformance of their implementations.

MPEG Systems (WG 3) also promoted ISO/IEC 23001-11 (energy-efficient media consumption (green metadata)) Amendment 1 to Final Draft Amendment (FDAM). This amendment introduces energy-efficient media consumption (green metadata) for Essential Video Coding (EVC) and defines metadata that enables a reduction in decoder power consumption. At the same time, ISO/IEC 23001-11 Amendment 2 has been promoted to the Committee Draft Amendment (CDAM) stage of development. This amendment introduces a novel way to carry metadata about display power reduction encoded as a video elementary stream interleaved with the video it describes. The amendment is expected to be completed and reach the status of Final Draft Amendment (FDAM) by the beginning of 2025.

Finally, MPEG Systems (WG 3) promoted ISO/IEC 23090-18 (carriage of geometry-based point cloud compression data) Amendment 1 to Final Draft Amendment (FDAM). This amendment enables compressing a single elementary stream of point cloud data using ISO/IEC 23090-9 (geometry-based point cloud compression) and storing it in more than one track of ISO Base Media File Format (ISOBMFF)-based files. This enables support for applications that require multiple frame rates within a single file and introduces a track grouping mechanism to indicate which tracks carry a specific temporal layer of a single elementary stream.

Research aspects: MPEG Systems usually provides standards on top of existing compression standards, enabling efficient storage and delivery of media data (among others). Researchers may use these standards (including reference software and conformance bitstreams) to conduct research in the general area of multimedia systems (cf. ACM MMSys) or, specifically on green multimedia systems (cf. ACM GMSys).

MPEG-DASH Updates

The current status of MPEG-DASH is shown in the figure below with only minor updates compared to the last meeting.

MPEG-DASH Status, October 2023.

In particular, the 6th edition of MPEG-DASH is scheduled for 2024 but may not include all amendments under development. An overview of existing amendments can be found in the column from the last meeting. Current amendments have been (slightly) updated and will progress toward completion at upcoming meetings. The signaling of haptics in DASH has been discussed and accepted for inclusion in the Technologies under Consideration (TuC) document. The TuC document comprises candidate technologies for possible future amendments to the MPEG-DASH standard and is publicly available here.
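
As a rough illustration of what haptics signaling could look like to a client, the sketch below scans an MPD for adaptation sets with a hypothetical @contentType of "haptics". The MPD namespace is the standard one; the "haptics" content-type value is an assumption, since the actual signaling is still under consideration in the TuC document.

```python
# Minimal sketch: find hypothetical haptics adaptation sets in a DASH MPD.
import xml.etree.ElementTree as ET

NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}  # standard MPD namespace

def find_haptics_adaptation_sets(mpd_path):
    root = ET.parse(mpd_path).getroot()
    return [
        aset for aset in root.iterfind(".//mpd:AdaptationSet", NS)
        if aset.get("contentType") == "haptics"  # assumed signaling value
    ]
```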

Research aspects: MPEG-DASH has been heavily researched in the multimedia systems, quality, and communications research communities. Adding haptics to MPEG-DASH would provide another dimension worth considering within research, including, but not limited to, performance aspects and Quality of Experience (QoE).

The 145th MPEG meeting will be online from January 22-26, 2024. Click here for more information about MPEG meetings and their developments.

JPEG Column: 100th meeting in Covilha, Portugal

JPEG AI reaches Committee Draft stage at the 100th JPEG meeting

The 100th JPEG meeting was held in Covilhã, Portugal, from July 17th to 21st, 2023. At this meeting, in addition to its usual standardization activities, the JPEG Committee organized a celebration on the occasion of its 100th meeting. This face-to-face meeting, the second after the pandemic, had a record level of face-to-face participation, with more than 70 experts attending in person.

Several activities reached important milestones. JPEG AI became a committee draft after intensive meeting sessions with detailed analysis of the core experiment results and multiple evaluations of the considered technologies. JPEG NFT issued a call for proposals, and the first JPEG XE use cases and requirements document was also issued publicly. Furthermore, JPEG Trust has made major steps towards its standardization.

The 100th JPEG meeting had the following highlights:

  • JPEG Celebrates its 100th meeting;
  • JPEG AI reaches Committee Draft;
  • JPEG Pleno Learning-based Point Cloud coding improves its Verification Model;
  • JPEG Trust develops its first part, the “Core Foundation”;
  • JPEG NFT releases the Final Call for Proposals;
  • JPEG AIC-3 initiates the definition of a Working Draft;
  • JPEG XE releases the Use Cases and Requirements for Event-based Vision;
  • JPEG DNA defines the evaluation of the responses to the Call for Proposals;
  • JPEG XS proceeds with the development of the 3rd edition;
  • JPEG Systems releases a Reference Software.

The following sections summarize the main highlights of the 100th JPEG meeting.

JPEG Celebrates its 100th meeting

The JPEG Committee organized a celebration of its 100th meeting. A ceremony took place on July 19, 2023, to mark this important milestone. The JPEG Convenor initiated the ceremony, followed by a speech from Prof. Carlos Salema, founder and former chair of the Instituto de Telecomunicações and current vice president of the Lisbon Academy of Sciences, and a welcome note from Prof. Silvia Socorro, vice-rector for research at the University of Beira Interior. Personalities from the standardization organizations ISO, IEC, and ITU, as well as from the Portuguese government, sent welcome addresses in the form of recorded videos. Furthermore, short video addresses from past and current JPEG experts were collected and presented during the ceremony. The celebration was preceded by a workshop on “Media Authenticity in the Age of Artificial Intelligence”. Further information on the workshop and its proceedings is accessible on jpeg.org. A social event followed the celebration ceremony.

The 100th meeting celebration and cake.

100th meeting Social Event.

JPEG AI

The JPEG AI (ISO/IEC 6048) learning-based image coding system has reached the Committee Draft stage. The current JPEG AI Verification Model (VM) has two operation points, called base and high, which include several tools that can be enabled or disabled without re-training the neural network models. The base operation point is a subset of the design elements of the high operation point. The lowest configuration (base operation point without tools) provides 8% rate savings over the VVC Intra anchor, with twice as fast decoding and a 250-times faster encoder runtime on CPU. In the most powerful configuration, the current VM achieves a 29% compression gain over the VVC Intra anchor.
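
Gains like the 8% and 29% figures above are conventionally reported as Bjøntegaard delta-rate (BD-rate) values computed from matched rate/quality points of the test codec and the anchor. Below is a minimal, generic sketch of the classic BD-rate calculation (cubic fit in the log-rate domain); it illustrates the methodology and is not the exact tooling used in the JPEG AI evaluations.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average rate difference (%) of the test codec vs. the anchor;
    negative values mean rate savings. Expects >= 4 RD points each."""
    # Fit PSNR -> log(rate) with cubic polynomials.
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    # Integrate over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100

# Hypothetical RD points (kbps, dB): test codec spends 10% less rate
# at the same PSNR, so the result is about -10 (i.e., 10% rate savings).
# print(bd_rate([100, 200, 400, 800], [32, 35, 38, 41],
#               [ 90, 180, 360, 720], [32, 35, 38, 41]))
```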

The performance of JPEG AI VM 3 was presented and discussed during the 100th JPEG meeting. The findings of the 15 core experiments established at the previous (99th) JPEG meeting, as well as other input contributions, were discussed and investigated. This effort resulted in the reorganization and simplification of many syntax elements, as well as changes to several neural networks and tools, notably design simplifications and post-filtering improvements. Furthermore, coding efficiency was increased at high quality up to visually lossless, and region-of-interest quality enhancement functionality, as well as bit-exact repeatability, were added among other enhancements. The attention mechanism for the high operation point is the most significant change, as it considerably decreases decoder complexity. The entropy decoding neural network structure is now identical for the high and base operation points. The defined analysis and synthesis transforms enable efficient coding from high quality to near visually lossless, and chroma quality has been improved with the use of novel enhancement filtering technologies.

JPEG Pleno Learning-based Point Cloud coding

The JPEG Pleno Point Cloud activity progressed at the 100th meeting with a major improvement to its Verification Model (VM): the incorporation of a sparse convolutional framework that provides improved quality with a more efficient computational model. In addition, an exciting new application was demonstrated, showing the ability of the JPEG VM to support point cloud classification. The 100th JPEG meeting also saw the release of a new point cloud test set to better support this activity. Prior to the 101st JPEG meeting in October 2023, JPEG experts will investigate possible advancements to the VM in the areas of attention models, voxel pruning within sparse tensor convolution, and support for residual lossless coding. In addition, a major Exploration Study will be conducted to explore the latest point cloud quality metrics.
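
The efficiency of sparse convolutional frameworks comes from operating only on occupied voxels rather than on a dense 3D grid. The sketch below shows the voxelization step that produces the sparse integer coordinates such frameworks consume; it is a generic illustration, not code from the JPEG Pleno VM, and the voxel size is an arbitrary choice.

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Quantize an (N, 3) point cloud to unique occupied voxel coordinates."""
    coords = np.floor(points / voxel_size).astype(np.int32)
    # Only the occupied voxels are kept; a sparse convolution then runs
    # over this set instead of a dense grid, which is what keeps the
    # computational model efficient.
    return np.unique(coords, axis=0)

# Example with random points in the unit cube:
# occupied = voxelize(np.random.rand(100000, 3))
# print(occupied.shape)
```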

JPEG Trust

The JPEG Committee is expediting the development of the first part, the “Core Foundation”, of its new international standard: JPEG Trust. This standard defines a framework for establishing trust in media, and addresses aspects of authenticity and provenance through secure and reliable annotation of media assets throughout their life cycle. JPEG Trust is being built on its 2022 Call for Proposals, whose responses form the basis of the framework under development.

The new standard is expected to be published in 2024. To stay updated on JPEG Trust, please regularly check the JPEG website at jpeg.org for the latest information and reach out to the contacts listed below to subscribe to the JPEG Trust mailing list.

JPEG NFT

Non-Fungible Tokens (NFTs) are an exciting new way to create and trade media assets and have seen increasing interest from global markets. NFTs promise to impact the trading of artworks, collectible media assets, micro-licensing, gaming, ticketing, and more. At the same time, concerns about interoperability between platforms, intellectual property rights, and fair dealing must be addressed.

JPEG is pleased to announce a Final Call for Proposals on JPEG NFT to address these challenges. The Final Call for Proposals on JPEG NFT and the associated Use Cases and Requirements for JPEG NFT document can be downloaded from the jpeg.org website. JPEG invites interested parties to register their proposals by 2023-10-23. The final deadline for submission of full proposals is 2024-01-15.

JPEG AIC

During the 100th JPEG meeting, the AIC activity continued its efforts on the Core Experiments, which aim at collecting fundamental information on the performance of the contributions received in April 2023 in response to a Call for Contributions on Subjective Image Quality Assessment. These results will be considered during the design of the AIC-3 standard, which has been carried out in a collaborative way since its beginning. The activity also initiated the definition of a Working Draft for AIC-3.

Work is also planned to begin on a Draft Call for Proposals on Objective Image Quality Metrics (AIC-4) during the 101st JPEG meeting in October 2023. The JPEG Committee invites interested parties to take part in the discussions and drafting of the Call.

JPEG XE

For the Event-based Vision exploration, called JPEG XE, the JPEG Committee finalized a first version of a Use Cases and Requirements for Event-based Vision v0.5 document. Event-based Vision revolves around a new and emerging image modality created by event-based visual sensors. JPEG XE concerns the creation and development of a standard to represent events in an efficient way, allowing interoperability between sensing, storage, and processing, targeting machine vision and other relevant applications. Events in the context of this standard are defined as the messages that signal the result of an observation at a precise point in time, typically triggered by a detected change in the physical world. The new Use Cases and Requirements document is the first version to become publicly available and serves mainly to attract interest from external experts and other standardization organizations. Although the document is still preliminary, the JPEG Committee continues to invest effort into refining it so that it can serve as a solid basis for further standardization. An Ad-hoc Group has been re-established to work on this topic until the 101st JPEG meeting in October 2023. To stay informed about these activities, please join the event-based imaging Ad-hoc Group mailing list.

JPEG DNA

The JPEG Committee has been exploring the coding of images in quaternary representations, which are particularly suitable for image archival in DNA storage. The scope of JPEG DNA is to create a standard for efficient coding of images that considers biochemical constraints and offers robustness to the noise introduced by the different stages of a storage process based on synthetic DNA polymers.
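
To illustrate what coding in a quaternary representation under biochemical constraints means, here is a minimal sketch of a rotation-style transcoder in the spirit of published DNA storage codes: each base-3 digit selects one of the three nucleotides differing from the previous one, which avoids homopolymer runs (a typical biochemical constraint). This is an illustration only, not the JPEG DNA coding scheme.

```python
def bytes_to_trits(data: bytes):
    """Represent a byte string as a list of base-3 digits."""
    n = int.from_bytes(data, "big")
    trits = []
    while n:
        n, r = divmod(n, 3)
        trits.append(r)
    return trits[::-1] or [0]

def trits_to_dna(trits, start="A"):
    """Map each trit to one of the 3 bases differing from the previous
    base, so the output never contains two identical bases in a row."""
    prev, out = start, []
    for t in trits:
        prev = [b for b in "ACGT" if b != prev][t]
        out.append(prev)
    return "".join(out)

# print(trits_to_dna(bytes_to_trits(b"JPEG")))  # homopolymer-free sequence
```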

At the 100th JPEG meeting, the Committee produced “Additions to the JPEG DNA Common Test Conditions version 2.0”, which supplements the “JPEG DNA Common Test Conditions” by specifying a new constraint to be taken into account when coding images in quaternary representation. In addition, the detailed procedures for the evaluation of the pre-registered responses to the JPEG DNA Call for Proposals were defined.

Furthermore, the next steps towards a deployed high-performance standard were discussed and defined. In particular, it was decided to request approval of the new work item once the Committee Draft stage has been reached.

The JPEG-DNA AHG has been re-established to work on the preparation of assessment and crosschecking of responses to the JPEG DNA Call for Proposals until the 101st JPEG meeting in October 2023.

JPEG XS

The JPEG Committee continued its work on the JPEG XS 3rd edition. The main goal of the 3rd edition is to reduce the bitrate for on-screen content by half while maintaining the same image quality.

Part 1 of the standard – Core coding tools – is still under Draft International Standard (DIS) ballot. For Part 2 – Profiles and buffer models – and Part 3 – Transport and container formats – the Committee Draft (CD) circulation results were processed and the DIS ballot documents were created. In Part 2, three new profiles have been added to better adapt to the needs of the market. In particular, two profiles are based on the High 444.12 profile but introduce some useful constraints on the wavelet decomposition structure and disable the column modes entirely. This makes the profiles easier to implement (with lower resource usage and fewer options to support) while remaining consistent with the way JPEG XS is already being deployed in the market today. Additionally, the two new High profiles are further constrained by explicit conformance points (like the new TDC profile) to better support market interoperability. The third new profile, called TDC MLS 444.12, enables mathematically lossless quality. It is intended, for example, for medical applications, where a truly lossless reconstruction might be required.

Completion of the JPEG XS 3rd edition standard is scheduled for January 2024.

JPEG Systems

At the 100th meeting, the JPEG Committee produced the CD text of ISO/IEC 19566-10, the JPEG Systems Reference Software. In addition, a JPEG white paper was released that provides an overview of the entire JPEG Systems standard. The white paper can be downloaded from the jpeg.org website.

Final Quote

“The JPEG Committee celebrated its 100th meeting, an important milestone considering the current success of JPEG standards. This celebration was enriched with significant achievements at the meeting, notably the release of the Committee Draft of JPEG AI,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

Overview of Benchmarking Platforms and Software for Multimedia Applications

In a time where Artificial Intelligence (AI) continues to push the boundaries of what was previously thought possible, the demand for benchmarking platforms that allow AI models to be fairly assessed and evaluated has become paramount. These platforms serve as connecting hubs between data scientists, machine learning specialists, industry partners, and other interested parties. They mostly function under the Evaluation-as-a-Service (EaaS) paradigm [1]: the idea that participants in a benchmarking task should be able to test the output of their systems under similar conditions, by being provided with a common definition of the targeted concepts, datasets and data splits, metrics, and evaluation tools. These common elements are provided through online platforms that can even offer Application Programming Interfaces (APIs) or container-level integration of the participants’ AI models. This column provides insight into these platforms, looking at their main characteristics, use cases, and particularities. In the second part of the column, we will also look into some of the main benchmarking platforms that are geared towards handling multimedia-centric benchmarks and datasets relevant to SIGMM.

Defining Characteristics of EaaS platforms

Benchmarking competitions and initiatives, and EaaS platforms, attempt to tackle a number of key points in the development of AI algorithms and models, namely:

  • Creating a fair and impartial evaluation environment, by standardizing the datasets and evaluation metrics used by all participants in an evaluation competition. In doing so, EaaS platforms play a pivotal role in promoting transparency and comparability in AI models and approaches.
  • Enhancing reproducibility by giving the option to run the AI models on dedicated servers provided and managed by competition organizers. This increases trust in, and bolsters the integrity of, the results produced by competition participants, as the organizers are able to closely monitor the testing process for each individual AI model.
  • Fostering, as a natural consequence, a higher degree of data privacy, as participants could be given access only to training data, while testing data is kept private and is only accessed via APIs on the dedicated servers, reducing the risk of data exposure.
  • Creating a common repository for sharing the data and details of a benchmarking task, building a history not only of the results of the benchmarking tasks throughout the years, but also of the evolution of the types of approaches and models used by participants. Other useful features, like forums and discussion threads on competitions, allow new participants to quickly search for problems they encounter and resolve their issues faster.

Given these common goals, benchmarking platforms usually integrate a set of common features and user-level functionalities that are summed up in this section and grouped into three categories: task organization and scheduling, scoring and reproducibility, and communication and dissemination.

Task organization and scheduling. The platforms allow the creation, modification, and maintenance of benchmarking tasks, either through a graphical user interface (GUI) or by using task bundles (most commonly using JSON, XML, Python, or custom scripting languages). Competition organizers can define their task, along with sub-tasks that explore different facets of the targeted data. Scheduling is another important feature in benchmarking competition creation: some parts of the data may be kept private until a given date, and organizers may hide the results of other teams until a certain point in the competition. We consider the last point an important one, as participants may feel discouraged from continuing their participation if their initial results are not high enough compared with those of other participants. Another noteworthy feature is run quantity management, which allows organizers to specify a maximum number of allowed runs per participant during the benchmarking task. This limitation discourages participants from attempting to solve the given tasks with brute-force approaches, where they implement a large number of models and model variations. As a result, participants are incentivized to delve deeper into the data, critically analyzing why certain methods succeed and others fall short.
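
As a concrete (and entirely hypothetical) example, a task bundle covering the scheduling and run-limit features above might look like the following Python dict; real platforms define their own JSON/XML/YAML schemas, so every field name here is illustrative.

```python
# Hypothetical competition bundle; field names are illustrative only.
competition = {
    "title": "Example Multimedia Benchmark 2024",
    "max_runs_per_participant": 5,       # run quantity management
    "phases": [
        {   # development phase: training data public, leaderboard hidden
            "name": "development",
            "start": "2024-01-15",
            "end": "2024-03-01",
            "leaderboard_visible": False,
        },
        {   # test phase: private test data, results revealed afterwards
            "name": "test",
            "start": "2024-03-01",
            "end": "2024-04-01",
            "leaderboard_visible": True,
        },
    ],
}
```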

Scoring and reproducibility. EaaS platforms generally deploy two paradigms, sometimes side by side, with regard to AI model testing and results generation [1, 2]: the Data-to-Algorithm (D2A) approach and the Algorithm-to-Data (A2D) approach. The former refers to competitions where participants must download the testing set, run their prediction systems on their own machines, and provide the predictions to the organizers, usually in CSV format in the multimedia domain. In this setup, the ground truth data for the testing set is kept private; after the organizers receive the prediction result files, they communicate the performance to the participants, or the results are automatically computed by the platform with organizer-provided scripts once the files are uploaded. The A2D approach, on the other hand, is more complex, may incur additional financial costs, and may be more time-consuming for both organizers and task participants, but it increases the trustworthiness and reproducibility of the task and of the AI models themselves. In this setup, organizers provide cloud-based computing resources via Virtual Machines (VMs) and containers, and a common processing pipeline or API that competitors must integrate into their source code. The participants develop the wrappers that integrate their AI models accordingly and upload the models to the EaaS platform directly. The AI models are then executed according to the common pipeline, and results are automatically provided to the participants, while the testing data is kept completely private. Traditionally, in order to achieve this, EaaS platforms offer the possibility of integration with cloud computing platforms like Amazon AWS, Microsoft Azure, or Google Cloud, and offer Docker integration for the creation of containers where the code can be hosted.
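
In the D2A setup, the organizer-side scoring step often reduces to comparing an uploaded prediction CSV against a privately held ground truth file. The sketch below shows such a script with simple accuracy as the task metric; file names and column headers are hypothetical.

```python
# Minimal sketch of an organizer-side D2A scoring script.
import csv

def load_labels(path):
    """Read an id -> label mapping from a CSV with 'id' and 'label' columns."""
    with open(path, newline="") as f:
        return {row["id"]: row["label"] for row in csv.DictReader(f)}

def score(submission_csv, ground_truth_csv):
    preds = load_labels(submission_csv)    # participant upload
    truth = load_labels(ground_truth_csv)  # stays private on the server
    correct = sum(preds.get(k) == v for k, v in truth.items())
    return correct / len(truth)            # simple accuracy

# print(score("submission.csv", "private_ground_truth.csv"))
```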

Communication and dissemination. EaaS platforms allow interaction between competition organizers and participants, either through emails, automatic notifications, or forums where interested parties can exchange ideas, ask questions, offer help, or signal potential problems in the data or scripts associated with the tasks.

Popular multimedia EaaS platforms

This section presents some of the most popular benchmarking platforms aimed at the multimedia domain. We will present some key features and associated popular multimedia datasets for the following platforms: Kaggle, AIcrowd, Codabench, Drivendata, and EvalAI.

Kaggle is perhaps the most popular benchmarking platform at this moment, and it goes beyond providing datasets and benchmarking competitions, also hosting AI models, courses, and source code repositories. Competition organizers can design tasks under either the D2A or the A2D paradigm, giving participants the possibility of integrating their AI models into Jupyter Notebooks for reproducibility. The platform also offers the option of allotting CPU and GPU cloud-based resources for A2D competitions. The Kaggle repository offers code for a large number of additional competition management tools and communication APIs. Among an impressive number of datasets and competitions, Kaggle currently hosts competitions that use the original MNIST data [3] and other MNIST-like datasets such as Fashion-MNIST [4], as well as datasets on varied subjects ranging from sentiment analysis in social media [5] to medical image processing [6].

AIcrowd is an open source EaaS platform for open benchmarking challenges that emphasizes connections and collaborative work between data science and machine learning experts. The platform offers the source code for command line interface (CLI) and API clients that can interact with AIcrowd servers. ImageCLEF, between 2018 and 2022 [7-11], is one of the most popular multimedia benchmarking initiatives hosted on AIcrowd, featuring diverse multimedia topics such as lifelogging, medical image processing, image processing for environment health prediction, the analysis of social media dangers with regard to image sharing, and ensemble learning for multimedia data.

Codabench, launched in August 2023, and its precursor CodaLab are two open source benchmarking platforms that provide a large number of options, including A2D and D2A approaches, as well as “inverted benchmarks”, where organizers provide the reference algorithms and participants contribute the datasets. Among the challenges currently running on this platform, two Quality-of-Service-oriented challenges stand out, on audio-video synchronization error detection and error measurement; they are part of the 3rd Workshop on Image/Video/Audio Quality in Computer Vision and Generative AI at the Winter Conference on Applications of Computer Vision (WACV 2024).

Drivendata targets the intersection of data science and social impact. The platform hosts competitions that integrate the social aspect of their domain of interest directly into their mission and definition, while also hosting a number of open source projects and competition-winning AI models. Given its focus on social impact, the platform hosts a number of benchmarking challenges that target social issues, such as the detection of hateful memes [12] and image-based nature conservation efforts.

EvalAI is another open source platform able to create A2D and D2A competition environments, while also integrating optimization steps that allow evaluation code to run faster on multi-core cloud infrastructure. The EvalAI platform hosts many diverse multimedia-centric competitions, including image segmentation tasks based on LVIS [13] and a wide range of sports tasks [14].

Future directions, developments and other tools

While the tools and platforms described in the previous section represent just a portion of the EaaS platforms currently online in the research community, we would also like to mention some projects that are currently in the development stage or that can be considered additional tools for benchmarking initiatives:

  • The AI4Media benchmarking platform is currently in the prototype and development stage. Among the most interesting features and ideas promoted by the platform developers is the creation of complexity metrics that would help competition organizers understand the computational efficiency and resource requirements of the submitted systems.
  • BenchmarkSTT started as a specialized benchmarking platform for speech-to-text, but is now evolving in different directions, including facial recognition in videos.
  • The PapersWithCode platform, while not a benchmarking platform per se, is useful as a repository that collects the results of AI models on datasets throughout the years and groups different datasets studying the same concepts under the same umbrella (e.g., Image Classification, Object Detection, Medical Image Segmentation), while also providing links to scientific papers, GitHub implementations of the models, and the datasets themselves. This may represent a good starting point for young researchers who are trying to understand the history and state of the art of certain domains and applications.

Conclusions

Benchmarking platforms represent a key component of benchmarking, pushing for fairness and trustworthiness in AI model comparison, while also providing tools that may foster reproducibility in AI. We are happy to see that many of the platforms discussed in this article are open source, or have open source components, thus allowing interested scientists to create their own custom implementations of these platforms, and to adapt them when necessary to their particular fields.

Acknowledgements

The work presented in this column is supported under the H2020 AI4Media “A European Excellence Centre for Media, Society and Democracy” project, contract #951911.

References

[1] Hanbury, A., Müller, H., Balog, K., Brodt, T., Cormack, G. V., Eggel, I., Gollub, T., Hopfgartner, F., Kalpathy-Cramer, J., Kando, N., Krithara, A., Lin, J., Mercer, S. & Potthast, M. (2015). Evaluation-as-a-service: Overview and outlook. arXiv preprint arXiv:1512.07454.
[2] Hanbury, A., Müller, H., Langs, G., Weber, M. A., Menze, B. H., & Fernandez, T. S. (2012). Bringing the algorithms to the data: cloud–based benchmarking for medical image analysis. In Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics: Third International Conference of the CLEF Initiative, CLEF 2012, Rome, Italy, September 17-20, 2012. Proceedings 3 (pp. 24-29). Springer Berlin Heidelberg.
[3] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
[4] Xiao, H., Rasul, K., & Vollgraf, R. (2017). Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
[5] Niu, T., Zhu, S., Pang, L., & El Saddik, A. (2016). Sentiment analysis on multi-view social data. In MultiMedia Modeling: 22nd International Conference, MMM 2016, Miami, FL, USA, January 4-6, 2016, Proceedings, Part II 22 (pp. 15-27). Springer International Publishing.
[6] Thambawita, V., Hicks, S. A., Storås, A. M., Nguyen, T., Andersen, J. M., Witczak, O., … & Riegler, M. A. (2023). VISEM-Tracking, a human spermatozoa tracking dataset. Scientific Data, 10(1), 1-8.
[7] Ionescu, B., Müller, H., Villegas, M., García Seco de Herrera, A., Eickhoff, C., Andrearczyk, V., … & Gurrin, C. (2018). Overview of ImageCLEF 2018: Challenges, datasets and evaluation. In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 9th International Conference of the CLEF Association, CLEF 2018, Avignon, France, September 10-14, 2018, Proceedings 9 (pp. 309-334). Springer International Publishing.
[8] Ionescu, B., Müller, H., Péteri, R., Dang-Nguyen, D. T., Piras, L., Riegler, M., … & Karampidis, K. (2019). ImageCLEF 2019: Multimedia retrieval in lifelogging, medical, nature, and security applications. In Advances in Information Retrieval: 41st European Conference on IR Research, ECIR 2019, Cologne, Germany, April 14–18, 2019, Proceedings, Part II 41 (pp. 301-308). Springer International Publishing.
[9] Ionescu, B., Müller, H., Péteri, R., Dang-Nguyen, D. T., Zhou, L., Piras, L., … & Constantin, M. G. (2020). ImageCLEF 2020: Multimedia retrieval in lifelogging, medical, nature, and internet applications. In Advances in Information Retrieval: 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14–17, 2020, Proceedings, Part II 42 (pp. 533-541). Springer International Publishing.
[10] Ionescu, B., Müller, H., Péteri, R., Abacha, A. B., Demner-Fushman, D., Hasan, S. A., … & Popescu, A. (2021). The 2021 ImageCLEF Benchmark: Multimedia retrieval in medical, nature, internet and social media applications. In Advances in Information Retrieval: 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28–April 1, 2021, Proceedings, Part II 43 (pp. 616-623). Springer International Publishing.
[11] de Herrera, A. G. S., Ionescu, B., Müller, H., Péteri, R., Abacha, A. B., Friedrich, C. M., … & Dogariu, M. (2022, April). ImageCLEF 2022: Multimedia retrieval in medical, nature, fusion, and internet applications. In European Conference on Information Retrieval (pp. 382-389). Cham: Springer International Publishing.
[12] Kiela, D., Firooz, H., Mohan, A., Goswami, V., Singh, A., Fitzpatrick, C. A., … & Parikh, D. (2021, August). The hateful memes challenge: Competition report. In NeurIPS 2020 Competition and Demonstration Track (pp. 344-360). PMLR.
[13] Gupta, A., Dollar, P., & Girshick, R. (2019). LVIS: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5356-5364).
[14] Giancola, S., Cioppa, A., Deliège, A., Magera, F., Somers, V., Kang, L., … & Li, Z. (2022, October). SoccerNet 2022 challenges results. In Proceedings of the 5th International ACM Workshop on Multimedia Content Analysis in Sports (pp. 75-86).

Report from CBMI 2023


The 20th International Conference on Content-based Multimedia Indexing (CBMI) was held exclusively as an in-person event in Orleans, France, on September 20-22, 2023. The conference was organized by the University of Orleans and received support from SIGMM. This edition marked a significant milestone as it was the first fully physical conference following the pandemic, providing a welcome opportunity for face-to-face interactions. The event drew a diverse and international audience, with between 70 and 80 attendees representing 18 countries (12 European, 4 Asian, 1 American, and 1 African). Additionally, the conference included a European meeting (the CHIST-ERA XAIface project) associated with the main event, which brought together approximately 15 individuals. Furthermore, several engineering students from the University of Orleans were invited to participate, allowing them to gain insights into cutting-edge multimedia research and exchange knowledge and ideas.

Program highlights

The conference was structured around two keynote presentations. The first keynote was presented by Prof. Alberto del Bimbo from the University of Florence, who spoke on the topic of “AI-Powered Personal Fashion Advising.” During his talk, Prof. del Bimbo discussed the key tasks and challenges related to using artificial intelligence in the fashion advisory field.

The closing keynote was delivered by Prof. Nicolas Hervé from the Institut National de l’Audiovisuel (INA, the French National Audiovisual Archive). Prof. Hervé highlighted the research activities conducted at INA and how they can be integrated into information systems to enhance the value of the archive’s collections. His presentation provided insights into the practical applications of this work.

Presentation of our keynote speakers.

In conjunction with the presentation of 18 papers across four regular paper sessions, the 2023 conference adhered to the established tradition of previous editions by incorporating special sessions. These special sessions were designed to delve into the practical applications of multimedia indexing within specific domains or distinctive settings. This approach allowed for a more focused and in-depth exploration of several topics, offering valuable insights and discussions beyond the regular paper sessions.

This year, we received a substantial volume of submissions, culminating in the approval of six special sessions, which collectively accepted a total of 25 papers:

  • Cultural Heritage and Multimedia Content
  • Interactive Video Retrieval for Beginners (IVR4B)
  • Physical Models and AI in Image and in Multi-modality 
  • Computational Memorability of Imagery
  • Cross-modal multimedia analysis and retrieval for well-being insights
  • Explainability in Multimedia Analysis (ExMA)

The coordination of these special sessions involved the collaborative efforts of multiple countries, including France, Austria, Ireland, Iceland, the UK, Romania, Japan, Norway, and Vietnam.

The special sessions encompassed a diverse range of multimedia topics, spanning from applications such as cultural heritage preservation and retrieval to machine learning, with a particular focus on facets like explainability and the utilization of physical models.

The conference program was complemented by a poster session composed of fourteen posters, followed by a demo session which comprised the IVR4B video retrieval competition.

Participants at the poster session.
Participants at the demo session.

The best paper of the conference was awarded EUR 500, generously sponsored by ACM SIGMM. The selection committee quickly reached consensus and presented the best paper award to Romain Xu-Darme, Jenny Benois-Pineau, Romain Giot, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset, and Alexey Zhukov for their paper “On the stability, correctness, and plausibility of visual explanation methods based on feature importance”.

Social events

In addition to the two conference dinners organized by the conference committee, the participants had the opportunity to enjoy a guided tour through Orleans on their way to the first restaurant.

Participants enjoyed the first dinner after the guided tour.

Among the social events organized during CBMI 2023 was the Music meets Science concert, held with the support of ACM SIGMM. After a series of scientific presentations, participants were able to appreciate works by Beethoven, Murphy, and Lizee. We thank ACM SIGMM for the support that made this cultural event possible.

The Odyssée Quartet, composed of François Pineau-Benois (violinist), Raphael Moraly (cellist), Olivier Marin (violist), and Audrey Sproule (violinist).

Outlook

The next edition of CBMI will be organized in Iceland. After several hybrid editions, the conference has moved back on site, with attendance approaching pre-pandemic levels.

Equity, Diversity and Inclusion at ACM MMSys 2023


The 14th ACM Multimedia Systems Conference (MMSys 2023) took place from June 7-10, 2023, in Vancouver, Canada. Continuing the significant efforts of recent years, and building on the strong commitment of the MMSys community to create a diverse, inclusive, and accessible forum to discuss advancements in the area of multimedia systems and the technology experiences they enable, several EDI measures were adopted. The main goals were (1) to raise awareness of the importance of diversity and inclusion for both the MMSys community and the research fields represented at MMSys, and (2) to enable diverse participation and inclusion of underrepresented groups. In this column, we provide a brief overview of the main EDI activities and key figures, as well as short testimonials from two participants.

Support and activities

Associate Professor Yvette Wohn giving her EDI keynote on “Moderating the Metaverse”

Supported by the ACM Special Interest Group on Multimedia (SIGMM) and by ACM funding for special initiatives, the support provided at MMSys 2023 included the following:

1. EDI Keynote Speech
We invited Dr. Yvette Wohn for a keynote speech on Moderating the Metaverse. Dr. Wohn (she/her) is an associate professor of Informatics at the New Jersey Institute of Technology and director of the Social Interaction Lab. Her research is in the area of Human-Computer Interaction (HCI), where she studies the characteristics and consequences of social interactions in online environments such as virtual worlds and social media. Yvette’s keynote speech was very well received and ignited conversations during the conference.
Abstract of the talk: Online harassment is a problem that we still have been unable to solve in the social media age of Web 2.0. As we move deeper into Web 3.0, which includes 3D virtual worlds, moderation moves beyond content to include behavioral components such as embodied interactions. How do we design these systems to be creative and generative while maintaining safety and equity? This talk will discuss the challenges and opportunities, both social and technical, in creating the next wave of networked multimedia systems.

2. EDI Luncheon & Challenge
Our goal for the luncheon and challenge was to pick a topic that would spark conversations during lunch: one engaging enough for the whole audience, something everyone can have an opinion on (opinions that can be challenged during conversations), and whose answers can provide some insight into our audience and their take on EDIJ issues.
The questions were: 

  • What is the biggest diversity issue that you think can affect YOU in the metaverse?
  • What is the simplest, yet most practical solution you can think of for this problem?

After the initial announcement and presentation, example scenarios and conversation icebreakers were printed and placed on the break and lunch tables, and conversations were encouraged by volunteers, so that attendees would discuss over lunch and submit their solutions. The rubric used for selecting the winner of this challenge was:

  • Problem (15 pts): Explorative Value, Importance, Scale of effect
  • Solution Quality (15 pts): Feasibility, Simplicity, Effectiveness
  • Each item was rated on a scale of 0-5: not meeting requirements: 0, minimal: 1, acceptable: 2, good: 3, very good: 4, excellent: 5.

We received 14 entries by the given deadline, and from the two entries scoring 28 points, Dr. Sylvie Dijkstra-Soudarissanane was selected as the winner of the EDI Challenge for discussing in her response the inaccurate representation of dark skin tones due to the inherent design of 3D capture devices such as LiDARs. Sylvie wrote a short testimonial (see below).

3. Additional EDI Activities
EDI Considerations in Conference Name Tags
Preferred pronouns were used to foster a healthier and more inclusive space, safe and respectful for all attendees. In addition, the following was explicitly mentioned on the name tags:

  1. Diversity Advocate: To show we are proud of diversity and inclusion efforts, and we acknowledge and foster the enthusiasm for this important work.
  2. First-Timer: To easily find people who might not be familiar with the community to provide them further help and support, if needed.

Childcare Support
Due to financial uncertainty, we were not able to announce the availability of childcare support funds before the conference, which would have allowed people with children to plan better and ensured that we support all people with such needs equally. However, we were nevertheless able to support a presenter who had arranged childcare during the conference. Towards next year’s edition of MMSys, we strongly encourage that dedicated funds be made available well ahead of the conference, so that equal opportunities to attend can be offered to caregivers.

EDI volunteer support 
While most of our student volunteers were Vancouver-based and were supported with free registration to the conference, one additional student volunteer, who travelled to Vancouver and would otherwise not have been able to attend, was supported by the EDI chairs. His testimonial can be read below.

Key numbers

  • Two of the four keynote speakers at MMSys 2023 were women (50%).
  • One of the three Technical Program Chairs was a woman (33%).
  • Nine of the 25 organizing committee members were women (36%).
  • Four of the seventeen sessions of the main conference (fourteen excluding parallel sessions) were chaired by women (24%), and three of the four workshop chairs were women (75%).

Jinwei Zhao’s badge, illustrating several measures to make attendees feel welcome and included (e.g., showing self-selected preferred pronouns, a diversity advocate label, and a first-time-at-MMSys indication, so that other attendees can make sure that newcomers to the conference are warmly welcomed and included).

Testimonials

Testimonial by Jinwei Zhao, Student Volunteer supported by the MMSys 2023 EDI  

“I was honored to be able to attend ACM MMSys 2023 in Vancouver as a student volunteer, an experience that afforded me a breadth of professional engagements. My responsibilities as a student volunteer encompassed assisting with the registration process and with the technical sessions and workshops, thereby ensuring a seamless execution of the conference. It also gave me the invaluable opportunity to engage with distinguished researchers and talented PhD students in the multimedia community, facilitating a rich exchange of brilliant and novel ideas. The keynotes and technical sessions at the conference shed light on cutting-edge developments and emerging trends in the field of multimedia systems. These included advanced adaptive video bitrate algorithms, the integration of multimedia systems with next-generation networks like Starlink, the development of new protocols such as multipath QUIC and Media-over-QUIC, and the future of immersive technologies in the AR, VR, and XR domains. Additionally, I was deeply appreciative of receiving the ACM SIGMM MMSys Volunteer Honorarium after the conference. Although I did not have the occasion to present my research at MMSys 2023, the passion and dedication of my peers served as a catalyst for my further contributions to the field. This engagement was evidently fruitful and advantageous, as it led to the acceptance of my paper for presentation at MMSys 2024 next year. This experience also encouraged me to contribute more actively to the multimedia community, aligning with my decision to embark on a PhD program starting in 2024.”

Testimonial by Sylvie Dijkstra-Soudarissanane, MMSys 2023 attendee and winner of the MMSys 2023 EDI Challenge.

EDI Co-chair Dr. Ouldooz Baghban Karimi hands over the EDI Challenge Award to the EDI Challenge winner Sylvie Dijkstra-Soudarissanane.

“I had the privilege of attending the ACM Multimedia Systems Conference (MMSys) in June 2023, an experience that left a lasting mark on my perspective as a scientist in the field of Social XR. The conference, held in the city of Vancouver, Canada, provided a unique platform for professionals from diverse backgrounds to converge and share cutting-edge insights in multimedia systems research and development.
The MMSys conference proved to be an invaluable forum for hosting discussions on the latest advancements in multimedia technology. Keynotes and regular sessions covered a myriad of topics, ranging from advanced video with 3D point cloud rendering to multi-modal experiences and open software. This year, the rich program also included technical demo sessions, allowing participants to witness real-time systems in action, presented by leaders from organizations such as Xiaomi, Fraunhofer FOKUS, and my company, TNO. Beyond the academic world, the conference facilitated networking and social interactions, providing a platform to connect with like-minded researchers. Engaging in discussions about user-interactive VR experiences, real-time holographic representations, and mobile-based deep learning video codecs … all happening in a breathtaking skyride above Grouse Mountain added an extra layer of depth to the overall experience.
One of the highlights of my participation was the opportunity to pitch my idea on building socially responsible systems that prioritize inclusivity. The focus of my proposal revolved around designing systems that are inherently inclusive, considering factors such as skin tones, hair types, and ethnicities. The aim was to bridge the accessibility gap and ensure that these systems reach and cater to minority populations. It is a very personal endeavor, as a person of color. To my delight, this endeavor earned me recognition with a prestigious award in Diversity, Equity, and Inclusion. I am immensely proud to have received the DEI award presented by Dr. Ouldooz Baghban Karimi for my commitment to inclusive research and innovation. This recognition reinforces the importance of pushing boundaries in technology to create solutions that resonate with diverse communities. The conference not only expanded my knowledge but also allowed me to forge meaningful connections with fellow researchers who share a passion for advancing the frontiers of multimedia systems.”