Authors: Mihai Gabriel Constantin, University Politehnica of Bucharest, Romania; Steven Hicks, SimulaMet, Norway; Martha Larson, Radboud University, Netherlands
Introduction
The Benchmarking Initiative for Multimedia Evaluation (MediaEval) organizes interesting and engaging tasks related to multimedia data. MediaEval is proud to be supported by SIGMM. Tasks involve analyzing and exploring multimedia collections, as well as accessing the information that they contain. MediaEval emphasizes challenges that have a human or social aspect in order to support our goal of making multimedia a positive force in society. Participants in MediaEval are encouraged to submit solutions that are not only effective but also creative: we carry out quantitative evaluation of the submissions, but we also go beyond the scores to obtain insight into the tasks, data, and metrics.
Participation in MediaEval is open to any team that wishes to sign up. Registration has just opened, and information is available on the MediaEval 2026 website: https://multimediaeval.github.io/editions/2026. The workshop will take place in Amsterdam, Netherlands, and online, coordinated with ACM ICMR (https://icmr2026.org).
In this column, we present a short report on MediaEval 2025, which culminated in the annual workshop held in Dublin, Ireland, between CBMI (https://www.cbmi2025.org) and ACM Multimedia (https://acmmm2025.org). Then, we provide an outlook on MediaEval 2026, which will be the sixteenth edition of MediaEval.

A Keynote on Metascience
The workshop kicked off with a keynote on metascience for machine learning. The metascience initiative (https://metascienceforml.github.io) strives to promote discussion and development of the scientific underpinnings of machine learning. It looks at the way in which machine learning is done and examines the full range of relevant aspects, from methodologies to mindsets. The keynote was delivered by Jan van Gemert, head of the Computer Vision Lab (https://www.tudelft.nl/ewi/over-de-faculteit/afdelingen/intelligent-systems/pattern-recognition-bioinformatics/computer-vision-lab) at Delft University of Technology. He discussed the “growing pains” of the field of deep learning and the importance of the scientific method for keeping the field on course. He invited the audience to consider the power of benchmarks for hypothesis-driven science in machine learning and deep learning.
Tasks at MediaEval 2025
The MediaEval 2025 tasks reflect the benchmark’s continued emphasis on human-centered and socially relevant multimedia challenges, spanning healthcare, media, memory, and responsible use of generative AI.
Several tasks this year focused on the human aspects of multimodal analysis, combining visual, textual, and physiological signals. The Medico Task challenges participants to build visual question answering models for the interpretation of gastrointestinal images, aiming to support clinical decision-making through interpretable multimodal explanations. The Memorability Task focuses on modeling long-term memory for short movie excerpts and commercial videos, requiring participants to predict how memorable a video is and whether viewers are familiar with it, and, in some cases, to leverage EEG signals alongside visual features. Multimodal understanding is further explored in the MultiSumm Task, where participants are provided with collections of multimodal web content describing food-sharing initiatives in different cities and are asked to generate summaries that satisfy specific informational criteria. Evaluation for this task explores both traditional and emerging LLM-based assessment approaches.

The remaining two tasks emphasize the societal impact of multimedia technology in real-world settings. In the NewsImages Task, participants worked with large collections of international news articles and images, either retrieving suitable thumbnail images or generating thumbnails for articles. The Synthetic Images Task addressed the growing prevalence of AI-generated content online, asking participants to detect synthetic or manipulated images and to localize manipulated areas. The task used data created by state-of-the-art generative models as well as images collected from real-world online settings. We gratefully acknowledge the support of AI-CODE (https://aicode-project.eu), a European project focused on topics related to these two tasks.
MediaEval in Motion
MediaEval is especially proud of participants who return over the years, improving their approaches and contributing insights. We would like to highlight two previous participants who became so interested and involved in MediaEval tasks that they decided to join the task organization teams. Iván Martín-Fernández, PhD student at Universidad Politécnica de Madrid, became a task organizer for the Memorability Task, and Lucien Heitz, PhD student at the University of Zurich, became a task organizer for NewsImages.
One aspect of the MediaEval Benchmark I value most is its effort to go beyond metric-chasing and embark on a “quest for insights,” as the organizers put it, to help us better understand the tasks and encourage creative, innovative solutions. This spirit motivated me to participate in the 2023 Memorability Task in Amsterdam. The experience was so enriching that I wanted to become more involved in the community. In 2025, I was invited to join the Memorability Task organizing team, which gave me the chance to contribute to and help foster this innovative research effort. Thanks to SIGMM’s sponsorship, I was able to attend the event in Dublin, which further enhanced the experience. Working alongside Martha and Gabi as a student volunteer is always a pleasure. As my PhD studies come to an end, I’m proud to say that MediaEval has been a core part of my research, and I’m sure it will remain so in the immediate future. See you in Amsterdam in June!
Iván Martín-Fernández, PhD Student, GTHAU – Universidad Politécnica de Madrid
I ‘graduated’ from being a participant in the previous NewsImages challenge to now taking over the organization duties of the 2025 iteration of the task. It was an incredible journey and learning experience. Big thank you to the main MediaEval organizers for their tireless support and input for shaping this new task that combines image retrieval and generation. The recent benchmark event presented an amazing platform to share and discuss our research. We got so many great submissions from teams around the globe. I was truly overwhelmed by the feedback. Getting involved with the organization of a challenge task is something I can highly recommend to all participants. It allows you to take on an active role and bring new ideas to the table on what problems to tackle next.
Lucien Heitz, PhD Student, University of Zurich
MediaEval continues its tradition of awarding a “MediaEval Distinctive Mention” to teams that dive deeply into the data, the algorithms, and the evaluation procedure. Going above and beyond in this way makes important contributions to our understanding of the task and how to make meaningful progress. Moving the state of the art forward requires improving the scores on a pre-defined benchmark task such as the tasks offered by MediaEval. However, MediaEval Distinctive Mentions underline the importance of research that does not necessarily improve scores on a given task, but rather makes an overall contribution to knowledge.
We were happy to serve as student volunteers at MediaEval 2025. In addition, we participated as a team in the NewsImages Task, contributing to two subtasks, and were honored to receive a Distinctive Mention.
Xiaomeng Wang and Bram Bakker, PhD Students, Data Science – Radboud University
Xiaomeng had previously participated in the same task at MediaEval 2023. Compared to the 2023 edition, she observed notable evolution in both the data and task design. These changes reflect the organizers’ careful consideration of recent advances in modeling techniques as well as the practical applicability of the datasets, which proved to be highly inspiring.
Bram participated in MediaEval for the first time and found the discussions with colleagues about the challenges particularly rewarding. The NewsImages retrieval subtask also gave him the opportunity to learn how to work with larger datasets.
We tried to incorporate deeper reflections on our results into our presentation. Specifically, we showed how certain types of articles are particularly suited for image generation and identified the news categories where retrieval was most effective.
The people whose work is highlighted in this section are grateful for the support from SIGMM that enabled them to attend the MediaEval workshop in person.
Outlook to MediaEval 2026
The 2025 workshop concluded with participants collaborating with the task organizers to begin developing “benchmark biographies”, which are living documents that describe benchmarking tasks. Combining elements from data sheets and model cards, benchmark biographies document motivation, history, datasets, evaluation protocols, and baseline results to support transparency, reproducibility, and reuse by the broader research community. We plan to continue work on these benchmark biographies as we move toward MediaEval 2026.
Further, in the 2026 edition, we will again offer the tasks that were held in 2025, providing an opportunity for teams that were not able to participate in 2025. We especially encourage “Quest for Insight” papers that examine the characteristics of the data and the task definitions, the strengths and weaknesses of particular types of approaches, observations about the evaluation procedure, and the implications of the task.
We look forward to seeing you in Amsterdam for MediaEval as well as ACM ICMR. Don’t forget to check out the MediaEval website (https://multimediaeval.github.io) and register your team if you are interested in participating in 2026.

