In a time when Artificial Intelligence (AI) continues to push the boundaries of what was previously thought possible, the demand for benchmarking platforms that allow researchers to fairly assess and evaluate AI models has become paramount. These platforms serve as connecting hubs between data scientists, machine learning specialists, industry partners, and other interested parties. They mostly function under the Evaluation-as-a-Service (EaaS) paradigm [1]: the idea that participants in a benchmarking task should be able to test the output of their systems under similar conditions, by being provided with a common definition of the targeted concepts, datasets and data splits, metrics, and evaluation tools. These common elements are provided through online platforms that can even offer Application Programming Interfaces (APIs) or container-level integration of the participants’ AI models. This column provides insight into these platforms, looking at their main characteristics, use cases, and particularities. In the second part of the column we also look at some of the main benchmarking platforms geared towards multimedia-centric benchmarks and datasets relevant to SIGMM.
Defining Characteristics of EaaS platforms
Benchmarking competitions and initiatives, together with EaaS platforms, attempt to tackle a number of key points in the development of AI algorithms and models, namely:
- Creating a fair and impartial evaluation environment, by standardizing the datasets and evaluation metrics used by all participants in an evaluation competition. In doing so, EaaS platforms play a pivotal role in promoting transparency and comparability of AI models and approaches.
- Enhancing reproducibility by giving participants the option to run their AI models on dedicated servers provided and managed by the competition organizers. This increases trust in, and bolsters the integrity of, the results produced by competition participants, as the organizers are able to closely monitor the testing process for each individual AI model.
- Fostering, as a natural consequence, a higher degree of data privacy: participants can be given access only to the training data, while the testing data is kept private and accessed only via APIs on the dedicated servers, reducing the risk of data exposure.
- Creating a common repository for sharing the data and details of a benchmarking task, building a history not only of the results of the task throughout the years, but also of the evolution of the types of approaches and models used by participants. Other useful features, such as forums and discussion threads attached to competitions, allow new participants to quickly search for problems they encounter and resolve their issues faster.
Given these common goals, benchmarking platforms usually integrate a set of common features and user-level functionalities, summarized in this section and grouped into three categories: task organization and scheduling, scoring and reproducibility, and communication and dissemination.
Task organization and scheduling. The platforms allow the creation, modification and maintenance of benchmarking tasks, either through a graphical user interface (GUI) or by using task bundles (most commonly written in JSON, XML, Python or custom scripting languages). Competition organizers can define their task, along with sub-tasks that explore different facets of the targeted data. Scheduling is another important feature, as some parts of the data may need to be kept private until a certain moment in time, and organizers may want to hide the results of other teams until a certain point in the competition. We consider the latter an important capability, as participants may feel discouraged from continuing their participation if their initial results do not compare well with those of other participants. Another noteworthy feature is run quantity management, which allows organizers to specify a maximum number of allowed runs per participant during the benchmarking task. This limitation discourages participants from attempting to solve the given tasks with brute-force approaches, where they implement a large number of models and model variations. As a result, participants are incentivized to delve deeper into the data, critically analyzing why certain methods succeed and others fall short.
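As an illustration, the sketch below shows what such a task bundle could look like, with sub-tasks, a release schedule, and a run limit. Every field name and value is a purely hypothetical assumption made for this column; each platform defines its own bundle format (JSON, XML, or scripts).

```python
# Hypothetical task bundle expressed as a Python dict; real platforms use their
# own JSON/XML/script-based schemas, so all field names below are illustrative.
from datetime import datetime

task_bundle = {
    "title": "Example Multimedia Benchmark 2024",
    "subtasks": [
        {"id": "subtask1-classification", "metric": "f1_macro"},
        {"id": "subtask2-retrieval", "metric": "map@10"},
    ],
    "schedule": {
        "training_data_release": datetime(2024, 1, 15),
        "test_data_release": datetime(2024, 4, 1),          # test data kept private until this date
        "submission_deadline": datetime(2024, 5, 1),
        "leaderboard_public_from": datetime(2024, 5, 15),   # hide other teams' scores until then
    },
    "max_runs_per_participant": 5,  # discourages brute-force submission strategies
}
```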
Scoring and reproducibility. EaaS platforms generally deploy two paradigms, sometimes side-by-side, with regard to AI model testing and results generation [1, 2]: the Data-to-Algorithm (D2A) approach and the Algorithm-to-Data (A2D) approach. The former refers to competitions where participants download the testing set, run their prediction systems on their own machines, and provide the predictions to the organizers, usually in CSV format in the multimedia domain. In this setup, the ground truth for the testing set is kept private; after the organizers receive the prediction files, they communicate the performance to the participants, or the results are computed automatically by the platform, using organizer-provided scripts, once the files are uploaded. The A2D approach, on the other hand, is more complex, may incur additional financial costs, and may be more time consuming for both organizers and task participants, but it increases the trustworthiness and reproducibility of the task and of the AI models themselves. In this setup, organizers provide cloud-based computing resources via Virtual Machines (VMs) and containers, together with a common processing pipeline or API that competitors must integrate into their source code. The participants develop the wrappers that integrate their AI models accordingly and upload the models directly to the EaaS platform. The AI models are then executed according to the common pipeline and the results are automatically provided to the participants, while the testing data can be kept completely private. To achieve this, EaaS platforms typically offer integration with cloud computing platforms like Amazon AWS, Microsoft Azure, or Google Cloud, and offer Docker integration for the creation of containers where the code can be hosted.
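To make the D2A workflow concrete, the following minimal sketch shows what an organizer-provided scoring script could look like: it compares a participant's prediction CSV against the privately held ground truth and returns a single accuracy score. The file names, the "id,label" column layout, and the choice of accuracy as the metric are assumptions made purely for illustration; real tasks define their own submission formats and metrics.

```python
# Minimal sketch of an organizer-side D2A scoring script: it compares a
# participant's prediction CSV against the private ground truth and reports
# accuracy. File names and the "id,label" column layout are illustrative
# assumptions; real tasks define their own submission format and metrics.
import csv

def load_labels(path):
    """Read a CSV with 'id' and 'label' columns into a dict."""
    with open(path, newline="") as f:
        return {row["id"]: row["label"] for row in csv.DictReader(f)}

def score(submission_path, ground_truth_path):
    predictions = load_labels(submission_path)
    ground_truth = load_labels(ground_truth_path)
    correct = sum(1 for sample_id, label in ground_truth.items()
                  if predictions.get(sample_id) == label)
    return correct / len(ground_truth)

if __name__ == "__main__":
    # The ground truth stays on the organizers' server; only the score is returned.
    print(f"Accuracy: {score('submission.csv', 'private_ground_truth.csv'):.4f}")
```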
Communication and dissemination. EaaS platforms enable interaction between competition organizers and participants, either through emails, automatic notifications, or forums where interested parties can exchange ideas, ask questions, offer help, or signal potential problems in the data or scripts associated with the tasks.
Popular multimedia EaaS platforms
This section presents some of the most popular benchmarking platforms aimed at the multimedia domain. We present some key features and associated popular multimedia datasets for the following platforms: Kaggle, AIcrowd, Codabench, DrivenData, and EvalAI.
Kaggle is perhaps the most popular benchmarking platform at this moment, and it goes beyond providing datasets and benchmarking competitions, also hosting AI models, courses, and source code repositories. Competition organizers can design their tasks under either the D2A or the A2D paradigm, giving participants the possibility of integrating their AI models into Jupyter Notebooks for reproducibility. The platform also gives the option of allotting CPU and GPU cloud-based resources for A2D competitions. The Kaggle repository offers code for a large number of additional competition management tools and communication APIs. Among an impressive number of datasets and competitions, Kaggle currently hosts competitions that use the original MNIST data [3] and other MNIST-like datasets such as Fashion-MNIST [4], as well as datasets on subjects ranging from sentiment analysis in social media [5] to medical image processing [6].
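As a brief illustration of a typical D2A workflow on Kaggle, the sketch below uses the official kaggle Python package to download a competition's data and upload a predictions file. It assumes valid API credentials stored in ~/.kaggle/kaggle.json, uses the MNIST-based "digit-recognizer" competition as an example, and the method names reflect the package at the time of writing; it is a sketch, not an official recipe.

```python
# Sketch of a D2A workflow using the official "kaggle" package (pip install kaggle).
# Assumes API credentials in ~/.kaggle/kaggle.json; the "digit-recognizer"
# competition (MNIST) is used as an example.
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()

# Download the competition data (training set, public test set, sample submission).
api.competition_download_files("digit-recognizer", path="data/")

# ... train a model locally and write predictions to submission.csv ...

# Upload the predictions; the platform scores them against the private ground truth.
api.competition_submit("submission.csv", "baseline run", "digit-recognizer")
```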
AIcrowd is an open source EaaS platform for open benchmarking challenges that puts an emphasis on connections and collaborative work between data science and machine learning experts. The platform offers the source code for command line interface (CLI) and API clients that can interact with the AIcrowd servers. ImageCLEF, between 2018 and 2022 [7-11], is one of the most popular multimedia benchmarking initiatives hosted on AIcrowd, featuring diverse multimedia topics such as lifelogging, medical image processing, image processing for environment health prediction, the analysis of social media dangers with regard to image sharing, and ensemble learning for multimedia data.
Codabench, launched in August 2023, and its precursor CodaLab are two open source benchmarking platforms that provide a large number of options, including both the A2D and D2A approaches, as well as “inverted benchmarks”, where organizers provide the reference algorithms and participants contribute the datasets. Among the challenges currently running on this platform, two standouts are the Quality-of-Service-oriented challenges on audio-video synchronization error detection and error measurement, which are part of the 3rd Workshop on Image/Video/Audio Quality in Computer Vision and Generative AI at the Winter Conference on Applications of Computer Vision (WACV 2024).
DrivenData targets the intersection of data science and social impact. The platform hosts competitions that integrate the social aspect of their domain of interest directly into their mission and definition, while also hosting a number of open source projects and competition-winning AI models. Given its emphasis on social impact, the platform hosts a number of benchmarking challenges that target social issues, such as the detection of hateful memes [12] and image-based nature conservation efforts.
EvalAI is another open source platform able to create both A2D and D2A competition environments, while also integrating optimization steps that allow evaluation code to run faster on multi-core cloud infrastructure. The EvalAI platform hosts many diverse multimedia-centric competitions, including image segmentation tasks based on LVIS [13] and a wide range of sports-related tasks [14].
Future directions, developments and other tools
While the tools and platforms described in the previous section represent just a portion of the EaaS platforms currently available to the research community, we would also like to mention some projects that are currently in the development stage or that can be considered additional tools for benchmarking initiatives:
- The AI4Media benchmarking platform is currently in the prototype and development stage. Among its most interesting features and ideas promoted by the platform developers is the creation of complexity metrics that would help competition organizers understand the computational efficiency and resource requirements of the submitted systems (a toy sketch of this idea is shown after this list).
- BenchmarkSTT started as a specialized benchmarking platform for speech-to-text, but is now evolving in different directions, including facial recognition in videos.
- The PapersWithCode platform, while not a benchmarking platform per se, is useful as a repository that collects the results of AI models on datasets throughout the years and groups different datasets studying the same concepts under the same umbrella (e.g., Image Classification, Object Detection, Medical Image Segmentation), while also providing links to scientific papers, GitHub implementations of the models, and the datasets themselves. This may represent a good starting point for young researchers who are trying to understand the history and state of the art of certain domains and applications.
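To illustrate the kind of complexity metric mentioned above for the AI4Media platform, the following toy sketch times repeated calls to a prediction function and records its peak memory usage, giving organizers rough efficiency indicators for a submitted system. This is a sketch of the general idea only, with a dummy predictor standing in for a real model; it is not the platform's actual implementation.

```python
# Toy illustration of a "complexity metric": average latency and peak memory of a
# submitted prediction function. Not the AI4Media platform's actual metric.
import time
import tracemalloc

def profile_predictor(predict, sample, repetitions=100):
    """Return average latency (seconds) and peak memory (bytes) of predict(sample)."""
    tracemalloc.start()
    start = time.perf_counter()
    for _ in range(repetitions):
        predict(sample)
    elapsed = time.perf_counter() - start
    _, peak_memory = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed / repetitions, peak_memory

if __name__ == "__main__":
    # Dummy "model" standing in for a participant's submitted system.
    dummy_predict = lambda x: sum(v * v for v in x)
    latency, memory = profile_predictor(dummy_predict, sample=list(range(10_000)))
    print(f"avg latency: {latency * 1e3:.2f} ms, peak memory: {memory / 1024:.1f} KiB")
```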
Conclusions
Benchmarking platforms represent a key component of the benchmarking ecosystem, pushing for fairness and trustworthiness in AI model comparison, while also providing tools that may foster reproducibility in AI. We are happy to see that many of the platforms discussed in this article are open source, or have open source components, allowing interested scientists to create their own custom implementations of these platforms and to adapt them, when necessary, to their particular fields.
Acknowledgements
The work presented in this column is supported under the H2020 AI4Media “A European Excellence Centre for Media, Society and Democracy” project, contract #951911.
References
[1] Hanbury, A., Müller, H., Balog, K., Brodt, T., Cormack, G. V., Eggel, I., Gollub, T., Hopfgartner, F., Kalpathy-Cramer, J., Kando, N., Krithara, A., Lin, J., Mercer, S. & Potthast, M. (2015). Evaluation-as-a-service: Overview and outlook. arXiv preprint arXiv:1512.07454.
[2] Hanbury, A., Müller, H., Langs, G., Weber, M. A., Menze, B. H., & Fernandez, T. S. (2012). Bringing the algorithms to the data: cloud-based benchmarking for medical image analysis. In Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics: Third International Conference of the CLEF Initiative, CLEF 2012, Rome, Italy, September 17-20, 2012. Proceedings 3 (pp. 24-29). Springer Berlin Heidelberg.
[3] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
[4] Xiao, H., Rasul, K., & Vollgraf, R. (2017). Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
[5] Niu, T., Zhu, S., Pang, L., & El Saddik, A. (2016). Sentiment analysis on multi-view social data. In MultiMedia Modeling: 22nd International Conference, MMM 2016, Miami, FL, USA, January 4-6, 2016, Proceedings, Part II 22 (pp. 15-27). Springer International Publishing.
[6] Thambawita, V., Hicks, S. A., Storås, A. M., Nguyen, T., Andersen, J. M., Witczak, O., … & Riegler, M. A. (2023). VISEM-Tracking, a human spermatozoa tracking dataset. Scientific Data, 10(1), 1-8.
[7] Ionescu, B., Müller, H., Villegas, M., García Seco de Herrera, A., Eickhoff, C., Andrearczyk, V., … & Gurrin, C. (2018). Overview of ImageCLEF 2018: Challenges, datasets and evaluation. In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 9th International Conference of the CLEF Association, CLEF 2018, Avignon, France, September 10-14, 2018, Proceedings 9 (pp. 309-334). Springer International Publishing.
[8] Ionescu, B., Müller, H., Péteri, R., Dang-Nguyen, D. T., Piras, L., Riegler, M., … & Karampidis, K. (2019). ImageCLEF 2019: Multimedia retrieval in lifelogging, medical, nature, and security applications. In Advances in Information Retrieval: 41st European Conference on IR Research, ECIR 2019, Cologne, Germany, April 14–18, 2019, Proceedings, Part II 41 (pp. 301-308). Springer International Publishing.
[9] Ionescu, B., Müller, H., Péteri, R., Dang-Nguyen, D. T., Zhou, L., Piras, L., … & Constantin, M. G. (2020). ImageCLEF 2020: Multimedia retrieval in lifelogging, medical, nature, and internet applications. In Advances in Information Retrieval: 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14–17, 2020, Proceedings, Part II 42 (pp. 533-541). Springer International Publishing.
[10] Ionescu, B., Müller, H., Péteri, R., Abacha, A. B., Demner-Fushman, D., Hasan, S. A., … & Popescu, A. (2021). The 2021 ImageCLEF Benchmark: Multimedia retrieval in medical, nature, internet and social media applications. In Advances in Information Retrieval: 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28–April 1, 2021, Proceedings, Part II 43 (pp. 616-623). Springer International Publishing.
[11] de Herrera, A. G. S., Ionescu, B., Müller, H., Péteri, R., Abacha, A. B., Friedrich, C. M., … & Dogariu, M. (2022, April). ImageCLEF 2022: Multimedia retrieval in medical, nature, fusion, and internet applications. In European Conference on Information Retrieval (pp. 382-389). Cham: Springer International Publishing.
[12] Kiela, D., Firooz, H., Mohan, A., Goswami, V., Singh, A., Fitzpatrick, C. A., … & Parikh, D. (2021, August). The hateful memes challenge: Competition report. In NeurIPS 2020 Competition and Demonstration Track (pp. 344-360). PMLR.
[13] Gupta, A., Dollar, P., & Girshick, R. (2019). LVIS: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5356-5364).
[14] Giancola, S., Cioppa, A., Deliège, A., Magera, F., Somers, V., Kang, L., … & Li, Z. (2022, October). SoccerNet 2022 challenges results. In Proceedings of the 5th International ACM Workshop on Multimedia Content Analysis in Sports (pp. 75-86).