Overview of Open Dataset Sessions and Benchmarking Competitions in 2022 – Part 2 (MDRE at MMM 2022, ACM MM 2022)


In this Dataset Column, we present a review of some of the notable events related to open datasets and benchmarking competitions in the field of multimedia. This year’s selection highlights the wide range of topics and datasets currently of interest to the community. The events covered in this review include special sessions on open datasets and competitions featuring multimedia data. While this list of about 40 datasets is not exhaustive, it is meant to showcase the diversity of subjects and datasets explored in the field. This year’s review follows similar efforts from the previous year (https://records.sigmm.org/2022/01/12/overview-of-open-dataset-sessions-and-benchmarking-competitions-in-2021/), highlighting the ongoing importance of open datasets and benchmarking competitions in advancing research and development in multimedia. The column is divided into three parts; in this one, we focus on MDRE at MMM 2022 and ACM MM 2022:

  • Multimedia Datasets for Repeatable Experimentation at the 28th International Conference on Multimedia Modeling (MDRE at MMM 2022 – https://mmm2022.org/ssp.html#mdre). We summarize the three datasets presented during MDRE, addressing topics such as the design of user-centric video search competitions, a benchmark dataset (GPR1200) for evaluating the performance of deep neural networks on general image retrieval, and a dataset for evaluating the performance of Question Answering (QA) systems on lifelog data (LLQA).
  • Selected datasets at the 30th ACM Multimedia Conference (MM ’22 – https://2022.acmmm.org/). For a general report from ACM Multimedia 2022, please see (https://records.sigmm.org/2022/12/07/report-from-acm-multimedia-2022-by-nitish-nagesh/). We summarize nine datasets presented during the conference, targeting several topics: a dataset for multimodal intent recognition (MIntRec), an audio-visual question answering dataset (AVQA), a large-scale radar dataset (mmWave), a multimodal sticker emotion recognition dataset (SER30K), a video-sentence dataset for vision-language pre-training (ACTION), a dataset of head and gaze behavior for 360-degree videos, a saliency in augmented reality dataset (SARD), a multimodal dataset for spotting the differences between pairs of similar images (DialDiff), and a visual grounding dataset of large-scale remote sensing images (RSVG).

For the overview of datasets related to QoMEX 2022 and ODS at MMSys ’22 please check the first part (https://records.sigmm.org/?p=12292), while ImageCLEF 2022 and MediaEval 2022 are addressed in the third part (http://records.sigmm.org/?p=12362).

MDRE at MMM 2022

The Multimedia Datasets for Repeatable Experimentation (MDRE) special session was part of the 28th International Conference on Multimedia Modeling (MMM 2022), held in Phu Quoc, Vietnam, June 6-10, 2022, with support for both online and onsite presentations. The session was organized by Cathal Gurrin (Dublin City University, Ireland), Duc-Tien Dang-Nguyen (University of Bergen, Norway), Björn Þór Jónsson (IT University of Copenhagen, Denmark), Adam Jatowt (University of Innsbruck, Austria), Liting Zhou (Dublin City University, Ireland) and Graham Healy (Dublin City University, Ireland). Details regarding this session can be found at: https://mmm2022.org/ssp.html#mdre

The MDRE’22 special session at MMM’22 is the fourth MDRE session and represents an opportunity for interested researchers to submit their datasets to this track. The work submitted to MDRE is permanently available at http://mmdatasets.org, where all current and past editions of MDRE are hosted. Along with the dataset itself, authors are asked to provide a paper describing its motivation, design, and usage, a brief summary of the experiments performed to date on the dataset, and a discussion of how it can be useful to the community.

A Task Category Space for User-Centric Comparative Multimedia Search Evaluations
Paper available at: https://doi.org/10.1007/978-3-030-98358-1_16
Lokoč, J., Bailer, W., Barthel, K.U., Gurrin, C., Heller, S., Jónsson, B., Peška, L., Rossetto, L., Schoeffmann, K., Vadicamo, L., Vrochidis, S., Wu, J.
Charles University, Prague, Czech Republic; JOANNEUM RESEARCH, Graz, Austria; HTW Berlin, Berlin, Germany; Dublin City University, Dublin, Ireland; University of Basel, Basel, Switzerland; IT University of Copenhagen, Copenhagen, Denmark; University of Zurich, Zurich, Switzerland; Klagenfurt University, Klagenfurt, Austria; ISTI CNR, Pisa, Italy; Centre for Research and Technology Hellas, Thessaloniki, Greece; City University of Hong Kong, Hong Kong.
Dataset available at: On request

The authors analyze the spectrum of possible task categories and propose a list of individual axes that together define a large space of task categories. Using this concept of a category space, new user-centric video search competitions can be designed to benchmark video search systems from different perspectives. They further analyze the three task categories considered at the Video Browser Showdown and discuss possible (but sometimes challenging) shifts within the task category space.

GPR1200: A Benchmark for General-Purpose Content-Based Image Retrieval
Paper available at: https://doi.org/10.1007/978-3-030-98358-1_17
Schall, K., Barthel, K.U., Hezel, N., Jung, K.
Visual Computing Group, HTW Berlin, University of Applied Sciences, Germany.
Dataset available at: http://visual-computing.com/project/GPR1200

In this study, the authors developed a new dataset called GPR1200 to evaluate the performance of deep neural networks on general-purpose content-based image retrieval (CBIR). They found that large-scale pretraining significantly improves retrieval performance and that further improvement can be achieved through fine-tuning. GPR1200 is presented as an easy-to-use and accessible yet challenging benchmark dataset with a broad range of image categories.
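To illustrate how a general-purpose retrieval benchmark of this kind is typically used, the following minimal sketch embeds images with any pretrained network, ranks the gallery by cosine similarity, and scores the ranking with mean average precision. This is only an illustrative approximation of a CBIR evaluation, not the official GPR1200 protocol; the toy data and helper names are placeholders.

```python
# Minimal sketch of a content-based image retrieval evaluation,
# in the spirit of benchmarks such as GPR1200 (illustrative only;
# the official protocol and data layout may differ).
import numpy as np

def average_precision(ranked_labels: np.ndarray, query_label: int) -> float:
    """AP for one query: ranked_labels are gallery labels sorted by
    descending similarity to the query."""
    relevant = (ranked_labels == query_label).astype(float)
    if relevant.sum() == 0:
        return 0.0
    cum_hits = np.cumsum(relevant)
    precision_at_k = cum_hits / (np.arange(len(relevant)) + 1)
    return float((precision_at_k * relevant).sum() / relevant.sum())

def mean_average_precision(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """Each image is used as a query against all other images."""
    # L2-normalise so that the dot product equals cosine similarity.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = emb @ emb.T
    np.fill_diagonal(sims, -np.inf)              # push the query itself to the end
    aps = []
    for i in range(len(labels)):
        order = np.argsort(-sims[i])[:-1]        # drop the query (ranked last)
        aps.append(average_precision(labels[order], labels[i]))
    return float(np.mean(aps))

# Usage with random toy data (replace with real CNN/ViT embeddings):
rng = np.random.default_rng(0)
fake_embeddings = rng.normal(size=(100, 128))
fake_labels = rng.integers(0, 10, size=100)
print("toy mAP:", mean_average_precision(fake_embeddings, fake_labels))
```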

LLQA – Lifelog Question Answering Dataset
Paper available at: https://doi.org/10.1007/978-3-030-98358-1_18
Tran, L.-D., Ho, T.C., Pham, L.A., Nguyen, B., Gurrin, C., Zhou, L.
Dublin City University, Dublin, Ireland; Vietnam National University, Ho Chi Minh University of Science, Ho Chi Minh City, Viet Nam; AISIA Research Lab, Ho Chi Minh City, Viet Nam.
Dataset available at: https://github.com/allie-tran/LLQA

This study presents the Lifelog Question Answering Dataset (LLQA), a new dataset for evaluating the performance of Question Answering (QA) systems on lifelog data. The dataset includes over 15,000 multiple-choice questions built on top of an augmented 85-day lifelog collection and is intended to serve as a benchmark for future research in this area. The results of the study showed that QA on lifelog data is a challenging task that requires further exploration.

ACM MM 2022

Numerous dataset-related papers have been presented at the 30th ACM International Conference on Multimedia (MM’ 22), organized in Lisbon, Portugal, October 10 – 14, 2022 (https://2022.acmmm.org/). The complete MM ’22: Proceedings of the 30th ACM International Conference on Multimedia are available in the ACM Digital Library (https://dl.acm.org/doi/proceedings/10.1145/3503161).

There was no dedicated dataset session among the roughly 35 sessions at MM ’22. However, the importance of datasets is illustrated by the following statistics, quantifying how often the term “dataset” appears in the MM ’22 Proceedings: the term appears in the title of 9 papers (7 last year), in the keywords of 35 papers (66 last year), and in the abstracts of 438 papers (339 last year). As a small sample, nine selected papers focused primarily on new datasets with publicly available data are listed below. These contributions address various multimedia applications, e.g., understanding multimedia content, multimodal fusion and embeddings, media interpretation, vision and language, engaging users with multimedia, emotional and social signals, interactions and Quality of Experience, and multimedia search and recommendation.

MIntRec: A New Dataset for Multimodal Intent Recognition
Paper available at: https://doi.org/10.1145/3503161.3547906
Zhang, H., Xu, H., Wang, X., Zhou, Q., Zhao, S., Teng, J.
Tsinghua University, Beijing, China.
Dataset available at: https://github.com/thuiar/MIntRec

MIntRec is a dataset for multimodal intent recognition with 2,224 samples in the text, video, and audio modalities, based on data collected from the TV series Superstore and annotated with twenty intent categories and speaker bounding boxes. Baseline models are built by adapting multimodal fusion methods and show significant improvement over the text-only modality. MIntRec is useful for studying relationships between modalities and improving intent recognition.

AVQA: A Dataset for Audio-Visual Question Answering on Videos
Paper available at: https://doi.org/10.1145/3503161.3548291
Yang, P., Wang, X., Duan, X., Chen, H., Hou, R., Jin, C., Zhu, W.
Tsinghua University, Shenzhen, China; Communication University of China, Beijing, China.
Dataset available at: https://mn.cs.tsinghua.edu.cn/avqa

The audio-visual question answering dataset (AVQA) is introduced for videos of real-life scenarios. It includes 57,015 videos and 57,335 question-answer pairs that rely on clues from both the audio and visual modalities. A Hierarchical Audio-Visual Fusing module is proposed to model correlations among the audio, visual, and text modalities. AVQA can be used to test models that require a deeper understanding of multimodal information for audio-visual question answering in real-life scenarios.

mmBody Benchmark: 3D Body Reconstruction Dataset and Analysis for Millimeter Wave Radar
Paper available at: https://doi.org/10.1145/3503161.3548262
Chen, A., Wang, X., Zhu, S., Li, Y., Chen, J., Ye, Q.
Zhejiang University, Hangzhou, China.
Dataset available at: On request

A large-scale mmWave radar dataset with synchronized and calibrated point clouds and RGB(D) images is presented, along with an automatic 3D body annotation system. State-of-the-art methods are trained and tested on the dataset, showing that mmWave radar can achieve better 3D body reconstruction accuracy than an RGB camera but worse than a depth camera. The dataset and results provide insights into improving mmWave radar reconstruction and combining signals from different sensors.

SER30K: A Large-Scale Dataset for Sticker Emotion Recognition
Paper available at: https://doi.org/10.1145/3503161.3548407
Liu, S., Zhang, X., Yan, J.
Nankai University, Tianjin, China.
Dataset available at: https://github.com/nku-shengzheliu/SER30K

A new multimodal sticker emotion recognition dataset called SER30K with 1,887 sticker themes and 30,739 images is introduced for understanding emotions in stickers. A proposed method called LORA, using a vision transformer and local re-attention module, effectively extracts visual and language features for emotion recognition on SER30K and other datasets.

Auto-captions on GIF: A Large-scale Video-sentence Dataset for Vision-language Pre-training
Paper available at: https://doi.org/10.1145/3503161.3551581
Pan, Y., Li, Y., Luo, J., Xu, J., Yao, T., Mei, T.
JD Explore Academy, Beijing, China.
Dataset available at: http://www.auto-video-captions.top/2022/dataset

A new large-scale pre-training dataset, Auto-captions on GIF (ACTION), is presented for generic video understanding. It contains video-sentence pairs extracted and filtered from web pages and can be used for pre-training and downstream tasks such as video captioning and sentence localization. Comparisons with existing video-sentence datasets are made.

Where Are You Looking?: A Large-Scale Dataset of Head and Gaze Behavior for 360-Degree Videos and a Pilot Study
Paper available at: https://doi.org/10.1145/3503161.3548200
Jin, Y., Liu, J., Wang, F., Cui, S.
The Chinese University of Hong Kong, Shenzhen, Shenzhen, China.
Dataset available at: https://cuhksz-inml.github.io/head_gaze_dataset/

A dataset of users’ head and gaze behaviors in 360° videos is presented, featuring rich dimensions, a large scale, strong diversity, and high sampling frequency. A quantitative taxonomy for 360° videos is also proposed, containing three objective technical metrics. Results of a pilot study on users’ behaviors and a case of application in tile-based 360° video streaming show the usefulness of the dataset for improving the performance of existing works.

Saliency in Augmented Reality
Paper available at: https://doi.org/10.1145/3503161.3547955
Duan, H., Shen, W., Min, X., Tu, D., Li, J., Zhai, G.
Shanghai Jiao Tong University, Shanghai, China; Alibaba Group, Hangzhou, China.
Dataset available at: https://github.com/DuanHuiyu/ARSaliency

A dataset, Saliency in AR Dataset (SARD), containing 450 background, 450 AR, and 1350 superimposed images with three mixing levels, is constructed to study the interaction between background scenes and AR contents, and the saliency prediction problem in AR. An eye-tracking experiment is conducted among 60 subjects to collect data.

Visual Dialog for Spotting the Differences between Pairs of Similar Images
Paper available at: https://doi.org/10.1145/3503161.3548170
Zheng, D., Meng, F., Si, Q., Fan, H., Xu, Z., Zhou, J., Feng, F., Wang, X.
Beijing University of Posts and Telecommunications, Beijing, China; WeChat AI, Tencent Inc, Beijing, China; Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; University of Trento, Trento, Italy.
Dataset available at: https://github.com/zd11024/Spot_Difference

A new visual dialog task called Dial-the-Diff is proposed, in which two interlocutors access two similar images and try to spot the difference between them through conversation in natural language. A large-scale multi-modal dataset called DialDiff, containing 87k Virtual Reality images and 78k dialogs, is built for the task. Benchmark models are also proposed and evaluated to bring new challenges to dialog strategy and object categorization.

Visual Grounding in Remote Sensing Images
Paper available at: https://doi.org/10.1145/3503161.3548316
Sun, Y., Feng, S., Li, X., Ye, Y., Kang, J., Huang, X.
Harbin Institute of Technology, Shenzhen, Shenzhen, China; Soochow University, Suzhou, China.
Dataset available at: https://sunyuxi.github.io/publication/GeoVG

A new problem of visual grounding in large-scale remote sensing images has been presented, in which the task is to locate particular objects in an image by a natural language expression. A new dataset, called RSVG, has been collected and a new method, GeoVG, has been designed to address the challenges of existing methods in dealing with remote sensing images.

Overview of Open Dataset Sessions and Benchmarking Competitions in 2022 – Part 3 (ImageCLEF 2022, MediaEval 2022)

In this Dataset Column, we present a review of some of the notable events related to open datasets and benchmarking competitions in the field of multimedia. This year’s selection highlights the wide range of topics and datasets currently of interest to the community. The events covered in this review include special sessions on open datasets and competitions featuring multimedia data. While this list of about 40 datasets is not exhaustive, it is meant to showcase the diversity of subjects and datasets explored in the field. This year’s review follows similar efforts from the previous year (https://records.sigmm.org/2022/01/12/overview-of-open-dataset-sessions-and-benchmarking-competitions-in-2021/), highlighting the ongoing importance of open datasets and benchmarking competitions in advancing research and development in multimedia. The column is divided into three parts; in this one, we focus on ImageCLEF 2022 and MediaEval 2022:

  • ImageCLEF 2022 (https://www.imageclef.org/2022). We summarize the 5 datasets launched for the benchmarking tasks, related to topics such as social media profile assessment (ImageCLEFaware), segmentation and labeling of underwater coral images (ImageCLEFcoral), late fusion ensembling systems for multimedia data (ImageCLEFfusion), and medical imaging analysis (ImageCLEFmedical Caption and ImageCLEFmedical Tuberculosis).
  • MediaEval 2022 (https://multimediaeval.github.io/editions/2022/). We summarize the 11 datasets launched for the benchmarking tasks, which target a wide range of multimedia topics such as the analysis of flood-related media (DisasterMM), game analytics (Emotional Mario), news item processing (FakeNews, NewsImages), multimodal understanding of smells (MUSTI), medical imaging (Medico), fishing vessel analysis (NjordVid), media memorability (Memorability), sports data analysis (Sport Task, SwimTrack), and urban pollution analysis (Urban Air).

For the overview of datasets related to QoMEX 2022 and ODS at MMSys ’22 please check the first part (https://records.sigmm.org/?p=12292), while MDRE at MMM 2022 and ACM MM 2022 are addressed in the second part (http://records.sigmm.org/?p=12360).

ImageCLEF 2022

ImageCLEF is a multimedia evaluation campaign, part of the CLEF initiative (http://www.clef-initiative.eu/). The 2022 edition (https://www.imageclef.org/2022) is the 19th edition of this initiative and addresses four main research tasks in several domains: medicine, nature, social media content, and user interface processing. ImageCLEF 2022 is organized by Bogdan Ionescu (University Politehnica of Bucharest, Romania), Henning Müller (University of Applied Sciences Western Switzerland, Sierre, Switzerland), Renaud Péteri (University of La Rochelle, France), Ivan Eggel (University of Applied Sciences Western Switzerland, Sierre, Switzerland) and Mihai Dogariu (University Politehnica of Bucharest, Romania).

ImageCLEFaware
Paper available at: https://ceur-ws.org/Vol-3180/paper-98.pdf
Popescu, A., Deshayes-Chossar, J., Schindler, H., Ionescu, B.
CEA LIST, France; University Politehnica of Bucharest, Romania.
Dataset available at: https://www.imageclef.org/2022/aware

This is the second edition of the aware task at ImageCLEF, and it seeks to understand how public social media profiles affect users in four important real-life scenarios, each representing a search or application for: a bank loan, an accommodation, a job as a waitress/waiter, and a job in IT.

ImageCLEFcoral
Paper available at: https://ceur-ws.org/Vol-3180/paper-97.pdf
Chamberlain, J., de Herrera, A.G.S., Campello, A., Clark, A.
University of Essex, United Kingdom; Wellcome Trust, United Kingdom.
Dataset available at: https://www.imageclef.org/2022/coral

This fourth edition of the coral task addresses the problem of segmenting and labeling a set of underwater images used in the monitoring of coral reefs. The task proposes two subtasks, namely an annotation and localization subtask and a pixel-wise parsing subtask.

ImageCLEFfusion
Paper available at: https://ceur-ws.org/Vol-3180/paper-99.pdf
Ştefan, L-D., Constantin, M.G., Dogariu, M., Ionescu, B.
University Politehnica of Bucharest, Romania.
Dataset available at: https://www.imageclef.org/2022/fusion

This is the first edition of the fusion task, and it proposes several scenarios adapted to the use of late fusion or ensembling systems. The two scenarios correspond to a regression task, using data associated with the prediction of media interestingness, and a retrieval task, using data associated with search result diversification.

ImageCLEFmedical Tuberculosis
Paper available at: https://ceur-ws.org/Vol-3180/paper-96.pdf
Kozlovski, S., Dicente Cid, Y., Kovalev, V., Müller, H.
United Institute of Informatics Problems, Belarus; Roche Diagnostics, Spain; University of Applied Sciences Western Switzerland, Switzerland; University of Geneva, Switzerland.
Dataset available at: https://www.imageclef.org/2022/medical/tuberculosis

This task is now at its sixth edition and has been upgraded to a detection problem. Furthermore, two subtasks are now included: the detection of lung cavern regions in lung CT images associated with lung tuberculosis, and the prediction of 4 binary features of caverns suggested by experienced radiologists.

ImageCLEFmedical Caption
Paper available at: https://ceur-ws.org/Vol-3180/paper-95.pdf
Rückert, J., Ben Abacha, A., de Herrera, A.G.S., Bloch, L., Brüngel, R., Idrissi-Yaghir, A., Schäfer, H., Müller, H., Friedrich, C.M.
University of Applied Sciences and Arts Dortmund, Germany; Microsoft, USA; University of Essex, UK; University Hospital Essen, Germany; University of Applied Sciences Western Switzerland, Switzerland; University of Geneva, Switzerland.
Dataset available at: https://www.imageclef.org/2022/medical/caption

The sixth edition of this task consists of two subtasks. In the first subtask, participants must detect relevant medical concepts in a large corpus of medical images, while in the second subtask coherent captions must be generated for medical images, taking into account the entire context of the image and the interplay of the many visible concepts.

MediaEval 2022

The MediaEval Multimedia Evaluation benchmark (https://multimediaeval.github.io/) offers challenges in artificial intelligence for multimedia data. This is the 13th edition of MediaEval (https://multimediaeval.github.io/editions/2022/), and 11 tasks were proposed, targeting a large number of challenges through algorithms for retrieval, analysis, and exploration. For this edition, a “Quest for Insight” is pursued: organizers are encouraged to propose interesting and insightful questions about the concepts that will be explored, and participants are encouraged to push beyond merely improving evaluation scores and to also work toward a deeper understanding of the challenges.

DisasterMM: Multimedia Analysis of Disaster-Related Social Media Data
Preprint available at: https://2022.multimediaeval.com/paper5337.pdf
Andreadis, S., Bozas, A., Gialampoukidis, I., Mavropoulos, T., Moumtzidou, A., Vrochidis, S., Kompatsiaris, I., Fiorin, R., Lombardo, F., Norbiato, D., Ferri, M.
Information Technologies Institute – Centre of Research and Technology Hellas, Greece; Eastern Alps River Basin District, Italy.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/disastermm/

The DisasterMM task proposes the analysis of social media data extracted from Twitter, targeting the analysis of natural or man-made disaster posts. For this year, the organizers focused on the analysis of flooding events and proposed two subtasks: relevance classification of posts and location extraction from texts.

Emotional Mario: A Game Analytics Challenge
Preprint or paper not published yet.
Lux, M., Alshaer, M., Riegler, M., Halvorsen, P., Thambawita, V., Hicks, S., Dang-Nguyen, D.-T.
Alpen-Adria-Universität Klagenfurt, Austria; SimulaMet, Norway; University of Bergen, Norway.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/emotionalmario/

Emotional Mario focuses on the Super Mario Bros videogame, analyzing data associated with gamers that consists of game input, demographics, biomedical data, and video of the players’ faces. Two subtasks are proposed: event detection, seeking to identify gaming events of significant importance based on facial videos and biometric data, and gameplay summarization, seeking to select the best moments of gameplay.

FakeNews Detection
Preprint available at: https://2022.multimediaeval.com/paper116.pdf
Pogorelov, K., Schroeder, D.T., Brenner, S., Maulana, A., Langguth, J.
Simula Research Laboratory, Norway; University of Bergen, Norway; Stuttgart Media University, Germany.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/fakenews/

The FakeNews Detection task proposes several methods of analyzing fake news and the way it spreads, using COVID-19 related conspiracy theories. The competition proposes three subtasks: the first targets conspiracy detection in text-based data, the second asks participants to analyze graphs of conspiracy posters, while the last one combines the first two, aiming at detection on both text and graph data.

MUSTI – Multimodal Understanding of Smells in Texts and Images
Preprint available at: https://2022.multimediaeval.com/paper9634.pdf
Hürriyetoğlu, A., Paccosi, T., Menini, S., Zinnen, M., Lisena, P., Akdemir, K., Troncy, R., van Erp, M.
KNAW Humanities Cluster DHLab, Netherlands; Fondazione Bruno Kessler, Italy; Friedrich-Alexander-Universität, Germany; EURECOM, France.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/musti/

MUSTI is one of the few benchmarks that seek to analyze the underrepresented modality of smell. The organizers seek to further the understanding of descriptions of smell in texts and images and propose two subtasks: the first aims at the classification of smells based on language and image models, predicting whether texts or images evoke the same smell source or not, while the second asks participants to identify the common smell sources.

Medical Multimedia Task: Transparent Tracking of Spermatozoa
Preprint available at: https://2022.multimediaeval.com/paper5501.pdf
Thambawita, V., Hicks, S., Storås, A.M, Andersen, J.M., Witczak, O., Haugen, T.B., Hammer, H., Nguyen, T., Halvorsen, P., Riegler, M.A.
SimulaMet, Norway; OsloMet, Norway; The Arctic University of Norway, Norway.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/medico/

The Medico Medical Multimedia Task tackles the challenge of tracking sperm cells in video recordings while analyzing the specific characteristics of these cells. Four subtasks are proposed: real-time tracking of sperm cells in videos, prediction of cell motility, a catch-and-highlight task seeking to identify sperm cell speed, and an explainability task.

NewsImages
Preprint available at: https://2022.multimediaeval.com/paper8446.pdf
Kille, B., Lommatzsch, A., Özgöbek, Ö., Elahi, M., Dang-Nguyen, D.-T.
Norwegian University of Science and Technology, Norway; Berlin Institute of Technology, Germany; University of Bergen, Norway; Kristiania University College, Norway.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/newsimages/

The goal of the NewsImages task is to further the understanding of the relationship between textual and image content in news articles. Participants are tasked with re-linking and re-matching textual news articles with the corresponding images, based on data gathered from social media, news portals and RSS feeds.

NjordVid: Fishing Trawler Video Analytics Task
Preprint available at: https://2022.multimediaeval.com/paper5854.pdf
Nordmo, T.A.S., Ovesen, A.B., Johansen, H.D., Johansen, D., Riegler, M.A.
The Arctic University of Norway, Norway; SimulaMet, Norway.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/njord/

The NjordVid task proposes data associated with fishing vessel recordings, representing a solution for maintaining sustainable fishing practices. Two different subtasks are proposed: detection of events on the boat, like movement of people, catching fish, etc., and privacy of on-board personnel.

Predicting Video Memorability
Preprint available at: https://2022.multimediaeval.com/paper2265.pdf
Sweeney, L., Constantin, M.G., Demarty, C.-H., Fosco, C., de Herrera, A.G.S., Halder, S., Healy, G., Ionescu, B., Matran-Fernandez, A., Smeaton, A.F., Sultana, M.
Dublin City University, Ireland; University Politehnica of Bucharest, Romania; InterDigital, France; Massachusetts Institute of Technology Cambridge, USA; University of Essex, UK.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/memorability/

The Video Memorability task asks participants to predict how memorable a video sequence is, targeting short-term memorability. Three subtasks are proposed for this edition: a general video-based prediction task where participants are asked to predict the memorability score of a video, a generalization task where training and testing are performed on different sources of data, and an EEG-based task where annotator EEG scans are provided.

Sport Task: Fine Grained Action Detection and Classification of Table Tennis Strokes from Videos
Preprint available at: https://2022.multimediaeval.com/paper4766.pdf
Martin, P.-E., Calandre, J., Mansencal, B., Benois-Pineau, J., Péteri, R., Mascarilla, L., Morlier, J.
Max Planck Institute for Evolutionary Anthropology, Germany; La Rochelle University, France; Univ. Bordeaux, France.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/sportsvideo/

The Sport Task aims at action detection and classification in videos recorded at table tennis events. Low inter-class variability makes this task harder than other traditional action classification benchmarks. Two subtasks are proposed: a classification task where participants are asked to label table tennis videos according to the strokes the players make, and a detection task where participants must detect whether a stroke was made.

SwimTrack: Swimmers and Stroke Rate Detection in Elite Race Videos
Preprint available at: https://2022.multimediaeval.com/paper6876.pdf
Jacquelin, N., Jaunet, T., Vuillemot, R., Duffner, S.
École Centrale de Lyon, France; INSA-Lyon, France.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/swimtrack/

SwimTrack comprises five subtasks related to the analysis of competition-level swimming videos and provides multimodal video, image, and audio data. The five subtasks are as follows: a position detection task associating swimmers with swimming lane numbers, a stroke rate detection task, a camera registration task where participants must apply homography projection methods to create a top view of the pool, a character recognition task on scoreboards, and a sound detection task associated with buzzer sounds.

Urban Air: Urban Life and Air Pollution
Preprint available at: https://2022.multimediaeval.com/paper586.pdf
Dao, M.-S., Dang, T.-H., Nguyen-Tai, T.-L., Nguyen, T.-B., Dang-Nguyen, D.-T.
National Institute of Information and Communications Technology, Japan; Dalat University, Vietnam; LOC GOLD Technology MTV Ltd. Co, Vietnam; University of Science, Vietnam National University in HCM City, Vietnam; Bergen University, Norway.
Dataset available at: https://multimediaeval.github.io/editions/2022/tasks/urbanair/

The Urban Air task provides multimodal data that allows the analysis of air pollution and pollution patterns in urban environments. The organizers created two subtasks for this competition: a multimodal/crossmodal air quality index prediction task using station and/or CCTV data, and a periodic traffic pollution pattern discovery task.

Overview of Open Dataset Sessions and Benchmarking Competitions in 2022 – Part 1 (QoMEX 2022, ODS at MMSys ’22)

In this Dataset Column, we present a review of some of the notable events related to open datasets and benchmarking competitions in the field of multimedia. This year’s selection highlights the wide range of topics and datasets currently of interest to the community. The events covered in this review include special sessions on open datasets and competitions featuring multimedia data. While this list of about 40 datasets is not exhaustive, it is meant to showcase the diversity of subjects and datasets explored in the field. This year’s review follows similar efforts from the previous year (https://records.sigmm.org/2022/01/12/overview-of-open-dataset-sessions-and-benchmarking-competitions-in-2021/), highlighting the ongoing importance of open datasets and benchmarking competitions in advancing research and development in multimedia. The column is divided into three parts; in this one, we focus on QoMEX 2022 and ODS at MMSys ’22:

  • 14th International Conference on Quality of Multimedia Experience (QoMEX 2022 – https://qomex2022.itec.aau.at/). We summarize three datasets presented at this conference, addressing QoE studies on audiovisual 360° video, storytelling for quality perception, and energy consumption during video streaming.
  • Open Dataset and Software Track at 13th ACM Multimedia Systems Conference (ODS at MMSys ’22 – https://mmsys2022.ie/). We summarize nine datasets presented at the ODS track, targeting several topics, including surveillance videos from a fishing vessel (Njord), multi-codec 8K UHD videos (8K MPEG-DASH dataset), light-field (LF) synthetic immersive large-volume plenoptic dataset (SILVR), a dataset of online news items and the related task of rematching (NewsImages), video sequences, characterized by various complexity categories (VCD), QoE dataset of realistic video clips for real networks, dataset of 360° videos with subjective emotional ratings (PEM360), free-viewpoint video dataset, and cloud gaming dataset (CGD).

For the overview of datasets related to MDRE at MMM 2022 and ACM MM 2022 please check the second part (http://records.sigmm.org/?p=12360), while ImageCLEF 2022 and MediaEval 2022 are addressed in the third part (http://records.sigmm.org/?p=12362).

QoMEX 2022

Three dataset papers were presented at the International Conference on Quality of Multimedia Experience (QoMEX 2022), organized in Lippstadt, Germany, September 5 – 7, 2022 (https://qomex2022.itec.aau.at/). The complete QoMEX ’22 Proceedings are available in the IEEE Digital Library (https://ieeexplore.ieee.org/xpl/conhome/9900491/proceeding).

These datasets were presented within the Databases session, chaired by Professor Oliver Hohlfeld. The three papers present contributions focused on audiovisual 360-degree videos, storytelling for quality perception, and the modelling of energy consumption and streaming video QoE.

Audiovisual Database with 360° Video and Higher-Order Ambisonics Audio for Perception, Cognition, Behavior and QoE Evaluation Research
Paper available at: https://ieeexplore.ieee.org/document/9900893
Robotham, T., Singla, A., Rummukainen, O., Raake, A. and Habets, E.
International Audio Laboratories Erlangen, a joint institution of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Fraunhofer Institute for Integrated Circuits (IIS), Germany; TU Ilmenau, Germany.
Dataset available at: https://qoevave.github.io/database/

This publicly available database provides audiovisual 360° content with high-order Ambisonics audio. It consists of twelve scenes capturing real-life nature and urban environments with a video resolution of 7680×3840 at 60 frames-per-second and with 4th-order Ambisonics audio. These 360° video sequences, with an average duration of 60 seconds, represent real-life settings for systematically evaluating various dimensions of uni-/multi-modal perception, cognition, behavior, and QoE. It provides high-quality reference material with a balanced focus on auditory and visual sensory information.

The Storytime Dataset: Simulated Videotelephony Clips for Quality Perception Research
Paper available at: https://ieeexplore.ieee.org/document/9900888
Spang, R. P., Voigt-Antons, J. N. and Möller, S.
Technische Universität Berlin, Berlin, Germany; Hamm-Lippstadt University of Applied Sciences, Lippstadt, Germany.
Dataset available at: https://osf.io/cyb8w/

This is a dataset of simulated videotelephony clips to act as stimuli in quality perception research. It consists of four different stories in the German language that are told through ten consecutive parts, each about 10 seconds long. Each of these parts is available in four different quality levels, ranging from perfect to stalling. All clips (FullHD, H.264 / AAC) are actual recordings from end-user video-conference software to ensure ecological validity and realism of quality degradation. Apart from a detailed description of the methodological approach, the authors contribute the entire stimuli dataset containing 160 videos and all rating scores for each file.

Modelling of Energy Consumption and Streaming Video QoE using a Crowdsourcing Dataset
Paper available at: https://ieeexplore.ieee.org/document/9900886
Herglotz, C, Robitza, W., Kränzler, M., Kaup, A. and Raake, A.
Friedrich-Alexander-Universität, Erlangen, Germany; Audiovisual Technology Group, TU Ilmenau, Germany; AVEQ GmbH, Vienna, Austria.
Dataset available at: On request

This paper performs a first analysis of end-user power efficiency and Quality of Experience of a video streaming service. A crowdsourced dataset comprising 447,000 streaming events from YouTube is used to estimate both the power consumption and the perceived quality. The power consumption is modelled based on previous work, which is extended toward predicting the power usage of different devices and codecs. The user-perceived QoE is estimated using a standardized model.
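The kind of device- and codec-dependent power modelling described above can be illustrated with a very small sketch: a linear model is fitted to per-session features such as pixel rate, codec, and bitrate. The features, synthetic data, and coefficients below are assumptions for illustration only and do not reproduce the authors' model.

```python
# Illustrative sketch: fitting a simple linear model of playback power
# from streaming-session features (hypothetical data; not the authors' model).
import numpy as np

rng = np.random.default_rng(42)
n = 1000
pixel_rate = rng.choice([1280*720*30, 1920*1080*30, 3840*2160*30], size=n)  # px/s
codec_is_hevc = rng.integers(0, 2, size=n)                                   # 0 = AVC, 1 = HEVC
bitrate_mbps = rng.uniform(1.0, 20.0, size=n)

# Synthetic "measured" power in watts, just to have something to fit.
power_w = (2.0 + 1.5e-8 * pixel_rate + 0.8 * codec_is_hevc
           + 0.05 * bitrate_mbps + rng.normal(0, 0.2, size=n))

# Ordinary least squares: power ~ base + pixel rate + codec + bitrate.
X = np.column_stack([np.ones(n), pixel_rate, codec_is_hevc, bitrate_mbps])
coeffs, *_ = np.linalg.lstsq(X, power_w, rcond=None)
print("estimated coefficients [base, per-pixel-rate, HEVC offset, per-Mbps]:")
print(coeffs)
```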

ODS at MMSys ’22

The traditional Open Dataset and Software Track (ODS) was a part of the 13th ACM Multimedia Systems Conference (MMSys ’22) organized in Athlone, Ireland, June 14 – 17, 2022 (https://mmsys2022.ie/). The complete MMSys ’22: Proceedings of the 13th ACM Multimedia Systems Conference are available in the ACM Digital Library (https://dl.acm.org/doi/proceedings/10.1145/3524273).

The Open Dataset and Software Chairs for MMSys ’22 were Roberto Azevedo (Disney Research, Switzerland), Saba Ahsan (Nokia Technologies, Finland), and Yao Liu (Rutgers University, USA). The ODS session, comprising 14 papers, started with pitches on Wednesday, June 15, followed by a poster session. Nine of the fourteen contributions were dataset papers. A listing of the paper titles, dataset summaries, and associated DOIs is included below.

Njord: a fishing trawler dataset
Paper available at: https://doi.org/10.1145/3524273.3532886
Nordmo, T.-A.S., Ovesen, A.B., Juliussen, B.A., Hicks, S.A., Thambawita, V., Johansen, H.D., Halvorsen, P., Riegler, M.A., Johansen, D.
UiT the Arctic University of Norway, Norway; SimulaMet, Norway; Oslo Metropolitan University, Norway.
Dataset available at: https://doi.org/10.5281/zenodo.6284673

This paper presents Njord, a dataset of surveillance videos from a commercial fishing vessel. The dataset aims to demonstrate the potential for using data from fishing vessels to detect accidents and report fish catches automatically. The authors also provide a baseline analysis of the dataset and discuss possible research questions that it could help answer.

Multi-codec ultra high definition 8K MPEG-DASH dataset
Paper available at: https://doi.org/10.1145/3524273.3532889
Taraghi, B., Amirpour, H., Timmerer, C.
Christian Doppler Laboratory Athena, Institute of Information Technology (ITEC), Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria.
Dataset available at: http://ftp.itec.aau.at/datasets/mmsys22/

This paper presents a dataset of multimedia assets encoded with various video codecs, including AVC, HEVC, AV1, and VVC, and packaged using the MPEG-DASH format. The dataset includes resolutions up to 8K and has a maximum media duration of 322 seconds, with segment lengths of 4 and 8 seconds. It is intended to facilitate research and development of video encoding technology for streaming services.

SILVR: a synthetic immersive large-volume plenoptic dataset
Paper available at: https://doi.org/10.1145/3524273.3532890
Courteaux, M., Artois, J., De Pauw, S., Lambert, P., Van Wallendael, G.
Ghent University – Imec, Oost-Vlaanderen, Zwijnaarde, Belgium.
Dataset available at: https://idlabmedia.github.io/large-lightfields-dataset/

SILVR (synthetic immersive large-volume plenoptic dataset) is a light-field (LF) image dataset allowing for six-degrees-of-freedom navigation in larger volumes while maintaining full panoramic field of view. It includes three virtual scenes with 642-2226 views, rendered with 180° fish-eye lenses and featuring color images and depth maps. The dataset also includes multiview rendering software and a lens-reprojection tool. SILVR can be used to evaluate LF coding and rendering techniques.

NewsImages: addressing the depiction gap with an online news dataset for text-image rematching
Paper available at: https://doi.org/10.1145/3524273.3532891
Lommatzsch, A., Kille, B., Özgöbek, O., Zhou, Y., Tešić, J., Bartolomeu, C., Semedo, D., Pivovarova, L., Liang, M., Larson, M.
DAI-Labor, TU-Berlin, Berlin, Germany; NTNU, Trondheim, Norway; Texas State University, San Marcos, TX, United States; Universidade Nova de Lisboa, Lisbon, Portugal.
Dataset available at: https://multimediaeval.github.io/editions/2021/tasks/newsimages/

NewsImages is a dataset of online news items and the related task of news image rematching, which aims to study the “depiction gap” between the content of an image and the text that accompanies it. The dataset is useful for studying connections between image and text and for addressing the challenges of the depiction gap, including sparse data, diversity of content, and the importance of background knowledge.

VCD: Video Complexity Dataset
Paper available at: https://doi.org/10.1145/3524273.3532892
Amirpour, H., Menon, V.V., Afzal, S., Ghanbari, M., Timmerer, C.
Christian Doppler Laboratory Athena, Institute of Information Technology (ITEC), Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria; School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom.
Dataset available at: https://ftp.itec.aau.at/datasets/video-complexity/

The Video Complexity Dataset (VCD) is a collection of 500 Ultra High Definition (UHD) resolution video sequences, characterized by spatial and temporal complexities, rate-distortion complexity, and encoding complexity with the x264 AVC/H.264 and x265 HEVC/H.265 video encoders. It is suitable for video coding applications such as video streaming, two-pass encoding, per-title encoding, and scene-cut detection. These sequences are provided at 24 frames per second (fps) and stored online in losslessly encoded 8-bit 4:2:0 format.
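As an illustration of how video sequences can be characterized by spatial and temporal complexity, the sketch below computes the classical ITU-T P.910 SI/TI features with OpenCV. Note that VCD relies on its own complexity measures, so this is only an analogous, hypothetical example; the file name is a placeholder.

```python
# Illustrative sketch: ITU-T P.910 spatial (SI) and temporal (TI) information
# for a video file, one classical way of characterising complexity.
# (VCD uses its own complexity measures; this is only an analogy.)
import cv2
import numpy as np

def si_ti(path: str):
    cap = cv2.VideoCapture(path)
    prev = None
    si_values, ti_values = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        # SI: standard deviation of the Sobel-filtered luma frame.
        sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
        sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
        si_values.append(np.sqrt(sobel_x**2 + sobel_y**2).std())
        # TI: standard deviation of the frame-to-frame luma difference.
        if prev is not None:
            ti_values.append((gray - prev).std())
        prev = gray
    cap.release()
    if not si_values:
        raise ValueError(f"could not read any frames from {path}")
    return max(si_values), (max(ti_values) if ti_values else 0.0)

# Example (placeholder filename):
# si, ti = si_ti("sequence_0001.mp4")
# print(f"SI={si:.1f}, TI={ti:.1f}")
```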

Realistic video sequences for subjective QoE analysis
Paper available at: https://doi.org/10.1145/3524273.3532894
Hodzic, K., Cosovic, M., Mrdovic, S., Quinlan, J.J., Raca, D.
Faculty of Electrical Engineering, University of Sarajevo, Bosnia and Herzegovina; School of Computer Science & Information Technology, University College Cork, Ireland.
Dataset available at: https://shorturl.at/dtISV

The DashReStreamer framework is designed to recreate adaptively streamed video in real networks to evaluate user Quality of Experience (QoE). The authors have also created a dataset of 234 realistic video clips, based on video logs collected from real mobile and wireless networks, including video logs and network bandwidth profiles. This dataset and framework will help researchers understand the dynamics of video QoE in multimedia streaming.

PEM360: a dataset of 360° videos with continuous physiological measurements, subjective emotional ratings and motion traces
Paper available at: https://doi.org/10.1145/3524273.3532895
Guimard, Q., Robert, F., Bauce, C., Ducreux, A., Sassatelli, L., Wu, H.-Y., Winckler, M., Gros, A.
Université Côte d’Azur, Inria, CNRS, I3S, Sophia-Antipolis, France.
Dataset available at: https://gitlab.com/PEM360/PEM360/

PEM360 is a dataset of user head movements and gaze recordings in 360° videos, along with self-reported emotional ratings and continuous physiological measurement data. It aims to understand the connection between user attention, emotions, and immersive content, and includes software tools and joint instantaneous visualization of user attention and emotion, called “emotional maps.” The entire data and code are available in a reproducible framework.

A New Free Viewpoint Video Dataset and DIBR Benchmark
Paper available at: https://doi.org/10.1145/3524273.3532897
Guo, S., Zhou, K., Hu, J., Wang, J., Xu, J., Song, L.
Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China.
Dataset available at: https://github.com/sjtu-medialab/Free-Viewpoint-RGB-D-Video-Dataset

A new dynamic RGB-D video dataset for FVV research is presented, including 13 groups of dynamic scenes and one group of static scenes, each with 12 HD video sequences and 12 corresponding depth video sequences. In addition, an FVV synthesis benchmark based on depth image-based rendering (DIBR) is introduced to aid the validation of data-driven methods. The dataset and benchmark aim to advance FVV synthesis with improved robustness and performance.

CGD: a cloud gaming dataset with gameplay video and network recordings
Paper available at: https://doi.org/10.1145/3524273.3532898
Slivar, I., Bacic, K., Orsolic, I., Skorin-Kapov, L., Suznjevic, M.
University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia.
Dataset available at: https://muexlab.fer.hr/muexlab/research/datasets

The cloud gaming (CGD) dataset contains 600 game streaming sessions from 10 games of different genres, with various encoding parameters (bitrate, resolution, and frame rate) to evaluate the impact of these parameters on Quality of Experience (QoE). The dataset includes gameplay video recordings, network traffic traces, user input logs, and streaming performance logs, and can be used to understand relationships between network and application layer data for cloud gaming QoE and QoE-aware network management mechanisms.

Green Video Streaming: Challenges and Opportunities

Introduction

According to the Intergovernmental Panel on Climate Change (IPCC) report in 2021 and Sustainable Development Goal (SDG) 13 “climate action”, urgent action is needed against climate change and global greenhouse gas (GHG) emissions in the next few years [1]. This urgency also applies to the energy consumption of digital technologies. Internet data traffic is responsible for more than half of digital technology’s global impact, accounting for 55% of its annual energy consumption. The Shift Project forecast [2] shows an increase of 25% in data traffic associated with 9% more energy consumption per year, reaching 8% of all GHG emissions by 2025.

Video flows represented 80% of global data flows in 2018, and this video data volume is increasing by 80% annually [2].  This exponential increase in the use of streaming video is due to (i) improvements in Internet connections and service offerings [3], (ii) the rapid development of video entertainment (e.g., video games and cloud gaming services), (iii) the deployment of Ultra High-Definition (UHD, 4K, 8K), Virtual Reality (VR), and Augmented Reality (AR), and (iv) an increasing number of video surveillance and IoT applications [4]. Interestingly, video processing and streaming generate 306 million tons of CO2, which is 20% of digital technology’s total GHG emissions and nearly 1% of worldwide GHG emissions [2].

While research has shown that the carbon footprint of video streaming has been decreasing in recent years [5], there is still a strong need to invest in research and development of efficient next-generation computing and communication technologies for video processing. This carbon footprint reduction is due to technology efficiency trends in cloud computing (e.g., renewable power), emerging modern mobile networks (e.g., growth in Internet speed), and end-user devices (e.g., users prefer less energy-intensive mobile and tablet devices over larger PCs and laptops). However, since the demand for video streaming is growing dramatically, the risk of increased energy consumption remains.

Investigating energy efficiency during video streaming is essential to developing sustainable video technologies. The processes from video encoding to decoding and displaying the video on the end user’s screen require electricity, which results in CO2 emissions. Consequently, the key question becomes: “How can we improve energy efficiency for video streaming systems while maintaining an acceptable Quality of Experience (QoE)?”.

Challenges and Opportunities 

In this section, we outline challenges and opportunities for tackling the emissions associated with video streaming in (i) data centers, (ii) networks, and (iii) end-user devices [5], as presented in Figure 1.

Figure 1. Challenges and opportunities to tackle emissions for video streaming.

Data centers are responsible for the video encoding process and the storage of the video content. Video data traffic through data centers keeps growing, driving their workloads, with an estimated total power consumption of more than 1,000 TWh by 2025 [6]. Data centers are the most prioritized target of regulatory initiatives; national and regional policies have been established in response to the growing number of data centers and the concern over their energy consumption [7].

  • Suitable cloud services: Select energy-optimized and sustainable cloud services to help reduce CO2 emissions. Recently, IT service providers have started innovating in energy-efficient hardware, designing highly efficient Tensor Processing Units and high-performance servers, and applying machine-learning approaches to optimize cooling automatically and reduce the energy consumption in their data centers [8]. In addition to advances in hardware design, it is also essential to consider the software’s potential for improvements in energy efficiency [9].
  • Low-carbon cloud regions: IT service providers offer cloud computing platforms in multiple regions delivered through a global network of data centers. Various power plants (e.g., fuel, natural gas, coal, wind, sun, and water) supply electricity to run these data centers generating different amounts of greenhouse gases. Therefore, it is essential to consider how much carbon is emitted by the power plants that generate electricity to run cloud services in the selected region for cloud computing. Thus, a cloud region needs to be considered by its entire carbon footprint, including its source of energy production.
  • Efficient and fast transcoders (and encoders): Another essential factor to be considered is using efficient transcoders/encoders that can transcode/encode the video content faster and with less energy consumption but still at an acceptable quality for the end-user [10][11][12].
  • Optimizing the video encoding parameters: There is huge potential for reducing the overall energy consumption of video streaming by tuning the video encoding parameters to lower the bitrates of encoded videos without affecting quality, including choosing a more power-efficient codec, resolution, frame rate, and bitrate, among other parameters (a minimal encoding sketch is given after this list).
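As a concrete illustration of such encoding-parameter choices, the hypothetical sketch below calls ffmpeg to produce two operating points from the same source, a higher-bitrate AVC encode and a lower-bitrate HEVC encode, whose energy and quality trade-offs could then be measured. File names and bitrates are placeholders, not recommended settings.

```python
# Hypothetical sketch: encoding the same source at two operating points
# with ffmpeg (placeholder file names and bitrates), e.g. to compare
# bitrate/quality/energy trade-offs between codecs.
import subprocess

SOURCE = "source_uhd.mp4"   # placeholder input

def encode(codec: str, bitrate: str, scale: str, fps: int, out: str) -> None:
    """Run one ffmpeg encode with the given codec, bitrate, resolution, and frame rate."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE,
         "-c:v", codec, "-b:v", bitrate,
         "-vf", f"scale={scale}", "-r", str(fps),
         out],
        check=True,
    )

# A higher-bitrate AVC ladder rung vs. a lower-bitrate HEVC rung:
encode("libx264", "4M", "1920:1080", 30, "out_avc_1080p.mp4")
encode("libx265", "2M", "1920:1080", 30, "out_hevc_1080p.mp4")
```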

The next component within the video streaming process is video delivery within heterogeneous networks. Two essential energy consumption factors for video delivery are the network technology used and the amount of data to be transferred.

  • Energy-efficient network technology for video streaming: The network technology used to transmit data from the data center to the end-users determines energy performance, since the networks’ GHG emissions vary widely [5]. A fiber-optic network is the most climate-friendly transmission technology, with only 2 grams of CO2 per hour of HD video streaming, while a copper cable (VDSL) generates twice as much (i.e., 4 grams of CO2 per hour). UMTS data transmission (3G) produces 90 grams of CO2 per hour, reduced to 5 grams of CO2 per hour when using 5G [13]. Therefore, research shows that expanding fiber-optic networks and 5G transmission technology is promising for climate change mitigation [5].
  • Lower data transmission: Lower data transmission reduces energy consumption. Therefore, the amount of video data needs to be reduced without compromising video quality [2]. The video data per hour for various resolutions and qualities ranges from 30 MB/hr for very low resolutions to 7 GB/hr for UHD resolutions; a higher data volume causes more transmission energy (a back-of-envelope calculation follows this list). Another possibility is the reduction of unnecessary video usage, for example, by avoiding autoplay and embedded videos, whose aim is to maximize the quantity of content consumed. Broadcasting platforms also play a central role in how viewers consume content and, thus, in the impact on the environment [2].
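Using the per-hour transmission figures quoted above, a simple back-of-envelope calculation shows how strongly the access network influences yearly emissions for the same viewing time (the viewing time is an assumed example value):

```python
# Back-of-envelope calculation using the per-hour figures quoted above
# (grams of CO2 per hour of HD video streaming, transmission only).
CO2_G_PER_HOUR = {"fiber": 2, "vdsl_copper": 4, "3g_umts": 90, "5g": 5}

hours_per_day = 2          # hypothetical viewing time
days = 365

for network, g_per_h in CO2_G_PER_HOUR.items():
    yearly_kg = g_per_h * hours_per_day * days / 1000.0
    print(f"{network:12s}: {yearly_kg:6.1f} kg CO2 per year")
# e.g. fiber ~1.5 kg/yr vs. 3G ~65.7 kg/yr for the same viewing time.
```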

The last component of the video streaming process is video usage at the end-user device, including decoding and displaying the video content on the end-user devices like personal computers, laptops, tablets, phones, or television sets.

  • End-user devices: Research works [3][14] show that the end-user devices and decoding hardware account for the greatest portion of energy consumption and CO2 emissions in video streaming. Thus, most reduction strategies lie in the energy efficiency of the end-user devices, for instance, by improving screen display technologies or shifting from desktops to more energy-efficient laptops, tablets, and smartphones.
  • Streaming parameters: The energy consumption of the video decoding process depends on the video streaming parameters, just as the end-user QoE does. Thus, it is important to intelligently select video streaming parameters to jointly optimize the QoE and the power efficiency of the end-user device (a minimal selection sketch follows this list). Moreover, the underlying video encoding parameters also impact the energy usage of video decoding.
  • End-user device environment: A wide variety of browsers (including legacy versions), codecs, and operating systems besides the hardware (e.g., CPU, display) determine the final power consumption.
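As a minimal, hypothetical sketch of energy-aware parameter selection on the client side, the snippet below picks the adaptive-streaming representation with the lowest estimated device power that still meets a target quality. The representations, quality scores, and power values are invented placeholders.

```python
# Hypothetical sketch: choose the ABR representation that minimises the
# device's estimated decoding/display power while meeting a quality target.
# Energy and quality numbers are invented placeholders.
representations = [
    # (name, estimated quality (e.g. VMAF-like score), estimated device power in watts)
    ("2160p@15Mbps", 96, 6.5),
    ("1080p@5Mbps",  92, 3.8),
    ("720p@2.5Mbps", 86, 3.1),
    ("480p@1Mbps",   74, 2.7),
]

def pick(reps, min_quality):
    eligible = [r for r in reps if r[1] >= min_quality]
    if not eligible:
        return max(reps, key=lambda r: r[1])   # best effort: highest quality
    return min(eligible, key=lambda r: r[2])   # lowest estimated power

print(pick(representations, min_quality=90))   # -> ('1080p@5Mbps', 92, 3.8)
```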

In this column, we argue that addressing these challenges and opportunities for green video streaming can help to gain insights that further drive the adoption of novel, more sustainable usage patterns to reduce the overall energy consumption of video streaming without sacrificing the end-users’ QoE.

End-to-end video streaming: While we have highlighted the main energy-related factors of each video streaming component with a view toward a generic power consumption model, video streaming and its impact need to be studied and analyzed holistically across all components. Implementing a dedicated system for optimizing energy consumption may introduce additional processing on top of regular service operations if not done efficiently. For instance, overall traffic will be reduced when using the most recent video codec (e.g., VVC) compared to AVC (the most deployed video codec to date), but its encoding and decoding complexity will increase and, thus, require more energy.

Optimizing the video streaming parameters: There is huge potential for video service providers to reduce the overall energy consumption by optimizing the video streaming parameters, including choosing a more power-efficient codec implementation, resolution, frame rate, and bitrate, among other parameters.

GAIA: Intelligent Climate-Friendly Video Platform 

Recently, we started the “GAIA” project to research the aspects mentioned before. In particular, the GAIA project researches and develops a climate-friendly adaptive video streaming platform that provides (i) complete energy awareness and accountability, including energy consumption and GHG emissions along the entire delivery chain, from content creation and server-side encoding to video transmission and client-side rendering; and (ii) reduced energy consumption and GHG emissions through advanced analytics and optimizations on all phases of the video delivery chain.

Figure 2. GAIA high-level approach for the intelligent climate-friendly video platform.

As shown in Figure 2, the research considered in GAIA comprises benchmarking, energy-aware and machine learning-based modeling, optimization algorithms, monitoring, and auto-tuning.

  • Energy-aware benchmarking involves a functional requirement analysis of the leading project objectives and the measurement of the energy consumed by video transcoding tasks on various heterogeneous cloud and edge resources, by video delivery, and by video decoding on end-user devices.
  • Energy-aware modelling and prediction use the benchmarking results and the data collected from real deployments to build regression and machine learning models. The models predict the energy consumed by heterogeneous cloud and edge resources, possibly distributed across various clouds and delivery networks. We further provide energy models for video distribution on different channels and consider the relation between bitrate, codec, and video quality.
  • Energy-aware optimization and scheduling researches and develops appropriate generic algorithms, according to the requirements for real-time delivery (including encoding and transmission), for video processing tasks (i.e., transcoding) deployed on heterogeneous cloud and edge infrastructures (a toy scheduling sketch is given after this list).
  • Energy-aware monitoring and auto-tuning perform dynamic real-time energy monitoring of the different video delivery chains for improved data collection, benchmarking, modelling and optimization. 
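To make the optimization and scheduling idea more concrete, here is a toy, hypothetical sketch that assigns a transcoding task to the resource with the lowest estimated energy that can still meet the task's deadline. Resource profiles and task sizes are invented placeholders, not GAIA components.

```python
# Hypothetical sketch of an energy-aware scheduling decision: assign a
# transcoding task to the cheapest (in energy) resource that can still meet
# its deadline. Resource profiles and task sizes are invented placeholders.
resources = [
    # (name, watts while transcoding, speed in encoded seconds per wall-clock second)
    ("edge-arm",   15.0, 0.8),
    ("cloud-cpu",  90.0, 2.0),
    ("cloud-gpu", 180.0, 6.0),
]

def assign(task_duration_s: float, deadline_s: float):
    """Return (resource name, energy in joules) or None if no resource fits."""
    best = None
    for name, watts, speed in resources:
        wall_time = task_duration_s / speed
        if wall_time > deadline_s:
            continue                      # too slow to meet the deadline
        energy_j = watts * wall_time
        if best is None or energy_j < best[1]:
            best = (name, energy_j)
    return best

print(assign(task_duration_s=600, deadline_s=400))  # -> ('cloud-gpu', 18000.0)
```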

GMSys 2023: First International ACM Green Multimedia Systems Workshop

Finally, we would like to use this opportunity to highlight and promote the first International ACM Green Multimedia Systems Workshop (GMSys’23). GMSys’23 takes place in Vancouver, Canada, in June 2023, co-located with ACM Multimedia Systems 2023. We expect a series of at least three consecutive workshops, since this topic may critically impact the innovation and development of climate-effective approaches. The workshop focuses strongly on recent developments and challenges for energy reduction in multimedia systems and on innovations, concepts, and energy-efficient solutions from video generation to processing, delivery, and consumption. Please see the Call for Papers for further details.

Final Remarks 

Through both the GAIA project and the ACM GMSys workshop, various actions and initiatives are putting energy efficiency-related topics for video streaming on the center stage of research and development. In this column, we highlighted the major video streaming components together with their challenges and opportunities for enabling energy-efficient, sustainable video streaming, sometimes also referred to as green video streaming. A thorough understanding of the key issues and meaningful insights into them are essential for successful research.




Students Report from ACM Multimedia 2022

ACM Multimedia 2022 was held in a hybrid format in Lisbon, Portugal from October 10-14, 2022.

This was the first in-person participation in three years for many attendees, as the strict travel restrictions associated with Covid-19 in 2020 and 2021 made it difficult for those outside the host and neighbouring countries to travel to the conference.

In Portugal, the Covid-19 restrictions had largely been lifted, and the city was bustling with tourists. While remaining careful to avoid infection, participants enjoyed Portuguese wine such as “Vinho Verde” and cod dishes with their colleagues and engaged in lively discussions about multimedia research.

For many students, this was their first time presenting at an international conference, and it was a wonderful experience.

To encourage student authors to participate on-site, SIGMM sponsored a group of students with Student Travel Grant Awards. Students who wanted to apply for this travel grant needed to submit an online form before the submission deadline. The selected students received either 1,000 or 2,000 USD to cover their airline tickets as well as accommodation costs for the event. Of the recipients, 25 were able to attend the conference. We asked them to share their unique experiences of attending ACM Multimedia 2022. In this article, we share their reports of the event.


Xiangming Gu, PhD student, National University of Singapore, Singapore

It is a great honour to receive a SIGMM Student Grant. ACM Multimedia 2022 was the first academic conference I attended in person. During the conference, I presented my oral paper “MM-ALT: Multimodal Automatic Lyric Transcription”, which was also selected as one of the “Top Rated Papers”. Besides the presentation, I also met a lot of people who share similar research interests. It was very inspiring to learn from others’ papers and discuss them with the authors directly. Moreover, I was also a volunteer for ACM Multimedia 2022 and attended the session of the 5th International ACM Workshop on Multimedia Content Analysis in Sports. During the session, I learnt how to organize a workshop, which was a great exercise for me. Now, back in Singapore, I still miss the conference. I hope to get a paper accepted next year and attend the conference again.

Avinash Madasu, Computer Science Master’s student at the University of North Carolina Chapel Hill, USA.

It is my absolute honour to receive the student travel grant for attending the ACM Multimedia 2022 conference. This is the first time I have attended a top AI conference in person. I enjoyed the conference a lot and was sad that it ended so quickly. During the conference days, I was able to attend a lot of oral sessions, keynote talks and poster sessions, and to interact with fellow researchers from both academia and industry. I learnt a lot about the exciting research going on in my area of interest as well as in other areas. It was a refreshing experience and I hope to bring it into my research. I presented a poster and felt happy when fellow researchers appreciated my work. Apart from the technical side, I was able to forge a lot of new friendships which I will truly cherish for my whole life.

Moreno La Quatra, PhD student, Politecnico di Torino

The ACM Multimedia 2022 conference was an amazing experience. After a few years of remote conferences, it was a pleasure to be able to attend the conference in person. I got the opportunity to meet many researchers of different seniorities and backgrounds, and I learned a lot from them. The poster sessions were one of the highlights of the conference. They were a very valuable opportunity to present interesting ideas and explore the details of other researchers’ work. I found the keynotes, presentations, and workshops to be very inspiring and engaging as well. Throughout them, I learned about specific topics and interacted with friendly, passionate researchers from around the world. I would like to thank the ACM Multimedia 2022 organization for the opportunity to attend the conference in Lisbon, all the other volunteers for their friendly and helpful attitude, and the SIGMM Student Travel Grant committee for the financial support.

Sheng-Ming Tang, Master student, National Tsing Hua University, Hsinchu, Taiwan

My name is Sheng-Ming Tang from National Tsing Hua University, Hsinchu, Taiwan. It is a great honour for me to receive the student travel grant. First, I want to thank the committee for organizing this fantastic event. As ACM MM 2022 was my first in-person experience presenting at a conference, I felt a little nervous at first. However, I started to feel comfortable at the conference through interactions with those astonishing researchers and the volunteers. It was great not only to present in front of the public but also to participate in the events. I met a lot of people who solved problems with different and creative approaches, learned brand-new mindsets from the keynote sessions, and gained abundant feedback from the audience, which will boost my research. I thank the committee again for giving me this great opportunity to present and share my work in person. I enjoyed the event a lot.

Tai-Chen Tsai, Graduate student, National Tsing Hua University Taiwan

First, I would like to thank ACM for providing a student travel grant that allowed me to attend the conference. This was my first time presenting my work at a conference, in the interactive art session. I was worried that the setup would be complicated abroad; however, as soon as I arrived at the site, volunteers assisted me with the installation. The conference provided complete hardware resources, giving me a smooth and excellent exhibition experience. I also took the opportunity to meet many interesting researchers from different countries. The work “Emotional Machines” in the interactive art exhibition surprised me: the system collects and combines what participants are saying and their current emotions, and a model transforms the data into 360-degree image content in a VR environment, so that everyone’s information forms a small universe there. The idea is creative.
Additionally, I could chat and discuss projects with published researchers while volunteering at workshops. They shared their lifestyles and work experiences as researchers in European countries, and we discussed what makes a study interesting and what does not. This was the best reward for me.

Bhalaji Nagarajan, PhD Student, Universitat de Barcelona, Spain

ACM Multimedia was the first big conference I was able to attend in person after two years of completely virtual participation. I presented my work as both an oral and a poster presentation at the Workshop on Multimedia-Assisted Dietary Management (MADiMa). It gave me an excellent opportunity to present my work and to get valuable input from reputed pioneers regarding its future scope. It gave me a new dimension and helped expand my technical skill set. This was also my first volunteering experience on such a massive scale, and it was a great learning experience to see how conferences of this size are managed.
I am very happy that I attended the conference in person. I was able to meet new people and reputed pioneers in the field, learn new things and, of course, make some new friends. A big thank you to the SIGMM Travel Grant that allowed me to attend the conference in person in Lisbon.

Kiruthika Kannan, MS by Research, International Institute of Information Technology, Hyderabad, India. 

My paper on “DrawMon: A Distributed System for Detection of Atypical Sketch Content in Concurrent Pictionary Games” was accepted at the 30th ACM International Conference on Multimedia. It was my first international conference, and I felt honoured to be able to present my research in front of experienced researchers. The conference also exhibited diverse research projects addressing fascinating scientific and technological problems. The poster sessions and talks at the conference improved my knowledge of the research trends in multimedia. In addition, I was able to interact with fellow researchers from diverse cultures. It was interesting to hear about their experiences and learn about the work at their institutions. As a volunteer at the conference, I witnessed the hard work of the behind-the-scenes organizers and the volunteering team to run the events smoothly. I am grateful to the SIGMM Student Travel Grant for supporting my attendance at the ACM MM 2022 conference.

Garima Sharma, PhD Student, Department of Human-Centred Computing, Monash University

It was a pleasure to receive a SIGMM travel grant and to attend the ACM Multimedia 2022 conference in person. ACM Multimedia is one of the top conferences in my research area and it was my first in-person conference during my PhD. I had a great experience interacting with numerous researchers and fellow PhD students. Along with all the interesting keynotes, I attended as many oral sessions as possible. Some of these sessions were aligned with my research work and some were outside of my work. This gave me a new research perspective at different levels. Also, working with organisers in a few sessions gave me a whole new experience in managing these events. Overall, I got many insightful comments, suggestions and feedback which motivated me with some interesting directions in my research work. I would like to thank the organisers for making this year’s ACM Multimedia a wonderful experience for every attendee.

Alon Harell, PhD student at the Multimedia Lab at Simon Fraser University

I had the pleasure of receiving the SIGMM Student Travel Grant and of attending and volunteering at ACM Multimedia 22 in Lisbon, Portugal. The work I submitted to the conference was done outside of my regular PhD research, and thus, without this grant, I would not have been able to participate. The workshop at which I presented, ACM MM Sport 22, was incredibly eye-opening, with many fantastic papers, great presentations, and above all great people with whom I was able to exchange ideas, form bonds, and perhaps even create future collaborations. The main conference, which coincides more closely with my main research on image and video coding for machines, was just as good. With fascinating talks, some in person and some virtual, I was exposed to many new ideas (or perhaps just new to me) and learned a great deal. I was also able to benefit from the generosity and experience of Prof. Chong Wah Ngo from Singapore Management University during the PhD Mentor lunch, who shared with me his thoughts on pursuing a career in academia. Overall, ACM Multimedia 22 was an especially unique experience because it was the first in-person conference I was able to attend since the beginning of the COVID-19 pandemic, and being back face-to-face with fellow researchers was a great pleasure.

Lorenzo Vaiani, Ph.D. student (1st year), Politecnico di Torino, Italy

ACM MM 2022 was my first in-person conference. Being able to present my works and discuss them with other participants in person was an incredible experience. I enjoyed every activity, from presentations and posters to workshops and demos. I received excellent feedback and new inspiration to continue my research. The best part was definitely strengthening the bonds with friends I already knew and making new ones with the amazing people I met there. I learned a lot from all of them. The volunteer activities helped a lot in making these kinds of connections. Thanks to the organizers for this fantastic opportunity and to the SIGMM Student Travel Grant committee for the financial support. This edition of ACM MM was just the first for me, but I hope for many more in the future.

Xiaoyu Lin, third-year PhD student at Inria Grenoble, France

It was a great honour to attend ACM MM 2022 in Lisbon, and a great experience. I met lots of nice professors and researchers, and discussing with them gave me plenty of inspiration on both research directions and career development. I presented my work during the doctoral symposium and got plenty of useful feedback that can help me improve it. During the “Ask Me Anything” lunch, I had the chance to talk with several senior researchers, who gave me kind and very useful advice on how to do research. Besides, I also served as a volunteer for a workshop, which helped me meet other volunteers and make some new friends. Thanks to all the chairs and organizers who worked hard to make ACM MM 2022 such a wonderful conference. It was really an impressive experience!

Zhixin Ma, PhD student, Singapore Management University, Singapore

I would like to thank the ACM Multimedia Committee for providing me with the student travel grant so that I could attend the conference in person. ACM Multimedia is the top worldwide conference in the multimedia field. It provided me with an opportunity to present my work and communicate with researchers working on the topic of multimedia search.
Besides, the excellent keynotes and passionate panel talks also paint a good vision of future research in the multimedia field. Overall, I must say that ACM MM 22 was amazing and well organized. I again thank the ACM MM committee for the student travel grant, which made my attendance possible.

Report from ACM Multimedia 2022 by Nitish Nagesh


Nitish Nagesh (@nitish_nagesh) is a Ph.D. student in the Computer Science department at the University of California, Irvine, USA. He was named Best Social Media Reporter of the ACM Multimedia 2022 conference. To celebrate this award, Nitish Nagesh reported on his wonderful experience at ACM Multimedia 2022 as follows.


I was excited when our paper “World Food Atlas for Food Navigation” was accepted to the Multimedia Assisted Dietary Management Workshop (MADiMa). That the workshop was held in conjunction with ACM Multimedia 2022, the premier multimedia conference, was the icing on the cake, and that it took place in Lisbon, Portugal was the cherry on top. It is said that a picture is worth a thousand words, so it is fitting to describe a multimedia conference experience report through pictures.

Prof. Ramesh Jain organized an informal meetup at the Choupana Caffe on the advice of Joao Magalhaes, general chair of ACM MM 2022. It was great to meet researchers working on food computing, including Prof. Yoko Yamakata, Prof. Agnieszka and Maija Kale, and to also have the company of students and professors from Singapore Management University, including Prof. Chong Wah, along with Prof. Phoebe Chen. Since this was the first in-person conference for many folks, we had great conversations over waffles, pear salad and watermelon mint juice!

The MADiMa workshop and the Cooking and Eating Activities (CEA) workshop had stellar keynote speakers and presentations on topics ranging from adherence to a Mediterranean diet to mental health estimation through food images.

The workshop took place at the Lisbon Congress Center. It was a treat to watch the sun shine brightly on the congress center in the morning, and the mellow sunset only a few minutes away near the Tagus estuary rendered an orangish hue on the red bridge overlooking the train tracks below.

After a great set of presentations, the MADiMa and CEA workshops were drawn to a close with a group picture of one large family of people who love food and want to help people enjoy food while maintaining their health goals. A huge shout out to the workshop chairs Prof. Stavroula Mougiakakou, Prof. Keiji Yanai, Prof. Dario Allegra and Prof. Yoko Yamakata. (I tried my best to include a photo where everyone looks good!)

All work and no play makes us dull people! And all research with no food makes us hungry people! We had a post-workshop dinner at an authentic Portuguese restaurant. The food was great and it was a delightful evening because of the surprise treat from the professors! 

Prof. Jain’s Ph.D. talk was inspiring as he shared his personal journey that led him to focus on healthcare. He urged students in the multimedia community to pursue multimodal healthcare research as he shared his insights on building a personal health navigator.

I had signed up to be a mentee for the Ph.D. school Ask Me Anything (AMA) session. We asked Prof. Ming Dong questions about his time at graduate school, balancing teaching and research responsibilities, tips on maximizing research output and strategies to cope with rejections. He was candid in his responses and emphasized the need to focus on incremental progress while striving to do impactful research. I must thank Prof. Wei Tsang and other organizers for their leadership in organizing a first-of-its-kind session.

In between running around oral sessions, poster presentations, keynote talks, networking, grabbing lunch, and enjoying Portuguese tarts, we managed to have fun while volunteering. Huge credit to the students and staff (the Rafaels, the Diogos, the Davids, the Gustavos, the Pedros) from NOVA University for doing the heavy lifting to ensure a smooth online, hybrid and in-person experience!

It was a pleasure to watch Prof. Alan Smeaton deliver an inspiring speech about the journey of information retrieval and multimedia. The community congratulates you once again on the Technical Achievement Award – more power to you, Alan!

The highlight of the conference was the grand banquet at the Centro Cultural de Belém. There could not have been a better climax to the gala event than the Fado music. One aspect of Fado symbolizes longing, where the singer laments her partner setting sail on long voyages. It is accompanied by the unique 12-string Portuguese guitar and is sung very close to the audience to heighten the intimacy. I could fully relate to the artists’ melodies and rhythms, since I have been longing to see my family and friends back home, whom I have not visited for the past three years due to the pandemic. Another tune described the beauty of Lisbon in superlatives, including the sun shining the brightest compared to any other part of the world. There was a happy ending to the tune when the artists recreated the moment of joy after the war was over and everyone was merry again. It reinvigorated a fresh hope and breathed a new lease of life into our cluttered worlds. For once, I was truly present in the moment!

VQEG Column: VQEG Meeting May 2022

Introduction

Welcome to this new column on the ACM SIGMM Records from the Video Quality Experts Group (VQEG), which provides an overview of the last VQEG plenary meeting that took place from 9 to 13 May 2022. It was organized by INSA Rennes (France), and it was the first face-to-face meeting after the series of online meetings due to the Covid-19 pandemic. Remote attendance was also offered, which made it possible for around 100 participants from 17 different countries to attend the meeting (more than 30 of them in person). During the meeting, more than 40 presentations were given and interesting discussions took place. All the related information, minutes, and files from the meeting are available online on the VQEG meeting website, and video recordings of the meeting are available on YouTube.

Many of the works presented at this meeting can be relevant for the SIGMM community working on quality assessment. Particularly interesting are the proposals to update the ITU-T Recommendations P.910 and P.913, as well as the presented publicly available datasets. We encourage those readers interested in any of the activities going on in the working groups to check their websites and subscribe to the corresponding reflectors, to follow them and get involved.

Group picture of the VQEG Meeting 9-13 May 2022 in Rennes (France).

Overview of VQEG Projects

Audiovisual HD (AVHD)

The AVHD group investigates improved subjective and objective methods for analyzing commonly available video systems. In this sense, the group continues working on extensions of the ITU-T Recommendation P.1204 to cover other encoders (e.g., AV1) apart from H.264, HEVC, and VP9. In addition, the projects Quality of Experience (QoE) Metrics for Live Video Streaming Applications (Live QoE) and Advanced Subjective Methods (AVHD-SUB) are still ongoing.

In this meeting, several AVHD-related topics were discussed, supported by six different presentations. In the first one, Mikolaj Leszczuk (AGH University, Poland) presented an analysis of how experiment conditions, such as video sequence order, variation, and repeatability, influence the subjective assessment of video transmission quality and can entail a “learning” process among the test participants. In the second presentation, Lucjan Janowski (AGH University, Poland) presented two proposals towards more ecologically valid experiment designs: the first one using the Absolute Category Rating [1] without a scale but in a “think aloud” manner, and the second one, called “Your YouTube, our lab”, in which users select the content they prefer and a quality question appears during the viewing experience through a specifically designed interface. Also dealing with the study of testing methodologies, Babak Naderi (TU Berlin, Germany) presented work on subjective evaluation of video quality with a crowdsourcing approach, while Pierre David (Capacités, France) presented a three-lab experiment, involving Capacités (France), RISE (Sweden) and AGH University (Poland), on quality evaluation of social media videos. Kjell Brunnström (RISE, Sweden) continued by giving an overview of video quality assessment of Video Assistant Refereeing (VAR) systems, and lastly, Olof Lindman (SVT, Sweden) presented another effort to reduce the lack of open datasets with the Swedish Television (SVT) Open Content.

Quality Assessment for Health applications (QAH)

The QAH group works on the quality assessment of health applications, considering both subjective evaluation and the development of datasets, objective metrics, and task-based approaches. In this meeting, Lucie Lévêque (Nantes Université, France) provided an overview of the recent activities of the group, including a submitted review paper on objective quality assessment for medical images, a special session accepted for the IEEE International Conference on Image Processing (ICIP) that will take place in October in Bordeaux (France), and a paper submitted to IEEE ICIP on quality assessment through a detection task of COVID-19 pneumonia. The work described in this paper was also presented by Meriem Outtas (INSA Rennes, France).

In addition, there were two more presentations related to the quality assessment of medical images. Firstly, Yuhao Sun (University of Edinburgh, UK) presented research on a no-reference image quality metric for visual distortions on Computed Tomography (CT) scans [2]. Secondly, Marouane Tliba (Université d’Orléans, France) presented his studies on quality assessment of medical images through deep-learning techniques using domain adaptation.

Statistical Analysis Methods (SAM)

The SAM group works on improving analysis methods both for the results of subjective experiments and for objective quality models and metrics. The group is currently working on a proposal to update the ITU-T Recommendation P.913, including new testing methods for subjective quality assessment and statistical analysis of the results. Margaret Pinson presented this work during the meeting.   

In addition, five presentations were delivered addressing topics related to the group activities. Jakub Nawała (AGH University, Poland) presented the Generalised Score Distribution to accurately describe responses from subjective quality experiments. Three presentations were given by members of Nantes Université (France): Ali Ak presented his work on spammer detection in pairwise comparison experiments, Andreas Pastor talked about how to improve the maximum likelihood difference scaling method in order to measure the inter-content scale, and Chama El Majeny presented the functionalities of a subjective test analysis tool, whose code will be publicly available. Finally, Dietmar Saupe (University of Konstanz, Germany) delivered a presentation on subjective image quality assessment with boosted triplet comparisons.

Computer Generated Imagery (CGI)

The CGI group is devoted to analyzing and evaluating computer-generated content, with a particular focus on gaming. Currently, the group is working on the ITU-T Work Item P.BBQCG on Parametric bitstream-based Quality Assessment of Cloud Gaming Services. Apart from this, Jerry (Xiangxu) Yu (University of Texas at Austin, US) presented work on subjective and objective quality assessment of user-generated gaming videos and Nasim Jamshidi (TUB, Germany) presented a deep-learning bitstream-based video quality model for CG content.

No Reference Metrics (NORM)

The NORM group is an open collaborative project for developing no-reference metrics for monitoring visual service quality. Currently, the group is working on three topics: the development of no-reference metrics, the clarification of the computation of the Spatial and Temporal Indexes (SI and TI, defined in ITU-T Recommendation P.910), and the development of a standard for video quality metadata.
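For readers less familiar with these indexes, the snippet below computes SI and TI for a sequence of grayscale frames following their definitions in ITU-T Rec. P.910 (the standard deviation of Sobel-filtered frames and of frame differences, each maximised over time). It is a simplified illustration, not the reference implementation discussed by the group.

```python
# Simplified SI/TI computation following ITU-T Rec. P.910:
#   SI = max over time of stdev(Sobel(frame))
#   TI = max over time of stdev(frame_n - frame_{n-1})

import numpy as np
from scipy import ndimage

def spatial_information(frames):
    """frames: iterable of 2D grayscale arrays (float)."""
    si_values = []
    for frame in frames:
        gx = ndimage.sobel(frame, axis=0)
        gy = ndimage.sobel(frame, axis=1)
        si_values.append(np.hypot(gx, gy).std())
    return max(si_values)

def temporal_information(frames):
    frames = list(frames)
    diffs = [(frames[i] - frames[i - 1]).std() for i in range(1, len(frames))]
    return max(diffs) if diffs else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = [rng.random((240, 320)) for _ in range(10)]  # dummy frames
    print("SI:", spatial_information(video))
    print("TI:", temporal_information(video))
```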

At this meeting, this was one of the most active groups and the corresponding sessions included several presentations and discussions. Firstly, Yiannis Andreopoulos (iSIZE, UK) presented their work on domain-specific fusion of multiple objective quality metrics. Then, Werner Robitza (AVEQ GmbH/TU Ilmenau, Germany) presented updates on the SI/TI clarification activities, which are leading to an update of the ITU-T Recommendation P.910. In addition, Lukas Krasula (Netflix, US) presented their investigations on the relation between banding annoyance and the overall quality perceived by the viewers. Hadi Amirpour (University of Klagenfurt, Austria) delivered two presentations related to their Video Complexity Analyzer and their Video Complexity Dataset, which are both publicly available. Finally, Mikołaj Leszczuk (AGH University, Poland) gave two talks on their research related to User-Generated Content (UGC) (a.k.a. in-the-wild video content) recognition and on advanced video quality indicators to characterise video content.

Joint Effort Group (JEG) – Hybrid

The JEG group was originally focused on joint work to develop hybrid perceptual/bitstream metrics and has gradually evolved over time to include several areas of Video Quality Assessment (VQA), such as the creation of a large dataset for training such models using full-reference metrics instead of subjective scores. Enrico Masala (Politecnico di Torino, Italy) presented a report on the ongoing activities of the group, which included the release of a new website to reflect the evolution that has happened in the last few years within the group. Although the group is currently not directly seeking the development of new metrics or tools readily available for VQA, it is still working on related topics, such as the studies by Lohic Fotio Tiotsop (Politecnico di Torino, Italy) on the sensitivity of artificial intelligence-based observers to input signal modifications.

5G Key Performance Indicators (5GKPI)

The 5GKPI group studies the relationship between key performance indicators of new 5G networks and the QoE of video services on top of them. In this meeting, Pablo Pérez (Nokia, Spain) presented an extended report on the group activities, from which it is worth noting the joint work on a contribution to the ITU-T Work Item G.QoE-5G.

Immersive Media Group (IMG)

The IMG group is focused on research on the quality assessment of immersive media. Currently, the main joint activity of the group is the development of a test plan for evaluating the QoE of immersive interactive communication systems. In this sense, Pablo Pérez (Nokia, Spain) and Jesús Gutiérrez (Universidad Politécnica de Madrid, Spain) presented a follow-up on this test plan, including an overview of the state of the art of related works and a taxonomy classifying the existing systems [3]. This test plan is closely related to the work carried out by the ITU-T on QoE Assessment of eXtended Reality Meetings, so Gunilla Berndtsson (Ericsson, Sweden) presented the latest advances in the development of P.QXM.

Apart from this, there were four presentations related to the quality assessment of immersive media. Shirin Rafiei (RISE, Sweden) presented a study on QoE assessment of an augmented remote operating system for scaling in smart mining applications. Zhengyu Zhang (INSA Rennes, France) gave a talk on a no-reference quality metric for light field images based on deep-learning and exploiting angular and spatial information. Ali Ak (Nantes Université, France) presented a study on the effect of temporal sub-sampling on the accuracy of the quality assessment of volumetric video. Finally, Waqas Ellahi (Nantes Université, France) showed their research on a machine-learning framework to predict Tone-Mapping Operator (TMO) preference based on image and visual attention features [4].

Quality Assessment for Computer Vision Applications (QACoViA)

The goal of the QACoViA group is to study the visual quality requirements for computer vision methods. In this meeting, there were three presentations related to this topic. Mikołaj Leszczuk (AGH University, Poland) presented an objective video quality assessment method for face recognition tasks. Also, Alban Marie (INSA Rennes, France) showed an analysis of the correlation of quality metrics with artificial intelligence accuracy. Finally, Lucie Lévêque (Nantes Université, France) gave an overview of a study on the reliability of existing algorithms for facial expression recognition [5].

Intersector Rapporteur Group on Audiovisual Quality Assessment (IRG-AVQA)

The IRG-AVQA group studies topics related to video and audiovisual quality assessment (both subjective and objective) among ITU-R Study Group 6 and ITU-T Study Group 12. In this sense, Chulhee Lee (Yonsei University, South Korea) and Alexander Raake (TU Ilmenau, Germany) provided an overview on ongoing activities related to quality assessment within ITU-R and ITU-T.

Other updates

In addition, the Human Factors for Visual Experiences (HFVE) group, whose objective is to uphold the liaison relation between VQEG and the IEEE standardization group P3333.1, presented its advances in relation to two standards: IEEE P3333.1.3 (deep-learning-based assessment of visual experiences based on human factors), which has been approved and published, and IEEE P3333.1.4 on light field imaging, which has been submitted and is in the process of being approved. Also, although there was not much activity at this meeting within the Implementer’s Guide for Video Quality Metrics (IGVQM) and the Psycho-Physiological Quality Assessment (PsyPhyQA) projects, they are still active. Finally, as a reminder, the VQEG GitHub with tools and subjective labs setup is still online and kept updated.

The next VQEG plenary meeting will take place online in December 2022. Please, see VQEG Meeting information page for more information.

References

[1] ITU, “Subjective video quality assessment methods for multimedia applications”, ITU-T Recommendation P.910, Jul. 2022.
[2] Y. Sun, G. Mogos, “Impact of Visual Distortion on Medical Images”, IAENG International Journal of Computer Science, 1:49, Mar. 2022.
[3] P. Pérez, E. González-Sosa, J. Gutiérrez, N. García, “Emerging Immersive Communication Systems: Overview, Taxonomy, and Good Practices for QoE Assessment”, Frontiers in Signal Processing, Jul. 2022.
[4] W. Ellahi, T. Vigier, P. Le Callet, “A machine-learning framework to predict TMO preference based on image and visual attention features”, International Workshop on Multimedia Signal Processing, Oct. 2021.
[5] E. M. Barbosa Sampaio, L. Lévêque, P. Le Callet, M. Perreira Da Silva, “Are facial expression recognition algorithms reliable in the context of interactive media? A new metric to analyse their performance”, ACM International Conference on Interactive Media Experiences, Jun. 2022.

JPEG Column: 96th JPEG Meeting

JPEG analyses the responses of the Calls for Proposals for the standardisation of the first codecs based on machine learning

The 96th JPEG meeting was held online from 25 to 29 July 2022. The meeting was one of the most productive in the recent history of JPEG, with the analysis of the responses to two Calls for Proposals (CfP) for machine learning-based coding solutions, notably JPEG AI and JPEG Pleno Point Cloud Coding. The superior performance of the CfP responses compared to the state-of-the-art anchors leaves little doubt that coding technologies will become dominated by machine learning-based solutions, with the expected consequences for the standardisation pathway. A new era of multimedia coding standardisation has begun. Both activities have defined a verification model and are pursuing a collaborative process that will select the best technologies for the definition of the new machine learning-based standards.

The 96th JPEG meeting had the following highlights:

JPEG AI and JPEG Pleno Point Cloud, the first two machine learning-based coding standards under development by JPEG:
  • JPEG AI response to the Call for Proposals;
  • JPEG Pleno Point Cloud begins the collaborative standardisation phase;
  • JPEG Fake Media and NFT;
  • JPEG Systems;
  • JPEG Pleno Light Field;
  • JPEG AIC;
  • JPEG XS;
  • JPEG 2000;
  • JPEG DNA.

The following summarises the major achievements of the 96th JPEG meeting.

JPEG AI

The 96th JPEG meeting represents an important milestone for JPEG AI standardisation, as it marks the beginning of the collaborative phase of this project. The main JPEG AI objective is to design a solution that offers significant compression efficiency improvement over coding standards in common use at equivalent subjective quality, as well as effective compressed-domain processing for machine learning-based image processing and computer vision tasks.

During the 96th JPEG meeting, several activities took place, notably the presentation of the eleven responses to all tracks of the Call for Proposals (CfP). Furthermore, discussions on the evaluation process used to assess submissions to the CfP took place, namely subjective, objective and complexity assessment, as well as the identification of device interoperability issues by cross-checking. For the standard reconstruction track, several contributions showed significantly higher compression efficiency, in both subjective quality methodologies and objective metrics, when compared to the best-performing conventional image coding.

From the analysis and discussion of the results obtained, the most promising technologies were identified and a new JPEG AI verification model under consideration (VMuC) was approved. The VMuC corresponds to a combination of two proponents’ solutions (following the ‘one tool for one functionality’ principle), selected by consensus and considering the CfP decision criteria and factors. In addition, a set of JPEG AI Core Experiments were defined to obtain further improvements in both performance efficiency and complexity, notably the use of learning-based GAN training, alternative analysis/synthesis transforms and an evaluation study for the compressed-domain denoising as an image processing task. Several further activities were also discussed and defined, such as the design of a compressed domain image classification decoder VMuC, the creation of a large screen content dataset for the training of learning-based image coding solutions and the definition of a new and larger JPEG AI test set.

JPEG Pleno Point Cloud begins collaborative standardisation phase

JPEG Pleno integrates various modalities of plenoptic content under a single framework in a seamless manner. Efficient and powerful point cloud representation is a key feature of this vision. A point cloud refers to data representing positions of points in space, expressed in a given three-dimensional coordinate system, the so-called geometry. This geometrical data can be accompanied by per-point attributes of varying nature (e.g. color or reflectance). Such datasets are usually acquired with a 3D scanner, LIDAR or created using 3D design software and can subsequently be used to represent and render 3D surfaces. Combined with other types of data (like light field data), point clouds open a wide range of new opportunities, notably for immersive browsing and virtual reality applications.

Learning-based solutions are the state of the art for several computer vision tasks, such as those requiring a high-level understanding of image semantics, e.g., image classification, face recognition and object segmentation, but also 3D processing tasks, e.g. visual enhancement and super-resolution. Recently, learning-based point cloud coding solutions have shown great promise to achieve competitive compression efficiency compared to available conventional point cloud coding solutions at equivalent subjective quality. Building on a history of successful and widely adopted coding standards, JPEG is well positioned to develop a standard for learning-based point cloud coding.

During its 94th meeting, the JPEG Committee released a Final Call for Proposals on JPEG Pleno Point Cloud Coding. This call addressed learning-based coding technologies for point cloud content and associated attributes with emphasis on both human visualization and decompressed/reconstructed domain 3D processing and computer vision with competitive compression efficiency compared to point cloud coding standards in common use, with the goal of supporting a royalty-free baseline. During its 96th meeting, the JPEG Committee evaluated 5 codecs submitted in response to this Call. Following a comprehensive evaluation process, the JPEG Committee selected one of the proposals to form the basis of a future standard and initialised a sub-division to form Part 6 of ISO/IEC 21794. The selected submission was a learning-based approach to point cloud coding that met the requirements of the Call and showed competitive performance, both in terms of coding geometry and color, against existing solutions.

JPEG Fake Media and NFT

At the 96th JPEG meeting, 6 pre-registrations to the Final Call for Proposals (CfP) on JPEG Fake Media were received. The scope of JPEG Fake Media is the creation of a standard that can facilitate the secure and reliable annotation of media asset creation and modifications. The standard shall address use cases that are in good faith as well as those with malicious intent. The CfP welcomes contributions that address at least one of the extensive list of requirements specified in the associated “Use Cases and Requirements for JPEG Fake Media” document. Proponents who have not yet made a pre-registration are still welcome to submit their final proposal before 19 October 2022. Full details about the timeline, submission requirements and evaluation processes are documented in the CfP available on jpeg.org.

In parallel with the work on Fake Media, JPEG explores use cases and requirements related to Non Fungible Tokens (NFTs). Although the use cases between both topics are different, there is a significant overlap in terms of requirements and relevant solutions. The presentations and video recordings of the joint 5th JPEG NFT and Fake Media Workshop that took place prior to the 96th meeting are available on the JPEG website. In addition, a new version of the “Use Cases and Requirements for JPEG NFT” was produced and made publicly available for review and feedback.

JPEG Systems

During the 96th JPEG Meeting, the IS texts for both JLINK (ISO/IEC 19566-7) and JPEG Snack (ISO/IEC 19566-8) were prepared and submitted for final publication. JLINK specifies a format to store multiple images inside of JPEG files and supports interactive navigation between them. JLINK addresses use cases such as virtual museum tours, real estate visits, hotspot zoom into other images and many others. JPEG Snack on the other hand enables self-running multimedia experiences such as animated image sequences and moving image overlays. Both standards are based on the JPEG Universal Metadata Box Format (JUMBF, ISO/IEC 19566-5) for which a second edition is in progress. This second edition adds extensions to the native support of CBOR (Concise Binary Object Representation) and attaches private fields to the JUMBF Description Box.

JPEG Pleno Light Field

During its 96th meeting, the JPEG Committee released the “JPEG Pleno Second Draft Call for Contributions on Light Field Subjective Quality Assessment”, to collect new procedures and best practices for light field subjective quality evaluation methodologies to assess artefacts induced by coding algorithms. All contributions, which can be test procedures, datasets, and any additional information, will be considered to develop the standard by consensus among JPEG experts following a collaborative process approach. The Final Call for Contributions will be issued at the 97th JPEG meeting. The deadline for submission of contributions is 1 April 2023.

A JPEG Pleno Light Field AhG has also started the preparation of two workshops: a first workshop on Subjective Light Field Quality Assessment and a second workshop on Learning-based Light Field Coding. Their aim is to exchange experiences and to present technological advances and research results on light field subjective quality assessment and on learning-based coding solutions for light field data, respectively.

JPEG AIC

During its 96th meeting, a Second Draft Call for Contributions on Subjective Image Quality Assessment was issued. The final Call for Contributions is now planned to be issued at the 97th JPEG meeting. The standardization process will be collaborative from the very beginning, i.e. all submissions will be considered in developing the next extension of the JPEG AIC standard. The deadline for submissions has been extended to 1 April 2023 at 23:59 UTC. Multiple types of contributions are accepted, namely subjective assessment methods including supporting evidence and detailed description, test material, interchange format, software implementation, criteria and protocols for evaluation, additional relevant use cases and requirements, and any relevant evidence or literature. A dataset of sample images with compression-based distortions in the target quality range is planned to be prepared for the 97th JPEG meeting.

JPEG XS

With the 2nd edition of JPEG XS now in place, the JPEG Committee continues with the development of the 3rd edition of JPEG XS Part 1 (Core coding system) and Part 2 (Profiles and buffer models). These editions will address new use cases and requirements for JPEG XS by defining additional coding tools to further improve the coding efficiency, while keeping the low-latency and low-complexity core aspects of JPEG XS. The primary goal of the 3rd edition is to deliver the same image quality as the 2nd edition, but for specific content such as screen content with half of the required bandwidth. In this respect, experiments have indicated that it is possible to increase the quality in static regions of an image sequence by more than 10 dB when compared to the 2nd edition. Based on the input contributions, a first working draft for ISO/IEC 21122-1 has been created, along with the necessary core experiments for further evaluation and verification.

In addition, JPEG has finalized the work on the amendment for Part 2 2nd edition that defines a new High 4:2:0 profile and the new sublevel Sublev4bpp. This amendment is now ready for publication by ISO. In the context of Part 4 (Conformance testing) and Part 5 (Reference software), the JPEG Committee decided to make both parts publicly available.

Finally, the JPEG Committee decided to create a series of public documents, called the “JPEG XS in-depth series” that will explain various features and applications of JPEG XS to a broad audience. The first document in this series explains the advantages of using JPEG XS for raw image compression and will be published soon on jpeg.org.

JPEG 2000

The JPEG Committee published a case study that compares HT2K, ProRes and JPEG 2000 Part 1 when processing motion picture content with widely available commercial software tools running on notebook computers, available at https://ds.jpeg.org/documents/jpeg2000/wg1n100269-096-COM-JPEG_Case_Study_HTJ2K_performance_on_laptop_desktop_PCs.pdf

JPEG 2000 is widely used in the media and entertainment industry for Digital Cinema distribution, studio video masters and broadcast contribution links. High Throughput JPEG 2000 (HTJ2K or JPEG 2000 Part 15) is an update to JPEG 2000 that provides an order of magnitude speed up over legacy JPEG 2000 Part 1.

JPEG DNA

The JPEG Committee has continued its exploration of the coding of images in quaternary representations, as it is particularly suitable for DNA storage applications. The scope of JPEG DNA is the creation of a standard for efficient coding of images that considers biochemical constraints and offers robustness to noise introduced by the different stages of the storage process that is based on DNA synthetic polymers. During the 96th JPEG meeting, a new version of the overview document on Use Cases and Requirements for DNA-based Media Storage was issued and has been made publicly available. The JPEG Committee also updated two additional documents: the JPEG DNA Benchmark Codec and the JPEG DNA Common Test Conditions in order to allow for concrete exploration experiments to take place. This will allow further validation and extension of the JPEG DNA benchmark codec to simulate an end-to-end image storage pipeline using DNA and in particular, include biochemical noise simulation which is an essential element in practical implementations. A new branch has been created in the JPEG Gitlab that now contains two anchors and two JPEG DNA benchmark codecs.
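To give a flavour of what a quaternary representation means, the deliberately naive sketch below maps a byte stream to nucleotide symbols and back. It ignores the biochemical constraints (e.g., avoiding long homopolymer runs) and the noise robustness that the actual JPEG DNA codecs must address.

```python
# Naive illustration of a quaternary (DNA) representation of binary data.
# Real JPEG DNA coding must additionally respect biochemical constraints,
# which this sketch ignores.

BITS_TO_NT = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
NT_TO_BITS = {v: k for k, v in BITS_TO_NT.items()}

def bytes_to_dna(data: bytes) -> str:
    symbols = []
    for byte in data:
        for shift in (6, 4, 2, 0):           # two bits per nucleotide
            symbols.append(BITS_TO_NT[(byte >> shift) & 0b11])
    return "".join(symbols)

def dna_to_bytes(sequence: str) -> bytes:
    out = bytearray()
    for i in range(0, len(sequence), 4):
        byte = 0
        for nt in sequence[i:i + 4]:
            byte = (byte << 2) | NT_TO_BITS[nt]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    payload = b"JPEG"
    strand = bytes_to_dna(payload)
    print(strand)                            # "CAGGCCAACACCCACT", 4 nucleotides per byte
    assert dna_to_bytes(strand) == payload   # lossless round trip
```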

Final Quote

“After successful calls for contributions, the JPEG Committee sets precedence by launching the collaborative phase of two learning based visual information coding standards, hence announcing the start of a new era in coding technologies relying on AI.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

Upcoming JPEG meetings are planned as follows:

  • No. 97 will be held online from 24-28 October 2022.
  • No. 98 will be held in Sydney, Australia, from 14-20 January 2023.

MPEG Column: 139th MPEG Meeting (virtual/online)

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The 139th MPEG meeting was once again held as an online meeting, and the official press release can be found here and comprises the following items:

  • MPEG Issues Call for Evidence for Video Coding for Machines (VCM)
  • MPEG Ratifies the Third Edition of Green Metadata, a Standard for Energy-Efficient Media Consumption
  • MPEG Completes the Third Edition of the Common Media Application Format (CMAF) by adding Support for 8K and High Frame Rate for High Efficiency Video Coding
  • MPEG Scene Descriptions adds Support for Immersive Media Codecs
  • MPEG Starts New Amendment of VSEI containing Technology for Neural Network-based Post Filtering
  • MPEG Starts New Edition of Video Coding-Independent Code Points Standard
  • MPEG White Paper on the Third Edition of the Common Media Application Format

In this report, I’d like to focus on VCM, Green Metadata, CMAF, VSEI, and a brief update about DASH (as usual).

Video Coding for Machines (VCM)

MPEG’s exploration work on Video Coding for Machines (VCM) aims at compressing features for machine-performed tasks such as video object detection and event analysis. As neural networks increase in complexity, architectures such as collaborative intelligence, whereby a network is distributed across an edge device and the cloud, become advantageous. With the rise of newer network architectures being deployed amongst a heterogeneous population of edge devices, such architectures bring flexibility to systems implementers. Due to such architectures, there is a need to efficiently compress intermediate feature information for transport over wide area networks (WANs). As feature information differs substantially from conventional image or video data, coding technologies and solutions for machine usage could differ from conventional human-viewing-oriented applications to achieve optimized performance. With the rise of machine learning technologies and machine vision applications, the amount of video and images consumed by machines has rapidly grown. Typical use cases include intelligent transportation, smart city technology, intelligent content management, etc., which incorporate machine vision tasks such as object detection, instance segmentation, and object tracking. Due to the large volume of video data, extracting and compressing the features from a video is essential for efficient transmission and storage. The feature compression technology solicited in this Call for Evidence (CfE) can also be helpful in other regards, such as computational offloading and privacy protection.
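To make the collaborative-intelligence setting more concrete, the sketch below (an illustration only, not a VCM proposal) shows an edge device extracting an intermediate feature tensor, coarsely quantizing it for transmission, and the cloud side de-quantizing it before running the remainder of the task network.

```python
# Illustrative sketch of feature compression in a split (edge/cloud) pipeline.
# It only conveys the idea of transmitting quantized intermediate features
# instead of pixels; real VCM technologies are far more sophisticated.

import numpy as np

def edge_extract_features(image):
    """Stand-in for the first part of a task network (e.g., a backbone)."""
    h, w = image.shape
    return image.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))  # toy 'feature map'

def compress_features(features, num_levels=16):
    """Uniform scalar quantization of the feature tensor."""
    f_min, f_max = features.min(), features.max()
    step = (f_max - f_min) / (num_levels - 1)
    indices = np.round((features - f_min) / step).astype(np.uint8)
    return indices, (f_min, step)            # indices + side info to transmit

def decompress_features(indices, side_info):
    f_min, step = side_info
    return indices.astype(np.float32) * step + f_min

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image = rng.random((256, 256)).astype(np.float32)
    feats = edge_extract_features(image)            # edge side
    idx, side = compress_features(feats)            # 'bitstream' payload
    reconstructed = decompress_features(idx, side)  # cloud side
    print("Feature MSE after quantization:", float(((feats - reconstructed) ** 2).mean()))
```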

Over the last three years, MPEG has investigated potential technologies for efficiently compressing feature data for machine vision tasks and established an evaluation mechanism that includes feature anchors, rate-distortion-based metrics, and evaluation pipelines. The evaluation framework of VCM depicted below comprises neural network tasks (typically informative) at both ends as well as VCM encoder and VCM decoder, respectively. The normative part of VCM typically includes the bitstream syntax which implicitly defines the decoder whereas other parts are usually left open for industry competition and research.

Further details about the CfE and how interested parties can respond can be found in the official press release here.

Research aspects: the main research area for coding-related standards is certainly compression efficiency (and probably runtime). However, this video coding standard will not target humans as video consumers but machines. Thus, video quality and, in particular, Quality of Experience need to be interpreted differently, which could be another worthwhile research dimension to be studied in the future.

Green Metadata

MPEG Systems has been working on Green Metadata for the last ten years to enable the adaptation of the client’s power consumption according to the complexity of the bitstream. Many modern implementations of video decoders can adjust their operating voltage or clock speed to adjust the power consumption level according to the required computational power. Thus, if the decoder implementation knows the variation in the complexity of the incoming bitstream, it can adjust its power consumption level accordingly. This allows lower energy use in general and extended video playback for battery-powered devices.
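As a simple illustration of this idea, with hypothetical metadata values and power levels that do not reflect the normative Green Metadata syntax, a decoder could map a signalled complexity level of the upcoming segment to an operating point as sketched below.

```python
# Illustrative sketch: adapting the decoder operating point to signalled
# bitstream complexity. Field names and power figures are made up and do not
# reproduce the normative Green Metadata (ISO/IEC 23001-11) syntax.

OPERATING_POINTS = {
    # name: (relative clock speed, approximate power in watts)
    "low":    (0.50, 1.2),
    "medium": (0.75, 2.0),
    "high":   (1.00, 3.5),
}

def select_operating_point(segment_complexity, headroom=1.1):
    """Pick the slowest operating point that can still decode in real time.

    segment_complexity: hypothetical normalized value in [0, 1] describing the
    fraction of the device's peak decoding load required by the next segment.
    """
    required = segment_complexity * headroom
    for name, (clock, power) in sorted(OPERATING_POINTS.items(), key=lambda kv: kv[1][0]):
        if clock >= required:
            return name, power
    return "high", OPERATING_POINTS["high"][1]

if __name__ == "__main__":
    for complexity in (0.3, 0.6, 0.95):
        point, watts = select_operating_point(complexity)
        print(f"complexity {complexity:.2f} -> {point} (~{watts} W)")
```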

The third edition enables support for Versatile Video Coding (VVC, ISO/IEC 23090-3, a.k.a. ITU-T H.266) encoded bitstreams and enhances the capability of this standard for real-time communication applications and services. While finalizing the support of VVC, MPEG Systems has also started the development of a new amendment to the Green Metadata standard, adding the support of Essential Video Coding (EVC, ISO/IEC 23094-1) encoded bitstreams.

Research aspects: reducing global greenhouse gas emissions will certainly be a challenge for humanity in the upcoming years. The amount of data on today’s internet is dominated by video, which consumes energy from production to consumption. Therefore, there is a strong need for explicit research efforts to make video streaming in all its facets friendly to our environment.

Third Edition of Common Media Application Format (CMAF)

The third edition of CMAF adds two new media profiles for High Efficiency Video Coding (HEVC, ISO/IEC 23008-2, a.k.a. ITU-T H.265), namely for (i) 8K and (ii) High Frame Rate (HFR). Regarding the former, a media profile supporting 8K resolution video encoded with HEVC (Main 10 profile, Main Tier with 10 bits per colour component) has been added to the list of CMAF media profiles for HEVC. The profile will be branded as ‘c8k0’ and will support videos with up to 7680×4320 pixels (8K) and up to 60 frames per second. Regarding the latter, another media profile has been added, branded as ‘c8k1’, which supports HEVC-encoded video with up to 8K resolution and up to 120 frames per second. Finally, chroma location indication support has been added to the third edition of CMAF.
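CMAF media profile brands such as these are advertised, among other places, in the compatible-brands list of an ISOBMFF file’s ‘ftyp’ box. The snippet below is a minimal sketch of how a tool could check for them; it assumes the ‘ftyp’ box is the first top-level box (the usual case) and uses a placeholder file name.

```python
import struct

def read_ftyp_brands(path):
    """Return (major_brand, compatible_brands) from the first 'ftyp' box of an
    ISOBMFF/CMAF file. Assumes 'ftyp' is the first top-level box."""
    with open(path, "rb") as f:
        size, box_type = struct.unpack(">I4s", f.read(8))
        if box_type != b"ftyp":
            raise ValueError("file does not start with an 'ftyp' box")
        payload = f.read(size - 8)
    major = payload[0:4].decode("ascii")
    # skip the 4-byte minor_version; the rest is a list of 4-byte brands
    compatible = [payload[i:i + 4].decode("ascii") for i in range(8, len(payload), 4)]
    return major, compatible

# 'example.cmfv' is a placeholder file name
major, compatible = read_ftyp_brands("example.cmfv")
if {"c8k0", "c8k1"} & set(compatible):
    print("file advertises one of the new 8K / 8K-HFR HEVC media profiles")
```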

Research aspects: basically, CMAF serves two purposes: (i) harmonizing DASH and HLS at the segment format level by adopting the ISOBMFF and (ii) enabling low latency streaming applications by introducing chunks (that are smaller than segments). The third edition supports resolutions up to 8K and HFR, which raises the question of how low latency can be achieved for 8K/HFR applications and services and under which conditions.

New Amendment for Versatile Supplemental Enhancement Information (VSEI) containing Technology for Neural Network-based Post Filtering

At the 139th MPEG meeting, the MPEG Joint Video Experts Team with ITU-T SG 16 (WG 5; JVET) issued a Committee Draft Amendment (CDAM) text for the Versatile Supplemental Enhancement Information (VSEI) standard (ISO/IEC 23002-7, a.k.a. ITU-T H.274). Beyond the Supplemental Enhancement Information (SEI) message for shutter interval indication, which is already known from its specification in Advanced Video Coding (AVC, ISO/IEC 14496-10, a.k.a. ITU-T H.264) and High Efficiency Video Coding (HEVC, ISO/IEC 23008-2, a.k.a. ITU-T H.265), and a new indicator for subsampling phase indication, which is relevant for variable-resolution video streaming, this new amendment contains two SEI messages for describing and activating post filters using neural network technology in video bitstreams. Such filters can be used for reducing coding noise, for upsampling, for colour improvement, or for denoising. The description of the neural network architecture itself is based on MPEG’s neural network coding standard (ISO/IEC 15938-17). Results from an exploration experiment have shown that neural network-based post filters can deliver better performance than conventional filtering methods. Processes for invoking these new post-processing filters have already been tested in a software framework and will be made available in an upcoming version of the Versatile Video Coding (VVC, ISO/IEC 23090-3, a.k.a. ITU-T H.266) reference software (ISO/IEC 23090-16, a.k.a. ITU-T H.266.2).
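To give a rough idea of what such a post filter does, without touching the SEI syntax or the NNC-based model description, here is a purely illustrative PyTorch sketch of a small residual CNN applied to a decoded frame; the architecture and sizes are invented for illustration.

```python
import torch
import torch.nn as nn

class TinyPostFilter(nn.Module):
    """Purely illustrative post-filter: a small residual CNN that predicts a
    correction added to the decoded picture (e.g., to attenuate coding noise).
    Real NN post-filters signalled via the new SEI messages would be described
    using MPEG's neural network coding standard (ISO/IEC 15938-17)."""
    def __init__(self, channels=3, features=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, decoded):
        # residual correction, clipped back to the valid sample range
        return (decoded + self.body(decoded)).clamp(0.0, 1.0)

# Apply to a dummy decoded frame (N, C, H, W) with samples in [0, 1]
frame = torch.rand(1, 3, 270, 480)
filtered = TinyPostFilter()(frame)
print(filtered.shape)
```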

Research aspects: quality enhancements such as reducing coding noise, upsampling, colour improvement, or denoising have been researched quite substantially, both with and without neural networks. Enabling such quality enhancements via (V)SEI messages provides system-level support for research and development efforts in this area, for example, their integration into video streaming applications and/or conversational services, including the corresponding performance evaluations.

The latest MPEG-DASH Update

Finally, I’d like to provide a brief update on MPEG-DASH! At the 139th MPEG meeting, MPEG Systems issued a new working draft related to Extended Dependent Random Access Point (EDRAP) streaming and other extensions, which will be further discussed during the Ad-hoc Group (AhG) period (please join the dash email list for further details/announcements). Furthermore, Defects under Investigation (DuI) and Technologies under Consideration (TuC) have been updated. Finally, a new part has been added (ISO/IEC 23009-9), which is called encoder and packager synchronization, for which also a working draft has been produced. Publicly available documents (if any) can be found here.

An updated overview of DASH standards/features can be found in the Figure below.

Research aspects: in the Christian Doppler Laboratory ATHENA we aim to research and develop novel paradigms, approaches, (prototype) tools and evaluation results for the phases (i) multimedia content provisioning (i.e., video coding), (ii) content delivery (i.e., video networking), and (iii) content consumption (i.e., video player incl. ABR and QoE) in the media delivery chain as well as for (iv) end-to-end aspects, with a focus on, but not being limited to, HTTP Adaptive Streaming (HAS). Recent DASH-related publications include “Low Latency Live Streaming Implementation in DASH and HLS” and “Segment Prefetching at the Edge for Adaptive Video Streaming” among others.

The 140th MPEG meeting will be face-to-face in Mainz, Germany, from October 24-28, 2022. Click here for more information about MPEG meetings and their developments.

Towards the design and evaluation of more sustainable multimedia experiences: which role can QoE research play?

In this column, we reflect on the environmental impact and broader sustainability implications of resource-demanding digital applications and services such as video streaming, VR/AR/XR and videoconferencing. We put emphasis not only on the experiences and use cases they enable but also on the “cost” of always striving for high Quality of Experience (QoE) and better user experiences. Starting by sketching the broader context, our aim is to raise awareness about the role that QoE research can play in the context of various of the United Nations’ Sustainable Development Goals (SDGs), either directly (e.g., SDG 13 “climate action”) or more indirectly (e.g., SDG 3 “good health and well-being” and SDG 12 “responsible consumption and production”).

UN’s Sustainable Development Goals (Figure taken from https://www.un.org/en/sustainable-development-goals)

The ambivalent role of digital technology

One of the latest reports from the Intergovernmental Panel on Climate Change (IPCC) confirmed the urgency of drastically reducing emissions of carbon dioxide and other human-induced greenhouse gas (GHG) emissions in the years to come (IPCC, 2021). This report, directly relevant in the context of SDG 13 “climate action”, confirmed the undeniable and negative human influence on global warming and the need for collective action. While the potential of digital technology (and ICT more broadly) for sustainable development has been on the agenda for some time, the context of the COVID-19 pandemic has made it possible to better understand a set of related opportunities and challenges.

First of all, it has been observed that long-lasting lockdowns and restrictions due to the COVID-19 pandemic and its aftermath have triggered a drastic increase in internet traffic (see e.g., Feldmann et al., 2020). This holds particularly for the use of videoconferencing and video streaming services for various purposes (e.g., work meetings, conferences, remote education, and social gatherings, just to name a few). At the same time, the associated drastic reduction of global air traffic and other types of traffic (e.g., road traffic), with their known environmental footprint, has had undeniable positive effects on the environment (e.g., reduced air pollution, better water quality; see e.g., Khan et al., 2020). Despite this potential, the environmental gains enabled by digital technology and recent advances in energy efficiency are threatened by digital rebound effects due to increased energy consumption and energy demands related to ICT (Coroama & Mattern, 2019; Lange et al., 2020). In the context of ever-increasing consumption, there has for instance been a growing focus in the literature on the negative environmental impact of unsustainable use and viewing practices such as binge-watching, multi-watching and media-multitasking, which have become more common over the last years (see e.g., Widdicks et al., 2019). While it is important to recognize that the overall emission factor will vary depending on the mix of energy generation technologies used and the region of the world (Preist et al., 2014), the above observation also fits with other recent reports and articles, which expect the energy demands linked to digital infrastructure, digital services and their use to further expand, and which expect the greenhouse gas emissions of ICT relative to the overall worldwide footprint to increase significantly (see e.g., Belkhir et al., 2018; Morley et al., 2018; Obringer et al., 2021). Hence, these and other recent forecasts point to a growing and even unsustainably high carbon footprint of ICT in the medium-term future, due to, among others, the increasing energy demand of data centres (including, e.g., the energy needed for cooling) and the associated traffic (Preist et al., 2016).

Another set of challenges that became more apparent can be linked to the human mental resources and health involved, as well as to environmental effects. Here, there is a link to the abovementioned Sustainable Development Goals 3 (good health and well-being) and 12 (responsible consumption and production). For instance, the transition to “more sustainable” digital meetings, online conferences, and online education has also pointed to a range of challenges from a user point of view. “Zoom fatigue”, as a prominent example, illustrates the need to strike the right balance between the more sustainable character of experiences provided by and enabled through technology and how these are actually experienced and perceived from a user point of view (Döring et al., 2022; Raake et al., 2022). Another example is binge-watching behavior, which can in certain cases have a positive effect on an individual’s well-being, but has also been shown to have a negative effect through, e.g., feelings of guilt and goal conflicts (Granow et al., 2018) or through problematic involvement resulting in, e.g., chronic sleep issues (Flayelle et al., 2020).

From the “production” perspective, recent work has looked at the growing environmental impact of commonly used cloud-based services such as video streaming (see e.g., Chen et al., 2020; Suski et al., 2020; The Shift Project, 2021) and the underlying infrastructure consisting of data centers, transport networks and end devices (Preist et al., 2016; Suski et al., 2020; Preist et al., 2014). As a result, the combination of technological advancements and user-centered approaches aiming to always improve the experience may have undesired environmental consequences. This includes stimulating increased user expectations (e.g., higher video quality, increased connectivity and availability, almost zero latency, …) and triggering increased use and unsustainable use practices, resulting in potential rebound effects due to increased data traffic and electricity demand.

These observations have started to culminate in a plea for a shift towards a more sustainable and humanity-centered paradigm, which considers to a much larger extent how digital consumption and increased data demand impact individuals, society and our planet (Widdicks et al., 2019; Preist et al., 2016; Hazas & Nathan, 2018). Here, it is obvious that experience, consumption behavior and energy consumption are tightly intertwined.

How does QoE research fit into this picture?

This leads to the question of where research on Quality of Experience and its underlying goals fits into this broader picture, to what extent related topics have gained attention so far, and how future research can potentially have an even larger impact.

As the COVID-19-related examples above already indicated, QoE research, through its focus on improving the experience for users in, e.g., various videoconferencing-based scenarios or immersive technology-related use cases, already plays and will continue to play a key role in enabling more sustainable practices in various domains (e.g., remote education, online conferences, digital meetings, thus reducing unnecessary travel), linking up to various SDGs. A key challenge here is to enable experiences that become so natural and attractive that they may even become preferred in the future. While this is a huge and important topic, we refrain from discussing it further in this contribution, as it is already a key focus within the QoE community. Instead, in the following, we first reflect on the extent to which environmental implications of multimedia services have explicitly been on the agenda of the QoE community in the past, what the focus is in more recent work, and what is currently not yet sufficiently addressed. Secondly, we consider a broader set of areas and concrete topics in which QoE research can be related to environmental and broader sustainability-related concerns.

Traditionally, QoE research has predominantly focused on gathering insights that can guide the optimization of technical parameters and the allocation of resources at different layers, while still ensuring a high QoE from a user point of view. A main underlying driver in this respect has traditionally been the related business perspective: optimizing QoE as a way to increase profitability and users’/customers’ willingness to pay for better quality (Wechsung & De Moor, 2014). While better video compression techniques or adaptive video streaming may allow saving resources, which overall may lead to environmental gains, the latter has traditionally not been a main or explicit motivation.

There are, however, some exceptions in earlier work, where the focus was more explicitly on the link between energy consumption-related aspects, energy efficiency and QoE. The study by Ickin et al. (2012), for instance, aimed to investigate QoE influence factors of mobile applications and revealed the key role of the battery in successful QoE provisioning. It has further been observed that energy modelling and saving efforts are typically geared towards the immediate benefits of end users, while less attention is paid to the digital infrastructure (Popescu, 2018). Efforts were also made in the past to describe, analyze and model the trade-off between QoE and energy consumption (QoE perceived per user per Joule, QoEJ) (Popescu, 2018) or power consumption (QoE perceived per user per Watt, QoEW) (Zhang et al., 2013), as well as to optimize resource consumption so as to avoid sources of annoyance (see e.g., Fiedler et al., 2016). While these early efforts did not yet result in a generic end-to-end QoE-energy model that can be used as a basis for optimizations, they provide a useful basis to build upon.

A more recent example (Hossfeld et al., 2022) in the context of video streaming services looked into possible trade-offs between varying levels of QoE and the resulting energy consumption, which is further mapped to CO₂ emissions (taking the EU emission parameter as input, as this, as mentioned, depends on the overall energy mix of green and non-renewable energy sources). Their visualization model further considers parameters such as the type of device and the type of network, and while it is a simplification of the multitude of possible scenarios and factors, it illustrates that it is possible to identify areas where energy consumption can be reduced while ensuring an acceptable QoE.
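As a rough illustration of the kind of mapping underlying such trade-off analyses, the Python sketch below converts a streaming configuration into data volume, energy and CO₂. All coefficients are illustrative placeholders (not values from Hossfeld et al., 2022); real figures depend heavily on the network, the device and the regional energy mix.

```python
# Back-of-the-envelope sketch: streaming configuration -> data volume -> energy -> CO2.
# All coefficients below are illustrative placeholders, not measured values.

ENERGY_PER_GB_KWH = 0.1                         # placeholder network + data-centre intensity
DEVICE_POWER_W = {"phone": 2.0, "tv": 100.0}    # placeholder device power draw
GRID_G_CO2_PER_KWH = 250.0                      # placeholder grid emission factor

def session_co2(bitrate_mbps: float, hours: float, device: str) -> float:
    """Estimated grams of CO2 for one streaming session (toy model)."""
    gigabytes = bitrate_mbps / 8 * 3600 * hours / 1000          # Mbps -> GB
    energy_kwh = gigabytes * ENERGY_PER_GB_KWH + DEVICE_POWER_W[device] * hours / 1000
    return energy_kwh * GRID_G_CO2_PER_KWH

for label, mbps in [("SD ~2 Mbps", 2), ("HD ~5 Mbps", 5), ("UHD ~15 Mbps", 15)]:
    print(f"{label}: {session_co2(mbps, hours=2, device='tv'):.0f} g CO2 for a 2 h session")
```

Even such a simplistic model makes the qualitative point of the cited work visible: lowering the representation (and thus the QoE level) reduces the data volume and the associated emissions, and the trade-off can be made explicit.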

Another recent work (Herglotz et al., 2022) jointly analyzed end-user power efficiency and QoE for video streaming, based on real-world data (i.e., YouTube streaming events). More specifically, power consumption was modelled and user-perceived QoE was estimated in order to identify where optimization is possible. The authors found that such optimization is indeed possible and pointed to the importance of the choice of video codec, video resolution, frame rate and bitrate in this respect.

These examples point to the potential to optimize at the “production” side; however, the focus has more recently also been extended to the actual use, user expectations and the “consumption” side (Jiang et al., 2021; Lange et al., 2020; Suski et al., 2020; Elgaaied-Gambier et al., 2020). Various topics are explored in this respect, e.g., digital carbon footprint calculation at the individual level (Schien et al., 2013; Preist et al., 2014), consumer awareness and pro-environmental digital habits (Elgaaied-Gambier et al., 2020; Gnanasekaran et al., 2021), or the impact of user behavior (Suski et al., 2020). While we cannot discuss all of these in detail here, they are all based on the observation that there is a growing need to involve consumers and users in the collective challenge of reducing the impact of digital applications and services on the environment (Elgaaied-Gambier et al., 2020; Preist et al., 2016).

QoE research can play an important role here, extending the understanding of carbon footprint vs. QoE trade-offs towards making users more aware of the actual “cost” of high QoE. A recent interview study with digital natives conducted by some of the co-authors of this column (Gnanasekaran et al., 2021) illustrated that many users are not aware of the environmental impact of their user behavior and expectations, and that even with such insights, drastic changes in behavior cannot be expected. The lack of technological understanding, public information and social awareness about the topic were identified as important factors. It is therefore of utmost importance to trigger more awareness and help users see and understand their carbon footprint related to, e.g., the use of video streaming services (Gnanasekaran et al., 2021). This perspective is currently missing in the field of QoE, and we argue that QoE research could, in collaboration with other disciplines and by integrating insights from other fields, help to fill this gap.

In terms of the motivation for adopting pro-environmental digital habits, Gnanasekaran et al. (2021) found that several factors indirectly contribute to this goal, including the striving for personal well-being. Finally, the results indicate some willingness to change and make compromises (e.g., accepting a lower video quality), albeit not an unconditional one: the alignment with other goals (e.g., personal well-being) and the nature of the perceived sacrifice and its impact play a key role. A key challenge for future work is therefore to identify and understand concrete mechanisms that could trigger more awareness amongst users about the environmental and well-being impact of their use of digital applications and services, and mechanisms that can further motivate positive behavioral change (e.g., opting for use practices that limit one’s digital carbon footprint, mindful digital consumption). By investigating the impact on QoE of various more environmentally friendly viewing practices (e.g., actively promoting standard-definition video quality instead of HD, nudging users to switch to audio-only when a service like YouTube is used as background noise, or stimulating users to switch to the least data-demanding viewing configuration depending on the context and purpose), QoE research could help to bridge the gap towards actual behavioral change.

Final reflections and challenges for future research

We have argued that research on users’ Quality of Experience and overall User Experience can be highly relevant for gaining insights that may further drive the adoption of new, more sustainable usage patterns and that can trigger more awareness of the implications of user expectations, preferences and actual use of digital services. However, the focus on continuously improving users’ Quality of Experience may also trigger unwanted rebound effects, leading to an overall higher environmental footprint due to the increased use of digital applications and services. Further, it may have a negative impact on users’ long-term well-being as well.

We, therefore, need to join efforts with other communities to challenge the current design paradigm from a more critical stance, partly as “it’s difficult to see the ecological impact of IT when its benefits are so blindingly bright” (Borning et al., 2020). Richer and better experiences may lead to increased, unnecessary or even excessive consumption, further increasing individuals’ environmental impact and potentially impeding long-term well-being. Open questions are, therefore: Which fields and disciplines should join forces to mitigate the above risks? And how can QoE research — directly or indirectly — contribute to the triggering of sustainable consumption patterns and the fostering of well-being?

Further, a key question is how energy efficiency can be improved for digital services such as video streaming, videoconferencing, online gaming, etc., while still ensuring an acceptable QoE. This also points to the question of which compromises can be made in trading QoE against its environmental impact (from “willingness to pay” to “willingness to sacrifice”), under which circumstances and how these compromises can be meaningfully and realistically assessed. In this respect, future work should extend the current modelling efforts to link QoE and carbon footprint, go beyond exploring what users are willing to (more passively) endure, and also investigate how users can be more actively motivated to adjust and lower their expectations and even change their behavior.

These and related topics will be on the agenda of the Dagstuhl Seminar 23042 “Quality of Sustainable Experience” and the conference QoMEX 2023 “Towards sustainable and inclusive multimedia experiences”.

Conference QoMEX 2023 “Towards sustainable and inclusive multimedia experiences”

References

Belkhir, L., Elmeligi, A. (2018). “Assessing ICT global emissions footprint: Trends to 2040 & recommendations,” Journal of cleaner production, vol. 177, pp. 448–463.

Borning, A., Friedman, B., Logler, N. (2020). The ’invisible’ materiality of information technology. Communications of the ACM, 63(6), 57–64.

Chen, X., Tan, T., et al. (2020). Context-Aware and Energy-Aware Video Streaming on Smartphones. IEEE Transactions on Mobile Computing.

Coroama, V.C., Mattern, F. (2019). Digital rebound – why digitalization will not redeem us our environmental sins. In: Proceedings of the 6th International Conference on ICT for Sustainability, Lappeenranta. http://ceur-ws.org, vol. 238.

Döring, N., De Moor, K., Fiedler, M., Schoenenberg, K., Raake, A. (2022). Videoconference Fatigue: A Conceptual Analysis. Int. J. Environ. Res. Public Health, 19(4), 2061 https://doi.org/10.3390/ijerph19042061

Elgaaied-Gambier, L., Bertrandias, L., Bernard, Y. (2020). Cutting the internet’s environmental footprint: An analysis of consumers’ self-attribution of responsibility. Journal of Interactive Marketing, 50, 120–135.

Feldmann, A., Gasser, O., Lichtblau, F., Pujol, E., Poese, I., Dietzel, C., Wagner, D., Wichtlhuber, M., Tapiador, J., Vallina-Rodriguez, N., Hohlfeld, O., Smaragdakis, G. (2020, October). The lockdown effect: Implications of the COVID-19 pandemic on internet traffic. In Proceedings of the ACM internet measurement conference (pp. 1-18).

Fiedler, M., Popescu, A., Yao, Y. (2016), “QoE-aware sustainable throughput for energy-efficient video streaming,” in 2016 IEEE International Conferences on Big Data and Cloud Computing (BDCloud), Social Computing and Networking (SocialCom), Sustainable Computing and Communications (SustainCom)(BDCloud-SocialCom-SustainCom). pp. 493–50

Flayelle, M., Maurage, P., Di Lorenzo, K.R., Vögele, C., Gainsbury, S.M., Billieux, J. (2020). Binge-Watching: What Do we Know So Far? A First Systematic Review of the Evidence. Curr Addict Rep 7, 44–60. https://doi.org/10.1007/s40429-020-00299-8

Gnanasekaran, V., Fridtun, H. T., Hatlen, H., Langøy, M. M., Syrstad, A., Subramanian, S., & De Moor, K. (2021). Digital carbon footprint awareness among digital natives: an exploratory study. In Norsk IKT-konferanse for forskning og utdanning (No. 1, pp. 99-112).

Granow, V.C., Reinecke, L., Ziegele, M. (2018): Binge-watching and psychological well-being: media use between lack of control and perceived autonomy. Communication Research Reports 35 (5), 392–401.

Hazas, M. and Nathan, L. (Eds.)(2018). Digital Technology and Sustainability. London: Routledge.

Herglotz, C., Springer, D., Reichenbach,  M., Stabernack B. and Kaup, A. (2018). “Modeling the Energy Consumption of the HEVC Decoding Process,” in IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 1, pp. 217-229, Jan. 2018, doi: 10.1109/TCSVT.2016.2598705.

Hossfeld, T., Varela, M., Skorin-Kapov, L. Heegaard, P.E. (2022). What is the trade-off between CO2 emission and videoconferencing QoE. ACM SIGMM records, https://records.sigmm.org/2022/03/31/what-is-the-trade-off-between-co2-emission-and-video-conferencing-qoe/

Ickin, S., Wac, K., Fiedler, M. and Janowski, L. (2012). “Factors influencing quality of experience of commonly used mobile applications,” IEEE Communications Magazine, vol. 50, no. 4, pp. 48–56.

IPCC, 2021: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, In press, doi:10.1017/9781009157896.

Jiang, P., Van Fan, Y., Klemes, J.J. (2021). Impacts of covid-19 on energy demand and consumption: Challenges, lessons and emerging opportunities. Applied energy, 285, 116441.

Khan, D., Shah, D. and Shah, S.S. (2020). “COVID-19 pandemic and its positive impacts on environment: an updated review,” International Journal of Environmental Science and Technology, pp. 1–10, 2020.

Lange, S., Pohl, J., Santarius, T. (2020). Digitalization and energy consumption. Does ICT reduce energy demand? Ecological Economics, 176, 106760.

Morley, J., Widdicks, K., Hazas, M. (2018). Digitalisation, energy and data demand: The impact of Internet traffic on overall and peak electricity consumption. Energy Research & Social Science, 38, 128–137.

Obringer, R., Rachunok, B., Maia-Silva, D., Arbabzadeh, M., Roshanak, N., Madani, K. (2021). The overlooked environmental footprint of increasing internet use. Resources, Conservation and Recycling, 167, 105389.

Popescu, A. (Ed.)(2018). Greening Video Distribution Networks, Springer.

Preist, C., Schien, D., Blevis, E. (2016). “Understanding and mitigating the effects of device and cloud service design decisions on the environmental footprint of digital infrastructure,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 1324–1337.

Preist, C., Schien, D., Shabajee, P. , Wood, S. and Hodgson, C. (2014). “Analyzing End-to-End Energy Consumption for Digital Services,” Computer, vol. 47, no. 5, pp. 92–95.

Raake, A., Fiedler, M., Schoenenberg, K., De Moor, K., Döring, N. (2022). Technological Factors Influencing Videoconferencing and Zoom Fatigue. arXiv:2202.01740, https://doi.org/10.48550/arXiv.2202.01740

Schien, D., Shabajee, P., Yearworth, M. and Preist, C. (2013), Modeling and Assessing Variability in Energy Consumption During the Use Stage of Online Multimedia Services. Journal of Industrial Ecology, 17: 800-813. https://doi.org/10.1111/jiec.12065

Suski, P., Pohl, J., Frick, V. (2020). All you can stream: Investigating the role of user behavior for greenhouse gas intensity of video streaming. In: Proceedings of the 7th International Conference on ICT for Sustainability. p. 128–138. ICT4S2020, Association for Computing Machinery, New York, NY, USA.

The Shift Project, Climate crisis: the unsustainable use of online video: Our new report on the environmental impact of ICT. https://theshiftproject.org/en/article/unsustainable-use-online-video/

Wechsung, I., De Moor, K. (2014). Quality of Experience Versus User Experience. In: Möller, S., Raake, A. (eds) Quality of Experience. T-Labs Series in Telecommunication Services. Springer, Cham. https://doi.org/10.1007/978-3-319-02681-7_3

Widdicks, K., Hazas, M., Bates, O., Friday, A. (2019). “Streaming, Multi-Screens and YouTube: The New (Unsustainable) Ways of Watching in the Home,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ser. CHI ’19. New York, NY, USA: Association for Computing Machinery, p. 1–13.

Zhang, X., Zhang, J., Huang, Y., Wang, W. (2013). “On the study of fundamental trade-offs between QoE and energy efficiency in wireless networks,” Transactions on Emerging Telecommunications Technologies, vol. 24, no. 3, pp. 259–265.