Towards Immersive Digiphysical Experiences


Immersive experiences have the potential to redefine traditional forms of media engagement by intricately combining reality with imagination. Motivated by necessities, current developments and emerging technologies, this column sets out to bridge immersive experiences in both digital and physical realities. Fitting under the umbrella term of eXtended Reality (XR), the first section describes various realizations of blending digital and physical elements to design what we refer to as immersive digiphysical experiences. We further highlight industry and research initiatives aimed at driving the design and development of such experiences, considered to be key building blocks of the futuristic ‘metaverse’. The second section outlines challenges related to assessing, modeling, and managing the Quality of Experience (QoE) of immersive digiphysical experiences and reflects upon ongoing work in the area. While potential use cases span a wide range of application domains, the third section elaborates on the specific case of conference organization, which has over the past few years shifted from fully physical, to fully virtual, and finally to attempts at hybrid organization. We believe this use case provides valuable insights into needs and promising approaches, to be demonstrated and experienced at the upcoming 16th edition of the International Conference on Quality of Multimedia Experience (QoMEX 2024) in Karlshamn, Sweden in June 2024.

Multiple users engaged in a co-located mixed reality experience

Bridging The Digital And Physical Worlds

According to [IMeX WP, 2020], immersive media have been described as involving “multi-modal human-computer interaction where either a user is immersed inside a digital/virtual space or digital/virtual artifacts become a part of the physical world”. Spanning the so-called virtuality continuum [Milgram, 1995], immersive media experiences may involve various realizations of bridging the digital and physical worlds, such as the seamless integration of digital content with the real world (via Augmented or Mixed Reality, AR/MR), and vice versa by incorporating real objects into a virtual environment (Augmented Virtuality, AV). More recently, the term eXtended Reality (XR) (also sometimes referred to as xReality) has been used as an umbrella term for a wide range of levels of “realities”, with [Rauschnabel, 2022] proposing a distinction between AR/MR and Virtual Reality (VR) based on whether the physical environment is, at least visually, part of the user’s experience.

By seamlessly merging digital and physical elements and supporting real-time user engagement with both digital and physical components, immersive digiphysical (i.e., both digitally and physically accessible [Westerlund, 2020]) experiences have the potential to provide compelling experiences that blur the distinction between the real and virtual worlds. A key aspect is that of digital elements responding to user input or the physical environment, and the physical environment responding to interactions with digital objects. Going beyond only visual or auditory stimuli, the incorporation of additional senses, for example via haptic feedback or olfactory elements, can contribute to multisensory engagement [Gibbs, 2022].

The rapid development of XR technologies has been recognized as a key contributor to realizing a wide range of applications built on the fusion of the digital and physical worlds [NEM WP, 2022]. In its contribution to the European XR Coalition (launched by the European Commission), the New European Media Initiative (NEM), Europe’s Technology Platform of Horizon 2020 dedicated to driving the future of digital experiences, calls for needed actions from both industry and research perspectives addressing challenges related to social and human centered XR as well as XR communication aspects [NEM XR, 2022]. One such initiative is the Horizon 2020 TRANSMIXR project [TRANSMIXR], aimed at developing a distributed XR creation environment that supports remote collaboration practices, as well as an XR media experience environment for the delivery and consumption of social immersive media experiences. The NEM initiative further identifies the need for scalable solutions to obtain plausible and convincing virtual copies of physical objects and environments, as well as solutions supporting seamless and convincing interaction between the physical and the virtual world. Among key technologies and infrastructures needed to overcome outlined challenges, the following are identified [NEM XR, 2022]: high bandwidth and low-latency energy-efficient networks; remote computing for processing and rendering deployed on cloud and edge infrastructures; tools for the creation and updating of digital twins (DT) to strengthen the link between the real and virtual worlds, integrating Internet of Things (IoT) platforms; hardware in the form of advanced displays; and various content creation tools relying on interoperable formats.

Merging the digital and physical worlds

Looking towards the future, immersive digiphysical experiences set the stage for visions of the metaverse [Wang, 2023], described as representing the evolution of the Internet towards a platform enabling immersive, persistent, and interconnected virtual environments blending digital and physical [Lee, 2021]. [Wang, 2022] see the metaverse as ‘created by the convergence of physically persistent virtual space and virtually enhanced physical reality’. The metaverse is further seen as a platform offering the potential to host real-time multisensory social interactions (e.g., involving sight, hearing, touch) between people communicating with each other in real-time via avatars [Hennig-Thurau, 2023]. Since 2022, the Metaverse Standards Forum has been providing a venue for industry coordination, fostering the development of interoperability standards for an open and inclusive metaverse [Metaverse, 2023]. Relevant existing standards include: ISO/IEC 23005 (MPEG-V) (standardization of interfaces between the real world and the virtual world, and among virtual worlds) [ISO/IEC 23005], IEEE 2888 (definition of standardized interfaces for synchronization of cyber and physical worlds) [IEEE 2888], and MPEG-I (standards to digitally represent immersive media) [ISO/IEC 23090].

Research Challenges For The QoE Community

Achieving widespread adoption of XR-based services providing digiphysical experiences across a broad range of application domains (e.g., education, industry and manufacturing, healthcare, and engineering) inherently requires ensuring intuitive, comfortable, and positive user experiences. While research efforts towards meeting such requirements are well under way, a number of open challenges remain.

Quality of Experience (QoE) for immersive media has been defined in [IMeX WP, 2020] as “the degree of delight or annoyance of the user of an application or service which involves an immersive media experience. It results from the fulfillment of his or her expectations with respect to the utility and/or enjoyment of the application or service in the light of the user’s personality and current state.” Furthermore, a bridge between QoE and UX has been established through the concept of Quality of User Experience (QUX), combining hedonic, eudaimonic and pragmatic aspects of QoE and UX [Egger-Lampl, 2019]. In the context of immersive communication and collaboration services, significant efforts are being invested towards understanding and optimizing the end-user experience [Perez, 2022].

The White Paper [IMeX WP, 2020] ties immersion to the digital media world (“The more the system blocks out stimuli from the physical world, the more the system is considered to be immersive.”). Nevertheless, immersion as such exists in physical contexts as well, e.g., when reading a captivating book. MR, XR and AV scenarios are digiphysical in their nature. These considerations pose several challenges:

  1. Achieving intuitive and natural interactive experiences [Hennig-Thurau, 2023] when mixing realities.
  2. Developing a common understanding of MR-, XR- and AV-related challenges in digiphysical multi-modal multi-party settings.
  3. Advancing VR, AR, MR, XR and AV technologies to allow for truly digiphysical experiences.
  4. Measuring and modeling QoE, UX and QUX for immersive digiphysical services, covering overall methodology, measurement instruments, modeling approaches, test environments and application domains.
  5. Management of the networked infrastructure to support immersive digiphysical experiences with appropriate QoE, UX and QUX.
  6. Sustainability considerations in terms of environmental footprint, accessibility, equality of opportunities in various parts of the world, and cost/benefit ratio.

Challenges 1 and 2 call for an experience-based, bottom-up approach that focuses on the most important aspects. Examples include designing and evaluating different user representations [Aseeri, 2021][Viola, 2023], natural interaction techniques [Spittle, 2023] and the use of different environments by participants (AR/MR/VR) [Moslavac, 2023]. The latter has proven beneficial for challenges 3 (cf. the emergence of MR-/XR-/AV-supporting head-mounted devices such as the Microsoft HoloLens and recent pass-through versions of the Meta Quest) and 4. Finally, challenges 5 and 6 need to be carefully addressed to allow for long-term adoption and feasibility.

Challenges 1 to 4 have been addressed in standardization. For instance, ITU-T Recommendation P.1320 specifies QoE assessment procedures and metrics for the evaluation of XR telemeetings, outlining various categories of QoE influence factors and use cases [ITU-T Rec. P.1320, 2022] (adopted from the 3GPP technical report TR 26.928 on XR technology in 5G). The corresponding ITU-T Study Group 12 (Question 10) developed a taxonomy of telemeetings [ITU-T Rec. G.1092, 2023], providing a systematic classification of telemeeting systems. Ongoing joint efforts between the VQEG Immersive Media Group and ITU-T Study Group 12 are targeted towards specifying interactive test methods for subjective assessment of XR communications [ITU-T P.IXC, 2022].

The complexity of the aforementioned challenges demands a combination of fundamental work, use cases, implementations, demonstrations, and testing. One specific use case whose urgency has become evident in recent years when combining digital and physical realities is that of hybrid conference organization, touching in particular on the challenge of achieving intuitive and natural interactions between remote and physically present participants. We consider this use case in detail in the following section, referring to the organization of the International Conference on Quality of Multimedia Experience (QoMEX) as an example.

Immersive Communication And Collaboration: The Case Of Conference Organization

What seemed impossible and undesirable in the past became a necessity overnight during the COVID-19 pandemic: running conferences as fully virtual events. Many research communities succeeded in adapting their conference organization such that communities could meet, present, demonstrate and socialize online. QoMEX 2020 is one such example, whose organizers introduced a set of innovative instruments for mutual interaction and enjoyment, such as virtual Mozilla Hubs spaces for poster presentations and a music session with prerecorded contributions mixed into a joint performance to be enjoyed virtually together. A previously unseen inventiveness was observed in making the best out of the heavily travel-restricted situation. Furthermore, the technical approaches varied from off-the-shelf systems (such as Zoom or Teams) to custom-built applications. However, the majority of meetings during COVID times, regardless of scale and nature, were run in unnatural 2D on-screen settings. The frequently reported phenomenon of videoconference (VC) fatigue can be attributed to a set of personal, organizational, technical and environmental factors [Döring, 2022]. Indeed, talking to one’s computer with many faces staring back, limited possibilities to move freely, technostress [Brod, 1984] and organizational mishaps made many people tired of a VC technology that was designed for a better purpose but could not get close enough to a natural real-life experience.

As COVID-19 retreated, conferences again became physical events and communities enjoyed meeting again, e.g., at QoMEX 2022. However, voices were raised asking for remote participation for various reasons, such as time or budget restrictions, environmental sustainability considerations, or simply the comfort of being able to work from home. With remote participation came the challenge of bridging between in-person and remote participants, i.e., turning conferences into hybrid events [Bajpai, 2022]. However, experiences from hybrid conferences have been mixed, for both onsite and online participants: (1) onsite participants suffer from interruptions of the session flow needed to fix problems with the online participation tool; their readiness to devote effort, time, and money to participate in a future hybrid event in person might suffer from such issues, which in turn would weaken the corresponding communities; (2) online participants suffer from similar issues, where sound irregularities (echo, excessive sound volumes, etc.) are felt to be particularly disturbing, along with feelings of not being properly included, e.g., in Q&A sessions and personal interactions. At both ends, clear signs of technostress and “us-and-them” feelings can be observed. Consequently, and despite good intentions and advice [Bajpai, 2022], any hybrid conference might miss its main purpose of bringing researchers together to present, discuss and socialize. To avoid the above-listed issues, the post-COVID QoMEX conferences (since 2022) have avoided hybrid operation, with few exceptions.

A conference is a typical case that reveals the difficulties of bringing the physical and digital worlds together [Westerlund, 2020], at least when relying upon state-of-the-art telemeeting approaches that have not explicitly been designed for hybrid and digiphysical operation. At the recent 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2023) in Minneapolis, USA, one of the panel sessions focused on “Realizing Values in Hybrid Environments”. Panelists and audience shared experiences about successes and failures with hybrid events. The main take-aways were as follows: (1) there is a general lack of know-how, regardless of how much funding is allocated, and (2) there is a significant demand for research activities in the area.

Yet, there is hope, as more and more VR, MR, XR and AV-supporting devices and applications keep emerging, enabling new kinds and representations of immersive experiences. In a conference context, the latter implies the feeling of “being there”, i.e., being integrated in the conference community, no matter where the participant is located. This calls for new ways of interacting with others across various realities (VR/MR/XR), which need to be invented, tried and evaluated in order to offer new and meaningful experiences in telemeeting scenarios [Viola, 2023]. Indeed, CSCW 2023 hosted a dedicated workshop titled “Emerging Telepresence Technologies for Hybrid Meetings: an Interactive Workshop”, during which visions, experiences, and solutions were shared and could be experienced locally and remotely. About half of the participants were online, successfully interacting with onsite participants via various techniques.

With these challenges and opportunities in mind, the motto of QoMEX 2024 has been set as “Towards immersive digiphysical experiences.” While the conference is organized as an in-person event, a set of carefully selected hybrid activities will be offered to interested remote participants, such as (1) 360° stereoscopic streaming of the keynote speeches and demo sessions, and (2) the option to take part in so-called hybrid experience demos. The 360° stereoscopic streaming has so far been tested successfully in local, national and transatlantic sessions (during the above-mentioned CSCW workshop) with various settings, and further fine-tuning will be done and tested before the conference. With respect to the demo session, and in addition to traditional onsite demos, the conference will this year in particular solicit hybrid experience demos that enable both onsite and remote participants to test the demo in an immersive environment. Facilities will also be provided for onsite participants to test demos from the perspectives of both a local and a remote user, enabling them to experience different roles. The organizers hope that the hybrid activities of QoMEX 2024 will trigger more research interest in these areas, along and beyond the classical lines of QoE research (performing quantitative subjective studies of QoE features and correlating them with QoE factors).

QoMEX 2024: Towards Immersive Digiphysical Experiences

Concluding Remarks

As immersive experiences extend into both digital and physical worlds and realities, there is a great space for QoE, UX, and QUX-related research to conquer. The recent COVID pandemic forced many users to replace physical with digital meetings, and sustainability considerations have reduced many people’s and organizations’ readiness to (support) travel; at the same time, the shortcomings of hybrid digiphysical meetings have so far prevented them from persuading participants of their superiority over purely online or on-site meetings. One promising path towards a successful integration of the physical and digital worlds consists of trying out, experiencing, reflecting, and deriving important research questions for and beyond the QoE research community. The upcoming QoMEX 2024 conference will be a stop along this road, with carefully selected hybrid experiences aimed at boosting research and best practice in the QoE domain towards immersive digiphysical experiences.

References

  • [Aseeri, 2021] Aseeri, S., & Interrante, V. (2021). The Influence of Avatar Representation on Interpersonal Communication in Virtual Social Environments. IEEE Transactions on Visualization and Computer Graphics, 27(5), 2608-2617.
  • [Bajpai, 2022] Bajpai, V., et al.. (2022). Recommendations for designing hybrid conferences. ACM SIGCOMM Computer Communication Review, 52(2), 63-69.
  • [Brod, 1984] Brod, C. (1984). Technostress: The Human Cost of the Computer Revolution. Basic Books, New York, NY, USA.
  • [Döring, 2022] Döring, N., Moor, K. D., Fiedler, M., Schoenenberg, K., & Raake, A. (2022). Videoconference Fatigue: A Conceptual Analysis. International Journal of Environmental Research and Public Health, 19(4), 2061.
  • [Egger-Lampl, 2019] Egger-Lampl, S., Hammer, F., & Möller, S. (2019). Towards an integrated view on QoE and UX: adding the Eudaimonic Dimension, ACM SIGMultimedia Records, 10(4):5.
  • [Gibbs, 2022] Gibbs, J. K., Gillies, M., & Pan, X. (2022). A comparison of the effects of haptic and visual feedback on presence in virtual reality. International Journal of Human-Computer Studies, 157, 102717.
  • [Hennig-Thurau, 2023] Hennig-Thurau, T., Aliman, D. N., Herting, A. M., Cziehso, G. P., Linder, M., & Kübler, R. V. (2023). Social Interactions in the Metaverse: Framework, Initial Evidence, and Research Roadmap. Journal of the Academy of Marketing Science, 51(4), 889-913.
  • [IMeX WP, 2020] Perkis, A., Timmerer, C., et al., “QUALINET White Paper on Definitions of Immersive Media Experience (IMEx)”, European Network on Quality of Experience in Multimedia Systems and Services, 14th QUALINET meeting (online), May 25, 2020. Online: https://arxiv.org/abs/2007.07032
  • [ISO/IEC 23005] ISO/IEC 23005 (MPEG-V) standards, Media Context and Control, https://mpeg.chiariglione.org/standards/mpeg-v, accessed January 21, 2024.
  • [ISO/IEC 23090] ISO/IEC 23090 (MPEG-I) standards, Coded representation of Immersive Media, https://mpeg.chiariglione.org/standards/mpeg-i, accessed January 21, 2024.
  • [IEEE 2888] IEEE 2888 standards, https://sagroups.ieee.org/2888/, accessed January 21, 2024.
  • [ITU-T Rec. G.1092, 2023] ITU-T Recommendation G.1092 – Taxonomy of telemeetings from a quality of experience perspective, Oct. 2023.
  • [ITU-T Rec. P.1320, 2022] ITU-T Recommendation P.1320 – QoE assessment of extended reality (XR) meetings, 2022.
  • [ITU-T P.IXC, 2022] ITU-T Work Item: Interactive test methods for subjective assessment of extended reality communications, under study, 2022.
  • [Lee, 2021] Lee, L. H. et al. (2021). All One Needs to Know about Metaverse: A Complete Survey on Technological Singularity, Virtual Ecosystem, and Research Agenda. arXiv preprint arXiv:2110.05352.
  • [Metaverse, 2023] Metaverse Standards Forum, https://metaverse-standards.org/
  • [Milgram, 1995] Milgram, P., Takemura, H., Utsumi, A., & Kishino, F. (1995, December). Augmented reality: A class of displays on the reality-virtuality continuum. In Telemanipulator and telepresence technologies (Vol. 2351, pp. 282-292). International Society for Optics and Photonics.
  • [Moslavac, 2023] Moslavac, M., Brzica, L., Drozd, L., Kušurin, N., Vlahović, S., & Skorin-Kapov, L. (2023, July). Assessment of Varied User Representations and XR Environments in Consumer-Grade XR Telemeetings. In 2023 17th International Conference on Telecommunications (ConTEL) (pp. 1-8). IEEE.
  • [Rauschnabel, 2022] Rauschnabel, P. A., Felix, R., Hinsch, C., Shahab, H., & Alt, F. (2022). What is XR? Towards a Framework for Augmented and Virtual Reality. Computers in human behavior, 133, 107289.
  • [NEM WP, 2022] New European Media (NEM), NEM: List of topics for the Work Program 2023-2024.
  • [NEM XR, 2022] New European Media (NEM), NEM contribution to the XR coalition, June 2022.
  • [Perez, 2022] Pérez, P., Gonzalez-Sosa, E., Gutiérrez, J., & García, N. (2022). Emerging Immersive Communication Systems: Overview, Taxonomy, and Good Practices for QoE Assessment. Frontiers in Signal Processing, 2, 917684.
  • [Spittle, 2023] Spittle, B., Frutos-Pascual, M., Creed, C., & Williams, I. (2023). A Review of Interaction Techniques for Immersive Environments. IEEE Transactions on Visualization and Computer Graphics, 29(9), Sept. 2023.
  • [TRANSMIXR] EU HORIZON 2020 TRANSMIXR project, Ignite the Immersive Media Sector by Enabling New Narrative Visions, https://transmixr.eu/
  • [Viola, 2023] Viola, I., Jansen, J., Subramanyam, S., Reimat, I., & Cesar, P. (2023). VR2Gather: A Collaborative Social VR System for Adaptive Multi-Party Real-Time Communication. IEEE MultiMedia, 30(2).
  • [Wang, 2023] Wang, H. et al. (2023). A Survey on the Metaverse: The State-of-the-Art, Technologies, Applications, and Challenges. IEEE Internet of Things Journal, 10(16).
  • [Wang, 2022] Wang, Y. et al. (2022). A Survey on Metaverse: Fundamentals, Security, and Privacy. IEEE Communications Surveys & Tutorials, 25(1).
  • [Westerlund, 2020] Westerlund, T. & Marklund, B. (2020). Community pharmacy and primary health care in Sweden – at a crossroads. Pharm Pract (Granada), 18(2): 1927.

Explainable Artificial Intelligence for Quality of Experience Modelling

Data-driven Quality of Experience (QoE) modelling using Machine Learning (ML) has arisen as a promising alternative to cumbersome and potentially biased manual QoE modelling. However, the reasoning of the majority of ML models is not explainable due to their black-box characteristics, which prevents us from gaining insights into how the model actually relates QoE influence factors to QoE. These fundamental relationships are, however, highly relevant for QoE researchers as well as service and network providers.

With the emerging field of eXplainable Artificial Intelligence (XAI) and its recent technological advances, these issues can now be resolved. As a consequence, XAI enables data-driven QoE modelling to obtain generalizable QoE models and provides us simultaneously with the model’s reasoning on which QoE factors are relevant and how they affect the QoE score. In this work, we showcase the feasibility of explainable data-driven QoE modelling for video streaming and web browsing, before we discuss the opportunities and challenges of deploying XAI for QoE modelling.

Introduction

In order to enhance services and networks and prevent users from switching to competitors, researchers and service providers need a deep understanding of the factors that influence the Quality of Experience (QoE) [1]. However, developing an effective QoE model is a complex and costly endeavour. Typically, it requires dedicated and extensive studies, which can only cover a limited portion of the parameter space and may be influenced by the study design. These studies often generate a relatively small sample of QoE ratings from a comparatively small population, making the resulting models vulnerable to poor performance when applied to unseen data. Moreover, the process of collecting and processing data for QoE modelling is not only arduous and time-consuming, but it can also introduce biases and self-fulfilling prophecies, such as perceiving an exponential relationship when one is expected.

To overcome these challenges, data-driven QoE modelling utilizing machine learning (ML) has emerged as a promising alternative, especially in scenarios where there is a wealth of data available or where data streams can be continuously obtained. A notable example is the ITU-T standard P.1203 [2], which estimates video streaming QoE by combining manual modelling – accounting for 75% of the Mean Opinion Score (MOS) estimation – and ML-based Random Forest modelling – accounting for the remaining 25%. The inclusion of the ML component in P.1203 indicates its ability to enhance performance. However, the inner workings of P.1203’s Random Forest model, specifically how it calculates the output score, are not obvious. Also, the survey in [3] shows that ML-based QoE modelling in multimedia systems is already widely used, including Virtual Reality, 360-degree video, and gaming. However, the QoE models are based on shallow learning methods, e.g., Support Vector Machines (SVM), or on deep learning methods, which lack explainability. Thus, it is difficult to understand what QoE factors are relevant and how they affect the QoE score [13], resulting in a lack of trust in data-driven QoE models and impeding their widespread adoption by researchers and providers [14].

Fortunately, recent advancements in the field of eXplainable Artificial Intelligence (XAI) [6] have paved the way for interpretable ML-based QoE models, thereby fostering trust between stakeholders and the QoE model. These advancements encompass a diverse range of XAI techniques that can be applied to existing black-box models, as well as novel and sophisticated ML models designed with interpretability in mind. Considering the use case of modelling video streaming QoE from real subjective ratings, the work in [4] evaluates the feasibility of explainable, data-driven QoE modelling and discusses the deployment of XAI for QoE research.

The utilization of XAI for QoE modelling brings several benefits. Not only does it speed up the modelling process, but it also enables the identification of the most influential QoE factors and their fundamental relationships with the Mean Opinion Score (MOS). Furthermore, it helps eliminate biases and preferences from different research teams and datasets that could inadvertently influence the model. All that is required is a suitable dataset with descriptive features and corresponding QoE ratings (labels), which covers the most important QoE influence factors and, in particular, also rare events, e.g., many stalling events in a session. Generating such complete datasets, however, remains an open research question and calls for data-centric AI [15]. By merging datasets from various studies, more robust and generalizable QoE models could theoretically be created; these studies need to have a common ground, though. Another benefit is that the models can be automatically refined over time as new QoE studies are conducted and additional data becomes available.

XAI: eXplainable Artificial Intelligence

For a comprehensive understanding of eXplainable Artificial Intelligence (XAI), a general overview can be found in [5], while a thorough survey on XAI methods and a taxonomy of XAI methods, in general, is available in [6].

XAI methods can be categorized into two main types: local and global explainability techniques. Local explainability aims to provide explanations for individual stimuli in terms of QoE factors and QoE ratings. On the other hand, global explainability focuses on offering general reasoning for how a model derives the QoE rating from the underlying QoE factors. Furthermore, XAI methods can be classified into post-hoc explainers and interpretable models.

Post-hoc explainers [6] are commonly used to explain various black-box models, such as neural networks or ensemble techniques, after they have been trained. One widely utilized post-hoc explainer is SHAP (SHapley Additive exPlanations) values [7], which originate from game theory. SHAP values quantify the contribution of each feature to the model’s prediction by considering all possible feature subsets and learning a model for each subset. Other post-hoc explainers include LIME and Anchors, although they are limited to classification tasks.
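
As an illustration of how such a post-hoc explainer can be applied to a black-box QoE model, the following minimal Python sketch trains a random forest on synthetic session features and explains it with SHAP. The feature names and the toy data are assumptions for illustration only and are not taken from [4] or [7].

```python
# Minimal sketch: post-hoc explanation of a black-box QoE model with SHAP.
# Feature names and synthetic data are illustrative, not from [4] or [7].
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "avg_bitrate_kbps": rng.uniform(200, 8000, 500),
    "stalling_duration_s": rng.exponential(2.0, 500),
    "quality_switches": rng.integers(0, 10, 500),
})
# Toy ground truth: logarithmic bitrate effect, penalties for stalling/switches.
mos = (1 + 0.55 * np.log(X["avg_bitrate_kbps"])
       - 0.15 * X["stalling_duration_s"]
       - 0.05 * X["quality_switches"]).clip(1, 5)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, mos)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation: per-feature contribution to the first predicted MOS.
print(dict(zip(X.columns, shap_values[0])))
# Global explanation: mean |SHAP value| per feature as an importance ranking.
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))
```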

Interpretable models, by design, provide explanations for how the model arrives at its output. Well-known interpretable models include linear models and decision trees. Additionally, generalized additive models (GAM) are gaining recognition as interpretable models.

A GAM is a generalized linear model in which the model output is computed by summing arbitrarily transformed input features along with a bias [8]. The form of a GAM enables a direct interpretation of the model: analyzing the learned functions and the transformed inputs makes it possible to estimate the influence of each feature. Two state-of-the-art ML-based GAM models are the Explainable Boosting Machine (EBM) [9] and the Neural Additive Model (NAM) [8]. While EBM uses decision trees to learn the functions and gradient boosting to improve training, NAM utilizes arbitrary neural networks to learn the functions, resulting in a neural network architecture with one sub-network per feature. EBM extends the GAM formulation by also considering additional pairwise feature interaction terms while maintaining explainability.
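
To make the GAM idea concrete, the following minimal sketch fits an EBM with the InterpretML package [9] on synthetic session features and retrieves its global (per-feature shape functions) and local (per-session contributions) explanations. The feature names and data are illustrative assumptions; a NAM would require a separate neural-network implementation and is not shown here.

```python
# Minimal sketch: fitting an Explainable Boosting Machine (a tree-based GAM
# with optional pairwise interaction terms) using the InterpretML package [9].
# Feature names and synthetic data are illustrative, not the dataset from [4].
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "avg_bitrate_kbps": rng.uniform(200, 8000, 500),
    "stalling_duration_s": rng.exponential(2.0, 500),
    "quality_switches": rng.integers(0, 10, 500),
})
mos = (1 + 0.55 * np.log(X["avg_bitrate_kbps"])
       - 0.15 * X["stalling_duration_s"]
       - 0.05 * X["quality_switches"]).clip(1, 5)

ebm = ExplainableBoostingRegressor(interactions=10, random_state=0)
ebm.fit(X, mos)

# Global explanation: one learned shape function per feature (plus interaction
# terms), analogous to the per-feature curves shown in Figure 1.
print(ebm.explain_global().data(0))   # shape-function data for the first feature

# Local explanation: additive per-feature contributions for a single session.
print(ebm.explain_local(X.iloc[:1], mos.iloc[:1]).data(0))
```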

Exemplary XAI-based QoE Modelling using GAMs

We demonstrate the learned predictor functions of both EBM (red) and NAM (blue) on a video QoE dataset in Figure 1. All technical details about the dataset and the methodology can be found in [4]. We observe that both models provide smooth shape functions, which are easy to interpret. EBM and NAM differ only marginally, and mostly in areas where the data density is low. Here, EBM outperforms NAM by overfitting to single data points using the feature interaction terms. This can be seen, for example, for a high total stalling duration and a high number of quality switches, where at some point EBM halts the negative trend and sharply reverses its previous trend to improve predictions for extreme outliers.

Figure 1: EBM and NAM for video QoE modelling

Using the smooth predictor functions, it is easy to apply curve fitting. In the bottom right plot of Figure 1, we fit the average bitrate predictor function of NAM, which was shifted by the average MOS of the dataset to obtain the original MOS scale on the y-axis, on an inverted x-axis using exponential (IQX), logarithmic (WQL), and linear (LIN) functions. Note that this constitutes a univariate mapping of average bitrate to MOS, neglecting the other influencing factors. We observe that our predictor function follows the WQL hypothesis [10] (red) with a high R² = 0.967. This is in line with the mechanics of P.1203, where the authors of [11] showed the same logarithmic behavior for the bitrate in mode 0.
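
The curve-fitting step can be reproduced with standard tooling; the sketch below fits logarithmic (WQL-style), exponential (IQX-style), and linear candidates to a bitrate-vs-MOS shape function and compares their R² values. The sample points are an illustrative stand-in for the NAM predictor function from Figure 1, not the actual data from [4].

```python
# Minimal sketch: fitting WQL (logarithmic), IQX (exponential), and linear
# candidate functions to a bitrate-vs-MOS shape function, then comparing R².
# The (bitrate, mos) samples below are illustrative, not the curves from [4].
import numpy as np
from scipy.optimize import curve_fit
from sklearn.metrics import r2_score

bitrate = np.linspace(200, 8000, 50)                         # kbps
mos_curve = np.clip(1 + 0.55 * np.log(bitrate / 150), 1, 5)  # toy shape function

def wql(x, a, b):        # logarithmic hypothesis (WQL)
    return a * np.log(x) + b

def iqx(x, a, b, c):     # exponential hypothesis (IQX)
    return a * np.exp(-b * x) + c

def lin(x, a, b):        # linear baseline
    return a * x + b

for name, f, p0 in [("WQL", wql, (1, 0)),
                    ("IQX", iqx, (3, 1e-3, 1)),
                    ("LIN", lin, (1e-3, 1))]:
    params, _ = curve_fit(f, bitrate, mos_curve, p0=p0, maxfev=10000)
    r2 = r2_score(mos_curve, f(bitrate, *params))
    print(f"{name}: params={np.round(params, 4)}, R²={r2:.3f}")
```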

Figure 2: EBM and NAM for web QoE modelling

As the presented XAI methods are universally applicable to any QoE dataset, Figure 2 shows a similar GAM-based QoE modelling for a web QoE dataset obtained from [12]. We can see that the loading behavior in terms of ByteIndex-Page Load Time (BI-PLT) and time to last byte (TTLB) has the strongest impact on web QoE. Moreover, we see that different URLs/webpages affect the MOS differently, which shows that web QoE is content dependent. Summarizing, using GAMs, we obtain valuable, easy-to-interpret functions, which explain fundamental relationships between QoE factors and MOS. Nevertheless, further XAI methods can be utilized, as detailed in [4,5,6].

Discussion

In addition to expediting the modelling process and mitigating modelling biases, data-driven QoE modelling offers significant advantages in terms of improved accuracy and generalizability compared to manual QoE models. ML-based models are not constrained to specific classes of continuous functions typically used in manual modelling, allowing them to capture more complex relationships present in the data. However, a challenge with ML-based models is the risk of overfitting, where the model becomes overly sensitive to noise and fails to capture the underlying relationships. Overfitting can be avoided through techniques like model regularization or by collecting sufficiently large or complete datasets.

Successful implementation of data-driven QoE modelling relies on purposeful data collection. It is crucial to ensure that all (or at least the most important) QoE factors are included in the dataset, covering their full parameter range with an adequate number of samples. Controlled lab or crowdsourcing studies can define feature values easily, but budget constraints (time and cost) often limit data collection to a small set of selected feature values. Conversely, field studies can encompass a broader range of feature values observed in real-world scenarios, but they may only gather limited data samples for rare events, such as video sessions with numerous stalling events. To prevent data bias, it is essential to balance feature values, which may require purposefully generating rare events in the field. Additionally, thorough data cleaning is necessary. While it is possible to impute missing features resulting from measurement errors, doing so increases the risk of introducing bias. Hence, it is preferable to filter out missing or unusual feature values.
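
The kind of pre-processing described above could look as follows in a minimal pandas sketch: missing or implausible feature values are filtered out rather than imputed, and the coverage of a rare event (stalling) is inspected. The file name and column names are hypothetical.

```python
# Minimal sketch of the pre-processing described above: filter out missing or
# implausible feature values instead of imputing them, and inspect how well a
# rare event (here: stalling) is covered. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("qoe_study.csv")   # hypothetical merged study export

# Drop sessions with missing features rather than imputing (avoids added bias).
df = df.dropna(subset=["avg_bitrate_kbps", "stalling_duration_s", "mos"])

# Remove implausible measurements (e.g., negative durations, MOS outside 1..5).
df = df[(df["stalling_duration_s"] >= 0) & df["mos"].between(1, 5)]

# Check coverage of rare events: how many sessions contain heavy stalling?
heavy_stalling = (df["stalling_duration_s"] > 10).mean()
print(f"Share of sessions with >10 s stalling: {heavy_stalling:.1%}")
# If this share is very small, the study design should purposefully generate
# more such sessions before the model is (re-)trained.
```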

Moreover, adding new data and retraining an ML model is a natural and straightforward process in data-driven modelling, offering long-term advantages. Eventually, data-driven QoE models would be capable of handling concept drift, which refers to changes in the importance of influencing factors over time, such as altered user expectations. However, QoE studies are usually conducted only as temporal and population-based snapshots, limiting frequent model updates. Ideally, a pipeline could be established to provide a continuous stream of features and QoE ratings, enabling online learning and ensuring the QoE models remain up to date. Although challenging for research endeavors, service providers could incorporate such QoE feedback streams into their applications.

Comparing black-box and interpretable ML models, there is a slight trade-off between performance and explainability. However, as shown in [4], it should be negligible in the context of QoE modelling. Instead, XAI makes it possible to fully understand the model decisions, identifying relevant QoE factors and their relationships to the QoE score. Nevertheless, it has to be considered that explaining models becomes inherently more difficult as the number of input features increases. Highly correlated features and interactions may further lead to misinterpretations when using XAI, since the influence of a feature may also depend on other features. To obtain reliable and trustworthy explainable models, it is therefore crucial to exclude highly correlated features.
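
A minimal sketch of such a correlation-based feature pruning step is shown below; the synthetic features and the 0.9 correlation threshold are assumptions for illustration, not values prescribed in [4].

```python
# Minimal sketch: drop one feature from each highly correlated pair before
# training an interpretable model. Feature names, synthetic data, and the
# 0.9 threshold are assumptions for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
features = pd.DataFrame({
    "avg_bitrate_kbps": rng.uniform(200, 8000, 300),
    "stalling_duration_s": rng.exponential(2.0, 300),
})
# A feature almost redundant with bitrate (e.g., an average-resolution proxy).
features["avg_resolution_height"] = (features["avg_bitrate_kbps"] * 0.12
                                     + rng.normal(0, 20, 300))

corr = features.corr().abs()
# Keep only the upper triangle so each feature pair is inspected once.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]

print("Dropping highly correlated features:", to_drop)
features_pruned = features.drop(columns=to_drop)
```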

Finally, although we demonstrated XAI-based QoE modelling only for video streaming and web browsing, from a research perspective it is important to understand that the whole process is easily applicable to other domains such as speech or gaming. Apart from that, it can also be highly beneficial for providers of services and networks to use XAI when implementing continuous QoE monitoring. They could integrate visualizations of trends like Figure 1 or Figure 2 into dashboards, allowing them to easily obtain a deeper understanding of the QoE in their systems.

Conclusion

In conclusion, the progress in technology has made data-driven explainable QoE modeling suitable for implementation. As a result, it is crucial for researchers and service providers to consider adopting XAI-based QoE modeling to gain a comprehensive and broader understanding of the factors influencing QoE and their connection to users’ subjective experiences. By doing so, they can enhance services and networks in terms of QoE, effectively preventing user churn and minimizing revenue losses.

References

[1] K. Brunnström, S. A. Beker, K. De Moor, A. Dooms, S. Egger, M.-N. Garcia, T. Hossfeld, S. Jumisko-Pyykkö, C. Keimel, M.-C. Larabi et al., “Qualinet White Paper on Definitions of Quality of Experience,” 2013.

[2] W. Robitza, S. Göring, A. Raake, D. Lindegren, G. Heikkilä, J. Gustafsson, P. List, B. Feiten, U. Wüstenhagen, M.-N. Garcia et al., “HTTP Adaptive Streaming QoE Estimation with ITU-T Rec. P. 1203: Open Databases and Software,” in ACM MMSys, 2018

[3] G. Kougioumtzidis, V. Poulkov, Z. D. Zaharis, and P. I. Lazaridis, “A Survey on Multimedia Services QoE Assessment and Machine Learning-Based Prediction,” IEEE Access, 2022.

[4] N. Wehner, A. Seufert, T. Hoßfeld, and M. Seufert, “Explainable Data-Driven QoE Modelling with XAI,” QoMEX, 2023.

[5] C. Molnar, Interpretable Machine Learning, 2nd ed., 2022. Available: https://christophm.github.io/interpretable-ml-book

[6] A. B. Arrieta, N. Díaz-Rodríguez et al., “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI,” Information Fusion, 2020.

[7] S. M. Lundberg and S.-I. Lee, “A Unified Approach to Interpreting Model Predictions,” NIPS, 2017.

[8] R. Agarwal, L. Melnick, N. Frosst, X. Zhang, B. Lengerich, R. Caruana, and G. E. Hinton, “Neural Additive Models: Interpretable Machine Learning with Neural Nets,” NIPS, 2021.

[9] H. Nori, S. Jenkins, P. Koch, and R. Caruana, “InterpretML: A Unified Framework for Machine Learning Interpretability,” arXiv preprint arXiv:1909.09223, 2019.

[10] T. Hoßfeld, R. Schatz, E. Biersack, and L. Plissonneau, “Internet Video Delivery in YouTube: From Traffic Measurements to Quality of Experience,” in Data Traffic Monitoring and Analysis, 2013.

[11] M. Seufert, N. Wehner, and P. Casas, “Studying the Impact of HAS QoE Factors on the Standardized QoE Model P.1203,” in ICDCS, 2018.

[12] D. N. da Hora, A. S. Asrese, V. Christophides, R. Teixeira, D. Rossi, “Narrowing the gap between QoS metrics and Web QoE using Above-the-fold metrics,” PAM, 2018

[13] A. Seufert, F. Wamser, D. Yarish, H. Macdonald, and T. Hoßfeld, “QoE Models in the Wild: Comparing Video QoE Models Using a Crowdsourced Data Set”, in QoMEX, 2021

[14] D. Shin, “The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI”, in International Journal of Human-Computer Studies, 2021.

[15] D. Zha, Z. P. Bhat, K. H. Lai, F. Yang, and X. Hu, “Data-centric AI: Perspectives and Challenges,” in SIAM International Conference on Data Mining, 2023.

Sustainability vs. Quality of Experience: Striking the Right Balance for Video Streaming

The exponential growth in Internet data traffic, driven by the widespread use of video streaming applications, has resulted in increased energy consumption and carbon emissions. This outcome is primarily due to higher-resolution and higher-framerate content and the ability to watch videos on various end-devices. However, efforts to reduce the energy consumption of video streaming services may have unintended consequences for users’ Quality of Experience (QoE). This column delves into the intricate relationship between QoE and energy consumption, considering the impact of different bitrates on end-devices. We also consider other factors to provide a more comprehensive understanding of whether these end-devices have a significant environmental impact. It is essential to carefully weigh the trade-offs between QoE and energy consumption to make informed decisions and develop sustainable practices in video streaming services.

Energy Consumption for Video Streaming

In the past few years, we have seen a remarkable expansion in how online content is delivered. According to Sandvine’s 2023 Global Internet Phenomena Report [1], video usage on the Internet increased by 24% in 2022 and now accounts for 65% of all Internet traffic. This surge is mainly due to the growing popularity of video streaming services. Videos have become an increasingly popular form of online content, capturing a significant portion of Internet users’ attention and shaping how we consume information and entertainment online. The rising quality expectations of end-users have therefore necessitated research on and implementation of video streaming management approaches that consider the Quality of Experience (QoE) [2]. The idea is to develop applications that can work within the energy and resource limits of end-devices while still delivering the Quality of Service (QoS) needed for smooth video viewing and an improved user experience (QoE). Even though video streaming services are advancing quickly, energy consumption remains a significant issue, raising concerns about its impact and the urgent need to boost energy efficiency [14].

The literature identifies four main elements when analysing the energy consumption of video streaming: the data centres, the data transmission networks, the end-devices, and consumer behaviour [3]. In this regard, in [4], the authors present a comprehensive review of existing literature on the energy consumption of online video streaming services. They then outline the potential actions that both service providers and consumers can take to promote sustainable video streaming, drawing from the literature studies discussed. Their summary of currently possible actions for sustainable video streaming, from both the provider’s and the consumer’s perspective, is organized into the following segments, together with some of the possible solutions:

  • Data center: CDNs (Content Delivery Networks) can be utilized to offload content/applications to the edge from the provider’s side; choosing providers that prioritize sustainability from the consumer’s side.
  • Data transmission network: data compression/encoding algorithms from the provider’s side; no autoplay from the consumer’s side.
  • End-device: producing energy-efficient devices from the provider’s side; preferring small-size (mobile) devices from the consumer’s side.
  • Consumer behaviour: reducing the number of subscribers from the provider’s side; preferring watching videos with other people rather than alone from the consumer’s side.

Finally, they noted that the end-device and consumer behaviour are the primary contributors to the energy cost of the video streaming process. This finding motivates actions such as reducing video resolution and using smaller devices. However, taking such actions may have a potential downside, as they can negatively impact QoE due to their effect on video quality. In [5], the authors found that by sacrificing the maximum QoE and aiming for good quality instead (e.g., a MOS of 4 = Good instead of a MOS of 5 = Excellent), significant energy savings can be achieved in video-conferencing services. This is possible by using lower video bitrates, since higher bitrates result in higher energy consumption, as per their logarithmic QoE model. Building on this research, the authors of [4] propose identifying an acceptable level of QoE, rather than striving for maximum QoE, as a potential solution to reduce energy consumption while still meeting consumer satisfaction. They conducted a crowdsourcing survey to gather real consumer opinions on their willingness to save energy while streaming online videos, and then analysed the survey results to understand how willing people are to lower video streaming quality in order to achieve energy savings.

Green Video Streaming: The Trade-Off Between QoE and Energy Consumption

To provide a trade-off between QoE and energy consumption, we looked at the connection between the video bitrate at standard resolutions, electricity usage, and perceived QoE for a video streaming service on four different devices (smartphone, tablet, laptop/PC, and smart TV), as taken from [4].

They calculated the energy consumption of streaming on each device using the model provided in [6]: Q_i = t_i * (P_i + R_i * ρ). In this equation, Q_i represents the electricity consumption (in kWh) of the i-th device, t_i denotes the streaming duration (in hours per week) of the i-th device, P_i represents the power load (in kW) of the i-th device, R_i denotes the data traffic (in GB/h) for a specific bitrate, and ρ = 0.1 kWh/GB represents the electricity intensity of data traffic.
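
As a quick illustration, the per-device electricity model can be written as a small Python function; the example device parameters below are illustrative assumptions, not the values used in [4] or [6].

```python
# Sketch of the per-device electricity model from [6] as described above:
# Q_i = t_i * (P_i + R_i * rho). Parameter values below are illustrative.
RHO_KWH_PER_GB = 0.1  # electricity intensity of data traffic (kWh/GB)

def streaming_energy_kwh(hours_per_week: float,
                         device_power_kw: float,
                         traffic_gb_per_hour: float,
                         rho: float = RHO_KWH_PER_GB) -> float:
    """Weekly electricity consumption Q_i (kWh) of one device."""
    return hours_per_week * (device_power_kw + traffic_gb_per_hour * rho)

# Example: 10 h/week on a smart TV (0.1 kW) streaming ~2.25 GB/h (5 Mbps).
print(streaming_energy_kwh(hours_per_week=10,
                           device_power_kw=0.10,
                           traffic_gb_per_hour=2.25))
```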

Then, to estimate the perceived QoE based on the video bitrate, the authors employed a QoE model from [7], as noted in their analysis: QoE = a * br^b + c, where br represents the bitrate, and a, b, and c are regression coefficients calculated for specific resolutions.

Taking this into account, we can establish a link between the QoE model, energy consumption, and the perceived QoE associated with the video bitrate across various end-devices. We therefore applied the green QoE model from [8] to provide a trade-off between the perceived QoE and the energy consumption calculated above, in the following way: f_γ(x) = 4 / (log(x’_5) - log(x_1)) * log(x) + (log(x’_5) - 5 * log(x_1)) / (log(x’_5) - log(x_1)). This equation represents the mapping function between video bitrate and Mean Opinion Scores (MOS), considering both the minimum bitrate x_1 corresponding to MOS 1 and the maximum bitrate x_5 corresponding to MOS 5. Moreover, the factor γ, representing the greenness of a user, enters via the maximum bitrate x’_5 = x_5/γ, which results in a MOS of 5.
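
The following sketch implements this bitrate-to-MOS mapping for an HQ user (γ = 1) and a green user (γ > 1); the anchor bitrates x_1 and x_5 are illustrative assumptions rather than the values used in [4] or [8].

```python
# Sketch of the green-user bitrate-to-MOS mapping f_gamma described above:
# MOS 1 is reached at bitrate x1, MOS 5 at x5' = x5 / gamma. The x1 and x5
# values below are illustrative, not taken from [4] or [8].
import numpy as np

def green_mos(bitrate_kbps, x1=200.0, x5=15000.0, gamma=1.0):
    """Logarithmic MOS mapping; gamma > 1 models a 'greener' user."""
    x5_prime = x5 / gamma
    mos = ((4.0 / (np.log(x5_prime) - np.log(x1))) * np.log(bitrate_kbps)
           + (np.log(x5_prime) - 5.0 * np.log(x1))
           / (np.log(x5_prime) - np.log(x1)))
    return np.clip(mos, 1.0, 5.0)

br = 6000  # kbps
print(green_mos(br, gamma=1.0))  # HQ (non-green) user
print(green_mos(br, gamma=2.0))  # green user: same bitrate, higher MOS
```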

The model focuses on the concept of a “green user,” who considers the energy consumption aspect in their overall QoE evaluations. Thus, a green user might rate their QoE slightly lower in order to reduce their carbon footprint compared to a high-quality (HQ) user (or “non-green” user) who prioritizes QoE without considering energy consumption.

The numerical results for the energy consumption (in kWh) and the MOS scores as a function of the video bitrate can be approximated with linear and logarithmic regressions, respectively. In Figure 1, the graph depicts a linear regression analysis conducted to examine the relationship between energy consumption (kWh) and bitrate (kbps). The y-axis represents energy consumption, while the x-axis represents the bitrate (kbps). The graph displays a straight-line trend that starts at 1.6 kWh and extends up to 3.5 kWh as the bitrate increases. The linear fitting function used for the analysis is formulated as kWh = f(bitrate) = a * bitrate + c, where a represents the slope and c represents the y-intercept of the line.
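
Such a linear fit is straightforward to reproduce, for example with numpy.polyfit as sketched below; the (bitrate, kWh) sample points are illustrative, not the measured values behind Figure 1.

```python
# Sketch of the linear fit kWh = a * bitrate + c described for Figure 1.
# The (bitrate, kWh) points are illustrative, not the measured values from [4].
import numpy as np

bitrate_kbps = np.array([1000, 2000, 4000, 8000, 16000])
energy_kwh   = np.array([1.6, 1.8, 2.2, 2.8, 3.5])   # rising roughly linearly

a, c = np.polyfit(bitrate_kbps, energy_kwh, deg=1)    # slope and intercept
print(f"kWh ≈ {a:.2e} * bitrate + {c:.2f}")
print("Predicted consumption at 6000 kbps:", a * 6000 + c, "kWh")
```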

Figure 1 visually illustrates how energy consumption tends to increase with higher bitrates, as indicated by the positive slope of the linear regression line. One notable observation is that as video bitrates increase, the electricity consumption of end-devices also tends to increase. This can be attributed to the larger amount of data traffic generated by higher-resolution video content, which requires higher bitrates for transmission. Consequently, smart TVs are likely to consume more energy than other devices. This finding is consistent with the results obtained from the linear regression model described in [4], further validating the relationship between bitrate and energy consumption.

As illustrated in Figure 2, the relationship between MOS and video bitrate (kbps) follows a logarithmic pattern. Therefore, we can use a straightforward QoE model to estimate the MOS if information about the video bitrate is available. This can be achieved by utilizing a logarithmic regression model MOS(x), where MOS = f(x) = a * log(x) + c, with x representing the video bitrate in Mbps, and a and c being coefficients, as provided in [9]. The MOS and video bitrate (kbps) values in [4] are then applied to the above-mentioned green QoE model equation, which extends this logarithmic regression model [8]. This relationship allows us to determine the green-user QoE model, which we exemplify for the smart TV (using γ = 2 in f_γ(x)).

In Figure 2, users are categorized into two groups: those who prioritize high-quality (HQ) video regardless of energy consumption, and green users who prioritize energy efficiency while still being satisfied with slightly lower video quality. It can be observed that the MOS value changes with video quality faster on smart TVs than on other end-devices. This is evident from the steeper curve in the smart TV section. On the other hand, the curve for tablets shows that changes in bitrate have a milder impact on MOS values. This outcome suggests that video streaming on smaller screens, such as tablets or laptops, may contribute less to the perception of quality changes. Considering that such small-screen devices consume less energy than larger-screen devices, it may be preferable to use lower-resolution videos instead of high-resolution ones. Comparing laptops and tablets, it can be seen that low-resolution video streaming on laptops resulted in lower MOS scores than on the tablet. From this result, it can be inferred that the choice of end-device and user behaviour play a significant role in energy savings. Figure 2 further indicates that the MOS values for the green user of a smart TV are comparable to the MOS values of an HQ user using a laptop.

Concerning this outcome, the authors in [10] presented the results of a subjective assessment aimed at investigating how different factors, such as video resolution, luminance, and end-devices (TV, laptop, and smartphone), impact the QoE and energy consumption of video streaming services. The study found that, in certain conditions (such as dark or bright environments, low device backlight luminance, or small-screen devices), users may need to strike a balance between acceptable QoE and sustainable (green) choices, as consuming more energy (e.g., by streaming higher-quality videos) may not significantly enhance the QoE.

Therefore, Figure 3 plots the trade-off between energy consumption (kWh) and MOS for the end-devices (smart TV, laptop and tablet). We differentiate between the HQ user and the green user, which reveals some interesting results. First, a MOS of 4 leads to comparable energy consumption for green and HQ users; the relative differences are rather small. However, aiming for the best quality (MOS 5) leads to significant differences. Furthermore, it can be seen that the device type has a significant impact on energy consumption. Even for green users, who rate lower bitrates with higher MOS scores than HQ users, the energy consumption of the smart TV is much higher than that of laptop and tablet users at any quality (i.e., bitrate). Thus, device type and user behaviour are essential to strike the right balance between QoE and energy consumption.

Discussions and Future Research

Meeting the QoE expectations of end-users is essential to fulfilling the requirements of video streaming services. As users are the primary viewers of streaming videos in most real-world scenarios, subjective QoE assessment [11] provides a direct and dependable means to evaluate the perceptual quality of video streaming. Furthermore, there is a growing need to create objective QoE assessment models, such as those provided in [12][13]. However, many studies have focused on investigating the QoE obtained through subjective and objective models and have overlooked the consideration of energy consumption in video streaming.

Therefore, in the previous section, we have discussed how the different elements within the video streaming ecosystem play a role in consuming energy and emitting CO2. The findings pave the way for an objective answer to the question of an appropriate, optimal video bitrate for viewing, considering both QoE and sustainability, which can be further explored in future research.

It is evident that addressing energy consumption and emissions is crucial for the future of video streaming systems, while ensuring that end-users’ QoE is not compromised poses a significant and ongoing challenge. Potential solutions that prevent an increase in energy consumption while still satisfying the user include streaming videos on smaller-screen devices and watching lower-resolution videos that offer sufficient quality instead of the highest-resolution ones. This highlights the importance of user behavior in limiting energy consumption. Additionally, trade-off models can be developed using the green QoE model (especially for smart TVs) by identifying ideal bitrate values for energy savings and user satisfaction in terms of QoE.

Delving deeper into the dynamics of the video streaming ecosystem, it becomes increasingly clear that energy consumption and emissions are critical concerns that must be addressed for the sustainable future of video streaming systems. The environmental impact of video streaming, particularly in terms of carbon emissions, cannot be overstated. With growing awareness of the urgent need to combat climate change, mitigating the environmental footprint of video streaming has become a pressing priority.

As video streaming technologies evolve, optimizing energy-efficient approaches without compromising users’ QoE is a complex task. End-users, who expect seamless and high-quality video streaming experiences, should not be deprived of their QoE while energy and emissions concerns are being addressed. This opens the door to an objective answer to the question of what constitutes an appropriate, optimal video bitrate for viewing that takes into account both QoE and sustainability concerns.

Future research in this area is crucial to explore innovative techniques and strategies that can effectively reduce the energy consumption and carbon emissions of video streaming systems without sacrificing the QoE. Additionally, collaborative efforts among stakeholders, including researchers, industry practitioners, policymakers, and end-users, are essential in devising sustainable video streaming solutions that consider both environmental and user experience factors [14].

In conclusion, the discussions on the relationship between energy consumption, emissions, and QoE in video streaming systems emphasize the need for continued research and innovation to achieve a sustainable balance between environmental sustainability and user satisfaction.

References

  • [1] Sandvine. The Global Internet Phenomena Report. January 2023. Retrieved April 24, 2023
  • [2] M. Seufert, S. Egger, M. Slanina, T. Zinner, T. Hoßfeld and P. Tran-Gia, “A Survey on Quality of Experience of HTTP Adaptive Streaming,” IEEE Communications Surveys & Tutorials, vol. 17, no. 1, pp. 469-492, Firstquarter 2015, doi: 10.1109/COMST.2014.2360940.
  • [3] Reinhard Madlener, Siamak Sheykhha, Wolfgang Briglauer, “The electricity- and CO2-saving potentials offered by regulation of European video-streaming services,” Energy Policy, vol. 161, p. 112716, 2022.
  • [4] G. Bingöl, S. Porcu, A. Floris and L. Atzori, “An Analysis of the Trade-off between Sustainability,” in IEEE ICC Workshop-GreenNet, Rome, 2023.
  • [5] T. Hoßfeld, M. Varela, L. Skorin-Kapov, P. E. Heegaard, “What is the trade-off between CO2 emission and video-conferencing QoE?,” ACM SIGMM Records, 2022.
  • [6] P. Suski, J. Pohl, and V. Frick, “All you can stream: Investigating the role of user behavior for greenhouse gas intensity of video streaming,” in Proc. of the 7th Int. Conf. on ICT for Sustainability, 2020, pp. 128–138.
  • [7] M. Mu, M. Broadbent, A. Farshad, N. Hart, D. Hutchison, Q. Ni, and N. Race, “A Scalable User Fairness Model for Adaptive Video Streaming Over SDN-Assisted Future Networks,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 8, p. 2168–2184, 2016.
  • [8] T. Hossfeld, M. Varela, L. Skorin-Kapov and P. E. Heegaard, “A Greener Experience: Trade-offs between QoE and CO2 Emissions in Today’s and 6G Networks,” IEEE Communications Magazine, pp. 1-7, 2023.
  • [9] J. P. López, D. Martín, D. Jiménez and J. M. Menéndez, “Prediction and Modeling for No-Reference Video Quality Assessment Based on Machine Learning,” in 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), IEEE, pp. 56-63, Las Palmas de Gran Canaria, Spain, 2018.
  • [10] G. Bingöl, A. Floris, S. Porcu, C. Timmerer and L. Atzori, “Are Quality and Sustainability Reconcilable? A Subjective Study on Video QoE, Luminance and Resolution,” in 15th International Conference on Quality of Multimedia Experience (QoMEX), Gent, Belgium, 2023.
  • [11] G. Bingol, L. Serreli, S. Porcu, A. Floris, L. Atzori, “The Impact of Network Impairments on the QoE of WebRTC applications: A Subjective study,” in 14th International Conference on Quality of Multimedia Experience (QoMEX), Lippstadt, Germany, 2022.
  • [12] D. Z. Rodríguez, R. L. Rosa, E. C. Alfaia, J. I. Abrahão and G. Bressan, “Video quality metric for streaming service using DASH standard,” IEEE Trans. Broadcasting, vol. 62, no. 3, pp. 628-639, Sep. 2016.
  • [13] T. Hoßfeld, M. Seufert, C. Sieber and T. Zinner, “Assessing effect sizes of influence factors towards a QoE model for HTTP adaptive streaming,” in 6th Int. Workshop Qual. Multimedia Exper. (QoMEX), Sep. 2014.
  • [14] S. Afzal, R. Prodan, C. Timmerer, “Green Video Streaming: Challenges and Opportunities.” ACM SIGMultimedia Records, Jan. 2023.

Green Video Streaming: Challenges and Opportunities

Introduction

According to the Intergovernmental Panel on Climate Change (IPCC) report from 2021 and Sustainable Development Goal (SDG) 13 “climate action”, urgent action against climate change and global greenhouse gas (GHG) emissions is needed in the next few years [1]. This urgency also applies to the energy consumption of digital technologies. Internet data traffic is responsible for more than half of digital technology’s global impact, accounting for 55% of its annual energy consumption. The Shift Project forecast [2] shows a 25% annual increase in data traffic associated with 9% more energy consumption per year, reaching 8% of all GHG emissions in 2025.

Video flows represented 80% of global data flows in 2018, and this video data volume is increasing by 80% annually [2]. This exponential increase in streaming video use is due to (i) improvements in Internet connections and service offerings [3], (ii) the rapid development of video entertainment (e.g., video games and cloud gaming services), (iii) the deployment of Ultra High-Definition (UHD, 4K, 8K), Virtual Reality (VR), and Augmented Reality (AR), and (iv) an increasing number of video surveillance and IoT applications [4]. Notably, video processing and streaming generate 306 million tons of CO2, which is 20% of digital technology’s total GHG emissions and nearly 1% of worldwide GHG emissions [2].

While research has shown that the carbon footprint of video streaming has been decreasing in recent years [5], there is still a pressing need to invest in research and development of efficient next-generation computing and communication technologies for video processing. This reduction is due to efficiency trends in cloud computing (e.g., renewable power), modern mobile networks (e.g., growth in Internet speed), and end-user devices (e.g., users prefer less energy-intensive mobile and tablet devices over larger PCs and laptops). However, since the demand for video streaming is growing dramatically, the risk of increased energy consumption rises as well.

Investigating energy efficiency during video streaming is essential to developing sustainable video technologies. The processes from video encoding to decoding and displaying the video on the end user’s screen require electricity, which results in CO2 emissions. Consequently, the key question becomes: “How can we improve energy efficiency for video streaming systems while maintaining an acceptable Quality of Experience (QoE)?”.

Challenges and Opportunities 

In this section, we outline challenges and opportunities for tackling the emissions associated with video streaming across (i) data centers, (ii) networks, and (iii) end-user devices [5], as presented in Figure 1.

Figure 1. Challenges and opportunities to tackle emissions for video streaming.

Data centers are responsible for the video encoding process and the storage of video content. Growing video data traffic drives their workloads, with total power consumption estimated to exceed 1,000 TWh by 2025 [6]. Data centers are also the primary target of regulatory initiatives: national and regional policies have been established in response to their growing number and the concern over their energy consumption [7].

  • Suitable cloud services: Select energy-optimized and sustainable cloud services to help reduce CO2 emissions. Recently, IT service providers have started innovating in energy-efficient hardware by designing highly efficient Tensor Processing Units, high-performance servers, and machine-learning approaches to optimize cooling automatically to reduce the energy consumption in their data centers [8]. In addition to advances in hardware designs, it is also essential to consider the software’s potential for improvements in energy efficiency [9].
  • Low-carbon cloud regions: IT service providers offer cloud computing platforms in multiple regions delivered through a global network of data centers. Various power plants (e.g., fuel, natural gas, coal, wind, sun, and water) supply electricity to run these data centers, generating different amounts of greenhouse gases. Therefore, it is essential to consider how much carbon is emitted by the power plants that generate the electricity for the selected cloud region. A cloud region should thus be evaluated by its entire carbon footprint, including its source of energy production.
  • Efficient and fast transcoders (and encoders): Another essential factor to be considered is using efficient transcoders/encoders that can transcode/encode the video content faster and with less energy consumption but still at an acceptable quality for the end-user [10][11][12].
  • Optimizing the video encoding parameters: There is huge potential in reducing the overall energy consumption of video streaming by optimizing the encoding parameters, such as choosing a more power-efficient codec, resolution, frame rate, and bitrate, to lower the bitrates of encoded videos without affecting quality (a small illustrative sketch follows this list).
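As a rough illustration of this potential, the sketch below compares the total delivered data volume for two hypothetical encoding ladders that target similar perceptual quality, one based on an older codec and one on a more efficient codec. All bitrates and viewing-hour figures are assumptions made only for the example.

```python
# Illustrative comparison of delivered data volume for two hypothetical
# encoding ladders that target the same perceptual quality; the bitrates
# and viewing-hour figures are assumptions made for the sake of the example.

# bitrate in Mbps per rendition, older vs. more efficient codec
ladder_legacy = {"1080p": 5.0, "720p": 3.0, "480p": 1.5}
ladder_modern = {"1080p": 3.0, "720p": 1.8, "480p": 0.9}   # ~40% lower at similar quality

# assumed share of viewing hours per rendition over one month
viewing_hours = {"1080p": 60_000, "720p": 30_000, "480p": 10_000}

def delivered_terabytes(ladder, hours):
    """Total data volume delivered for the given ladder, in TB."""
    bits = sum(ladder[r] * 1e6 * hours[r] * 3600 for r in ladder)
    return bits / 8 / 1e12

legacy_tb = delivered_terabytes(ladder_legacy, viewing_hours)
modern_tb = delivered_terabytes(ladder_modern, viewing_hours)
print(f"legacy: {legacy_tb:.1f} TB, modern: {modern_tb:.1f} TB, "
      f"saving: {100 * (1 - modern_tb / legacy_tb):.0f}%")
```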

The next component within the video streaming process is video delivery within heterogeneous networks. Two essential energy consumption factors for video delivery are the network technology used and the amount of data to be transferred.

  • Energy-efficient network technology for video streaming: The network technology used to transmit data from the data center to the end-users determines energy performance, since the networks’ GHG emissions vary widely [5]. A fiber-optic network is the most climate-friendly transmission technology, with only 2 grams of CO2 per hour of HD video streaming, while a copper cable (VDSL) generates twice as much (i.e., 4 grams of CO2 per hour). UMTS data transmission (3G) produces 90 grams of CO2 per hour, which drops to 5 grams of CO2 per hour when using 5G [13]. Therefore, research shows that expanding fiber-optic networks and 5G transmission technology is promising for climate change mitigation [5].
  • Lower data transmission: Reducing the amount of data transmitted lowers energy consumption. Therefore, the amount of video data needs to be reduced without compromising video quality [2]. The video data per hour ranges from 30 MB/hr for very low resolutions to 7 GB/hr for UHD resolutions, and a higher data volume requires more transmission energy (see the sketch after this list). Another possibility is the reduction of unnecessary video usage, for example, by avoiding autoplay and embedded videos, which aim to maximize the quantity of content consumed. Broadcasting platforms also play a central role in how viewers consume content and, thus, in the impact on the environment [2].
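The per-hour figures cited above already allow a back-of-the-envelope comparison of access technologies. The sketch below simply multiplies those grams-per-hour values by viewing time; actual emissions depend on the energy mix and network load, so it should be read as an illustration only.

```python
# Back-of-the-envelope CO2 estimate per access technology, using the
# grams-per-hour figures for HD streaming cited above ([5], [13]); actual
# values depend on the energy mix and network load, so treat this as a sketch.

CO2_G_PER_HOUR_HD = {
    "fiber (FTTH)": 2,
    "copper (VDSL)": 4,
    "3G (UMTS)": 90,
    "5G": 5,
}

def streaming_co2_kg(technology, hours):
    """CO2 in kilograms for `hours` of HD streaming over the given access network."""
    return CO2_G_PER_HOUR_HD[technology] * hours / 1000.0

# Example: one household watching 4 hours of HD video per day for a month
for tech in CO2_G_PER_HOUR_HD:
    print(f"{tech:>14}: {streaming_co2_kg(tech, hours=4 * 30):.2f} kg CO2/month")
```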

The last component of the video streaming process is video usage at the end-user device, including decoding and displaying the video content on the end-user devices like personal computers, laptops, tablets, phones, or television sets.

  • End-user devices: Research works [3][14] show that end-user devices and decoding hardware account for the greatest portion of energy consumption and CO2 emissions in video streaming. Thus, most reduction strategies lie within the energy efficiency of the end-user devices, for instance, by improving screen display technologies or shifting from desktops to more energy-efficient laptops, tablets, and smartphones.
  • Streaming parameters: The energy consumption of the video decoding process depends on video streaming parameters, much like the end-user QoE does. Thus, it is important to intelligently select video streaming parameters to jointly optimize the QoE and the power efficiency of the end-user device. Moreover, the underlying video encoding parameters also impact the energy used for decoding.
  • End-user device environment: A wide variety of browsers (including legacy versions), codecs, and operating systems, together with the hardware (e.g., CPU, display), determine the final power consumption.

In this column, we argue that addressing these challenges and opportunities for green video streaming can yield insights that further drive the adoption of novel, more sustainable usage patterns and reduce the overall energy consumption of video streaming without sacrificing end-users’ QoE.

End-to-end video streaming: While we have highlighted the main factors of each video streaming component that impact energy consumption in order to create a generic power consumption model, video streaming and its impact need to be studied and analyzed holistically across all components. Implementing a dedicated system for optimizing energy consumption may introduce additional processing on top of regular service operations if not done efficiently. For instance, overall traffic will be reduced when using the most recent video codec (e.g., VVC) compared to AVC (the most widely deployed video codec to date), but its encoding and decoding complexity will increase and, thus, require more energy.

Optimizing the video streaming parameters: There is a huge potential in optimizing the overall energy consumption for video service providers by optimizing the video streaming parameters, including choosing a more power-efficient codec implementation, resolution, frame rate, and bitrate, among other parameters.

GAIA: Intelligent Climate-Friendly Video Platform 

Recently, we started the “GAIA” project to research the aspects mentioned before. In particular, the GAIA project researches and develops a climate-friendly adaptive video streaming platform that provides (i) complete energy awareness and accountability, including energy consumption and GHG emissions along the entire delivery chain, from content creation and server-side encoding to video transmission and client-side rendering; and (ii) reduced energy consumption and GHG emissions through advanced analytics and optimizations on all phases of the video delivery chain.

Figure 2. GAIA high-level approach for the intelligent climate-friendly video platform.

As shown in Figure 2, the research considered in GAIA comprises benchmarking, energy-aware and machine learning-based modeling, optimization algorithms, monitoring, and auto-tuning.

  • Energy-aware benchmarking involves a functional requirement analysis of the leading project objectives, measurement of the energy for transcoding video tasks on various heterogeneous cloud and edge resources, video delivery, and video decoding on end-user devices. 
  • Energy-aware modelling and prediction uses the benchmarking results and the data collected from real deployments to build regression and machine learning models (a minimal illustration follows this list). The models predict the energy consumed by heterogeneous cloud and edge resources, possibly distributed across various clouds and delivery networks. We further provide energy models for video distribution on different channels and consider the relation between bitrate, codec, and video quality.
  • Energy-aware optimization and scheduling researches and develops appropriate generic algorithms according to the requirements for real-time delivery (including encoding and transmission) of video processing tasks (i.e., transcoding) deployed on heterogeneous cloud and edge infrastructures. 
  • Energy-aware monitoring and auto-tuning perform dynamic real-time energy monitoring of the different video delivery chains for improved data collection, benchmarking, modelling and optimization. 
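To illustrate the modelling step listed above, the following is a minimal sketch of fitting a regression that predicts transcoding energy from encoding parameters. The feature set and all data points are synthetic assumptions; they do not reflect GAIA’s actual benchmarks or models.

```python
# Minimal sketch of energy-aware modelling: fit a regression that predicts
# transcoding energy from encoding parameters. The data below is synthetic
# and the feature set is an assumption; it does not reflect GAIA's models.
import numpy as np
from sklearn.linear_model import LinearRegression

# features: [bitrate_mbps, resolution_megapixels, duration_s]
X = np.array([
    [1.5, 0.9, 60], [3.0, 2.1, 60], [6.0, 2.1, 60],
    [6.0, 8.3, 60], [12.0, 8.3, 60], [12.0, 8.3, 120],
])
# measured energy per transcoding job in joules (synthetic values)
y = np.array([210.0, 340.0, 450.0, 980.0, 1250.0, 2480.0])

model = LinearRegression().fit(X, y)

# predict the energy of an unseen job: 8 Mbps, 4K (8.3 MP), 90 seconds
estimate_j = model.predict(np.array([[8.0, 8.3, 90]]))[0]
print(f"estimated transcoding energy: {estimate_j:.0f} J")
```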

GMSys 2023: First International ACM Green Multimedia Systems Workshop

Finally, we would like to use this opportunity to highlight and promote the first International ACM Green Multimedia Systems Workshop (GMSys’23). GMSys’23 takes place in Vancouver, Canada, in June 2023, co-located with ACM Multimedia Systems 2023. We expect a series of at least three consecutive workshops, since this topic may critically impact the innovation and development of climate-effective approaches. The workshop strongly focuses on recent developments and challenges for energy reduction in multimedia systems, as well as on innovations, concepts, and energy-efficient solutions from video generation to processing, delivery, and consumption. Please see the Call for Papers for further details.

Final Remarks 

In both the GAIA project and the ACM GMSys workshop, various actions and initiatives put energy efficiency-related topics for video streaming at the center stage of research and development. In this column, we highlighted the major video streaming components and the challenges and opportunities they present for enabling energy-efficient, sustainable video streaming, sometimes also referred to as green video streaming. A thorough understanding of the key issues and meaningful insights into them are essential for successful research.

References

[1] IPCC, 2021: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, In press, doi:10.1017/9781009157896.
[2] M. Efoui-Hess, Climate Crisis: the unsustainable use of online video – The practical case for digital sobriety, Technical Report, The Shift Project, July, 2019.
[3] IEA (2020), The carbon footprint of streaming video: fact-checking the headlines, IEA, Paris https://www.iea.org/commentaries/the-carbon-footprint-of-streaming-video-fact-checking-the-headlines.
[4] Cisco Annual Internet Report (2018–2023) White Paper, 2018 (updated 2020), https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html.
[5] C. Fletcher, et al., Carbon impact of video streaming, Technical Report, 2021, https://s22.q4cdn.com/959853165/files/doc_events/2021/Carbon-impact-of-video-streaming.pdf.
[6] Huawei Releases Top 10 Trends of Data Center Facility in 2025, 2020, https://www.huawei.com/en/news/2020/2/huawei-top10-trends-datacenter-facility-2025.
[7] COMMISSION REGULATION (EC) No 642/2009, Official Journal of the European Union, 2009, https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2009:191:0042:0052:EN:PDF.
[8] U. Hölzle, Data centers are more energy efficient than ever, Technical Report, 2020, https://blog.google/outreach-initiatives/sustainability/data-centers-energy-efficient/.
[9] Charles E. Leiserson, Neil C. Thompson, Joel S. Emer, Bradley C. Kuszmaul, Butler W. Lampson, Daniel Sanchez, and Tao B. Schardl. 2020. There’s plenty of room at the Top: What will drive computer performance after Moore’s law? Science 368, 6495 (2020), eaam9744. DOI:https://doi.org/10.1126/science.aam9744
[10] M. G. Koziri, P. K. Papadopoulos, N. Tziritas, T. Loukopoulos, S. U. Khan and A. Y. Zomaya, “Efficient Cloud Provisioning for Video Transcoding: Review, Open Challenges and Future Opportunities,” in IEEE Internet Computing, vol. 22, no. 5, pp. 46-55, Sep./Oct. 2018, doi: 10.1109/MIC.2017.3301630.
[11] J. -F. Franche and S. Coulombe, “Fast H.264 to HEVC transcoder based on post-order traversal of quadtree structure,” 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 2015, pp. 477-481, doi: 10.1109/ICIP.2015.7350844.
[12] E. de la Torre, R. Rodriguez-Sanchez and J. L. Martínez, “Fast video transcoding from HEVC to VP9,” in IEEE Transactions on Consumer Electronics, vol. 61, no. 3, pp. 336-343, Aug. 2015, doi: 10.1109/TCE.2015.7298293.
[13] Federal Ministry for the Environment, Nature Conservation and Nuclear Safety, Video streaming: data transmission technology crucial for climate footprint, No. 144/20, 2020, https://www.bmuv.de/en/pressrelease/video-streaming-data-transmission-technology-crucial-for-climate-footprint/
[14] Malmodin, Jens, and Dag Lundén. 2018. “The Energy and Carbon Footprint of the Global ICT and E&M Sectors 2010–2015” Sustainability 10, no. 9: 3027. https://doi.org/10.3390/su10093027



Towards the design and evaluation of more sustainable multimedia experiences: which role can QoE research play?

In this column, we reflect on the environmental impact and broader sustainability implications of resource-demanding digital applications and services such as video streaming, VR/AR/XR and videoconferencing. We put emphasis not only on the experiences and use cases they enable but also on the “cost” of always striving for high Quality of Experience (QoE) and better user experiences. Starting by sketching the broader context, our aim is to raise awareness about the role that QoE research can play in the context of several of the United Nations’ Sustainable Development Goals (SDGs), either directly (e.g., SDG 13 “climate action”) or more indirectly (e.g., SDG 3 “good health and well-being” and SDG 12 “responsible consumption and production”).

UN Sustainable Development Goals (figure taken from https://www.un.org/en/sustainable-development-goals)

The ambivalent role of digital technology

One of the latest reports from the Intergovernmental Panel on Climate Change (IPCC) confirmed the urgency of drastically reducing emissions of carbon dioxide and other human-induced greenhouse gas (GHG) emissions in the years to come (IPCC, 2021). This report, directly relevant in the context of SDG 13 “climate action”, confirmed the undeniable and negative human influence on global warming and the need for collective action. While the potential of digital technology (and ICT more broadly) for sustainable development has been on the agenda for some time, the context of the COVID-19 pandemic has made it possible to better understand a set of related opportunities and challenges.

First of all, it has been observed that long-lasting lockdowns and restrictions due to the COVID-19 pandemic and its aftermath have triggered a drastic increase in internet traffic (see e.g., Feldmann, 2020). This holds particularly for the use of videoconferencing and video streaming services for various purposes (e.g., work meetings, conferences, remote education, and social gatherings, to name a few). At the same time, the associated drastic reduction of global air traffic and other types of traffic (e.g., road traffic), with their known environmental footprint, has had undeniable positive effects on the environment (e.g., reduced air pollution, better water quality; see e.g., Khan et al., 2020). Despite this potential, the environmental gains enabled by digital technology and recent advances in energy efficiency are threatened by digital rebound effects due to increased energy consumption and energy demands related to ICT (Coroama et al., 2019; Lange et al., 2020). In the context of ever-increasing consumption, there has for instance been a growing focus in the literature on the negative environmental impact of unsustainable use and viewing practices such as binge-watching, multi-watching and media-multitasking, which have become more common over the last years (see e.g., Widdicks, 2019). While it is important to recognize that the overall emission factor will vary depending on the mix of energy generation technologies used and the region of the world (Preist et al., 2014), the above observation also fits with other recent reports and articles, which expect the energy demands linked to digital infrastructure, digital services and their use to further expand, and the greenhouse gas emissions of ICT relative to the overall worldwide footprint to significantly increase (see e.g., Belkhir et al., 2018, Morley et al., 2018, Obringer et al., 2021). Hence, these and other recent forecasts show a growing and even unsustainably high carbon footprint of ICT in the mid-term future, due to, among others, the increasing energy demand of data centres (including e.g., the energy needed for cooling) and the associated traffic (Preist et al., 2016).

Another set of challenges that became more apparent can be linked to the human mental resources and health involved, as well as to environmental effects. Here, there is a link to the abovementioned Sustainable Development Goals 3 (good health and well-being) and 12 (responsible consumption and production). For instance, the transition to “more sustainable” digital meetings, online conferences, and online education has also pointed to a range of challenges from a user point of view. “Zoom fatigue”, a prominent example, illustrates the need to strike the right balance between the more sustainable character of experiences provided by and enabled through technology and how these are actually experienced and perceived from a user point of view (Döring et al., 2022; Raake et al., 2022). Another example is binge-watching behavior, which can in certain cases have a positive effect on an individual’s well-being, but has also been shown to have negative effects through e.g., feelings of guilt and goal conflicts (Granow et al., 2018) or through problematic involvement resulting in e.g., chronic sleep issues (Flayelle, 2020).

From the “production” perspective, recent work has looked at the growing environmental impact of commonly used cloud-based services such as video streaming (see e.g., Chen et al., 2020, Suski et al., 2020, The Shift Project, 2021) and the underlying infrastructure consisting of data centers, transport networks and end devices (Preist et al., 2016, Suski et al., 2020, Preist et al., 2014). As a result, the combination of technological advancements and user-centered approaches aiming to always improve the experience may have undesired environmental consequences. This includes stimulating increased user expectations (e.g., higher video quality, increased connectivity and availability, almost zero latency, …) and triggering increased use and unsustainable use practices, resulting in potential rebound effects due to increased data traffic and electricity demand.

These observations have started to culminate in a plea for a shift towards a more sustainable and humanity-centered paradigm, which considers to a much larger extent how digital consumption and increased data demand impact individuals, society and our planet (Widdicks et al., 2019, Preist et al., 2016, Hazas & Nathan, 2018). Here, it is obvious that experience, consumption behavior and energy consumption are tightly intertwined.

How does QoE research fit into this picture?

This leads to the question of where research on Quality of Experience and its underlying goals fits into this broader picture, to what extent related topics have gained attention so far, and how future research can potentially have an even larger impact.

As the COVID-19-related examples above already indicated, QoE research, through its focus on improving the experience for users in e.g., various videoconferencing-based scenarios or immersive technology-related use cases, already plays and will continue to play a key role in enabling more sustainable practices in various domains (e.g., remote education, online conferences, digital meetings, and thus reducing unnecessary travel, …) and linking up to various SDGs. A key challenge here is to enable experiences that become so natural and attractive that they may even become preferred in the future. While this is a huge and important topic, we refrain from discussing it further in this contribution, as it is already a key focus within the QoE community. Instead, in the following, we first of all reflect on the extent to which environmental implications of multimedia services have explicitly been on the agenda of the QoE community in the past, what the focus is in more recent work, and what is currently not yet sufficiently addressed. Secondly, we consider a broader set of areas and concrete topics in which QoE research can be related to environmental and broader sustainability-related concerns.

Traditionally, QoE research has predominantly focused on gathering insights that can guide the optimization of technical parameters and allocation of resources at different layers, while still ensuring a high QoE from a user point of view. A main underlying driver in this respect has traditionally been the related business perspective: optimizing QoE as a way to increase profitability and users/customers’ willingness to pay for better quality  (Wechsung, 2014). While better video compression techniques or adaptive video streaming may allow the saving of resources, which overall may lead to environmental gains, the latter has traditionally not been a main or explicit motivation.

There are, however, some exceptions in earlier work, where the focus was more explicitly on the link between energy consumption-related aspects, energy efficiency and QoE. The study by Ickin et al. (2012), for instance, aimed to investigate QoE influence factors of mobile applications and revealed the key role of the battery in successful QoE provisioning. It has also been observed that energy modelling and saving efforts are typically geared towards the immediate benefits of end users, while less attention is paid to the digital infrastructure (Popescu, 2018). Efforts were further made in the past to describe, analyze and model the trade-off between QoE and energy consumption (QoE perceived per user per Joule, QoEJ) (Popescu, 2018) or power consumption (QoE perceived per user per Watt, QoEW) (Zhang et al., 2013), as well as to optimize resource consumption so as to avoid sources of annoyance (see e.g., Fiedler et al., 2016). While these early efforts did not yet result in a generic end-to-end QoE-energy model that can be used as a basis for optimizations, they provide a useful basis to build upon.

A more recent example (Hossfeld et al., 2022) in the context of video streaming services looked into possible trade-offs between varying levels of QoE and the resulting energy consumption, which is further mapped to CO₂ emissions (taking the EU emission parameter as input, as this, as mentioned, depends on the overall energy mix of green and non-renewable energy sources). Their visualization model further considers parameters such as the type of device and type of network, and while it is a simplification of the multitude of possible scenarios and factors, it illustrates that it is possible to identify areas where energy consumption can be reduced while ensuring an acceptable QoE.

Another recent work (Herglotz et al., 2022) jointly analyzed end-user power efficiency and QoE related to video streaming, based on actual real-world data (i.e., YouTube streaming events). More specifically, power consumption was modelled and user-perceived QoE was estimated in order to identify where optimization is possible. They found that such optimization is indeed possible and pointed to the importance of the choice of video codec, video resolution, frame rate and bitrate in this respect.

These examples point to the potential to optimize at the “production” side; however, the focus has more recently also been extended to the actual use, user expectations and “consumption” side (Jiang et al., 2021, Lange et al., 2020, Suski et al., 2020, Elgaaied-Gambier et al., 2020). Various topics are explored in this respect, e.g., digital carbon footprint calculation at the individual level (Schien et al., 2013, Preist et al., 2014), consumer awareness and pro-environmental digital habits (Elgaaied-Gambier et al., 2020; Gnanasekaran et al., 2021), or the impact of user behavior (Suski et al., 2020). While we cannot discuss all of these in detail here, they are all based on the observation that there is a growing need to involve consumers and users in the collective challenge of reducing the impact of digital applications and services on the environment (Elgaaied-Gambier et al., 2020; Preist et al., 2016).

QoE research can play an important role here, extending the understanding of carbon footprint vs. QoE trade-offs to making users more aware of the actual “cost” of high QoE. A recent interview study with digital natives conducted by some of the co-authors of this column (Gnanasekaran et al., 2021) illustrated that many users are not aware of the environmental impact of their user behavior and expectations, and that even with such insights, drastic changes in behavior cannot be expected. The lack of technological understanding, public information and social awareness about the topic were identified as important factors. It is therefore of utmost importance to trigger more awareness and help users see and understand their carbon footprint related to e.g., the use of video streaming services (Gnanasekaran et al., 2021). This perspective is currently missing in the field of QoE, and we argue that QoE research could, in collaboration with other disciplines and by integrating insights from other fields, play an important role here.

In terms of the motivation for adopting pro-environmental digital habits, Gnanasekaran et al., (2021) found that several factors indirectly contribute to this goal, including the striving for personal well-being. Finally, the results indicate some willingness to change and make compromises (e.g., accepting a lower video quality), albeit not an unconditional one: the alignment with other goals (e.g., personal well-being) and the nature of the perceived sacrifice and its impact play a key role. A key challenge for future work is therefore to identify and understand concrete mechanisms that could trigger more awareness amongst users about the environmental and well-being impact of their use of digital applications and services, and those that can further motivate positive behavioral change (e.g., opting for use practices that limit one’s digital carbon footprint, mindful digital consumption). By investigating the impact of various more environmentally-friendly viewing practices on QoE (e.g., actively promoting standard definition video quality instead of HD, nudging users to switch to audio-only when a service like YouTube is used as background noise or stimulating users to switch to the least data demanding viewing configuration depending on the context and purpose), QoE research could help to bridge the gap towards actual behavioral change.

Final reflections and challenges for future research

We have argued that research on users’ Quality of Experience and overall User Experience can be highly relevant to gain insights that may further drive the adoption of new, more sustainable usage patterns and that can trigger more awareness of the implications of user expectations, preferences and actual use of digital services. However, the focus on continuously improving users’ Quality of Experience may also trigger unwanted rebound effects, leading to an overall higher environmental footprint due to the increased use of digital applications and services, and it may negatively impact users’ long-term well-being.

We, therefore, need to join efforts with other communities to challenge the current design paradigm from a more critical stance, partly as “it’s difficult to see the ecological impact of IT when its benefits are so blindingly bright” (Borning et al., 2020). Richer and better experiences may lead to increased, unnecessary or even excessive consumption, further increasing individuals’ environmental impact and potentially impeding long-term well-being. Open questions are, therefore: Which fields and disciplines should join forces to mitigate the above risks? And how can QoE research — directly or indirectly — contribute to the triggering of sustainable consumption patterns and the fostering of well-being?

Further, a key question is how energy efficiency can be improved for digital services such as video streaming, videoconferencing, online gaming, etc., while still ensuring an acceptable QoE. This also points to the question of which compromises can be made in trading QoE against its environmental impact (from “willingness to pay” to “willingness to sacrifice”), under which circumstances and how these compromises can be meaningfully and realistically assessed. In this respect, future work should extend the current modelling efforts to link QoE and carbon footprint, go beyond exploring what users are willing to (more passively) endure, and also investigate how users can be more actively motivated to adjust and lower their expectations and even change their behavior.

These and related topics will be on the agenda of the Dagstuhl seminar 23042 “Quality of Sustainable Experience” and the conference QoMEX 2023 “Towards sustainable and inclusive multimedia experiences”.

Conference QoMEX 2023 “Towards sustainable and inclusive multimedia experiences”

References

Belkhir, L., Elmeligi, A. (2018). “Assessing ICT global emissions footprint: Trends to 2040 & recommendations,” Journal of cleaner production, vol. 177, pp. 448–463.

Borning, A., Friedman, B., Logler, N. (2020). The ’invisible’ materiality of information technology. Communications of the ACM, 63(6), 57–64.

Chen, X., Tan, T., et al. (2020). Context-Aware and Energy-Aware Video Streaming on Smartphones. IEEE Transactions on Mobile Computing.

Coroama, V.C., Mattern, F. (2019). Digital rebound–why digitalization will not redeem us our environmental sins. In: Proceedings 6th international conference on ICT for sustainability. Lappeenranta. http://ceur-ws.org. vol. 238

Döring, N., De Moor, K., Fiedler, M., Schoenenberg, K., Raake, A. (2022). Videoconference Fatigue: A Conceptual Analysis. Int. J. Environ. Res. Public Health, 19(4), 2061 https://doi.org/10.3390/ijerph19042061

Elgaaied-Gambier, L., Bertrandias, L., Bernard, Y. (2020). Cutting the internet’s environmental footprint: An analysis of consumers’ self-attribution of responsibility. Journal of Interactive Marketing, 50, 120–135.

Feldmann, A., Gasser, O., Lichtblau, F., Pujol, E., Poese, I., Dietzel, C., Wagner, D., Wichtlhuber, M., Tapiador, J., Vallina-Rodriguez, N., Hohlfeld, O., Smaragdakis, G. (2020, October). The lockdown effect: Implications of the COVID-19 pandemic on internet traffic. In Proceedings of the ACM internet measurement conference (pp. 1-18).

Fiedler, M., Popescu, A., Yao, Y. (2016), “QoE-aware sustainable throughput for energy-efficient video streaming,” in 2016 IEEE International Conferences on Big Data and Cloud Computing (BDCloud), Social Computing and Networking (SocialCom), Sustainable Computing and Communications (SustainCom)(BDCloud-SocialCom-SustainCom). pp. 493–50

Flayelle, M., Maurage, P., Di Lorenzo, K.R., Vögele, C., Gainsbury, S.M., Billieux, J. (2020). Binge-Watching: What Do we Know So Far? A First Systematic Review of the Evidence. Curr Addict Rep 7, 44–60. https://doi.org/10.1007/s40429-020-00299-8

Gnanasekaran, V., Fridtun, H. T., Hatlen, H., Langøy, M. M., Syrstad, A., Subramanian, S., & De Moor, K. (2021). Digital carbon footprint awareness among digital natives: an exploratory study. In Norsk IKT-konferanse for forskning og utdanning (No. 1, pp. 99-112).

Granow, V.C., Reinecke, L., Ziegele, M. (2018): Binge-watching and psychological well-being: media use between lack of control and perceived autonomy. Communication Research Reports 35 (5), 392–401.

Hazas, M. and Nathan, L. (Eds.)(2018). Digital Technology and Sustainability. London: Routledge.

Herglotz, C., Springer, D., Reichenbach,  M., Stabernack B. and Kaup, A. (2018). “Modeling the Energy Consumption of the HEVC Decoding Process,” in IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 1, pp. 217-229, Jan. 2018, doi: 10.1109/TCSVT.2016.2598705.

Hossfeld, T., Varela, M., Skorin-Kapov, L. Heegaard, P.E. (2022). What is the trade-off between CO2 emission and videoconferencing QoE. ACM SIGMM records, https://records.sigmm.org/2022/03/31/what-is-the-trade-off-between-co2-emission-and-video-conferencing-qoe/

Ickin, S., Wac, K., Fiedler, M. and Janowski, L. (2012). “Factors influencing quality of experience of commonly used mobile applications,” IEEE Communications Magazine, vol. 50, no. 4, pp. 48–56.

IPCC, 2021: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, In press, doi:10.1017/9781009157896.

Jiang, P., Van Fan, Y., Klemes, J.J. (2021). Impacts of covid-19 on energy demand and consumption: Challenges, lessons and emerging opportunities. Applied energy, 285, 116441.

Khan, D., Shah, D. and Shah, S.S. (2020). “COVID-19 pandemic and its positive impacts on environment: an updated review,” International Journal of Environmental Science and Technology, pp. 1–10, 2020.

Lange, S., Pohl, J., Santarius, T. (2020). Digitalization and energy consumption. Does ICT reduce energy demand? Ecological Economics, 176, 106760.

Morley, J., Widdicks, K., Hazas, M. (2018). Digitalisation, energy and data demand: The impact of Internet traffic on overall and peak electricity consumption. Energy Research & Social Science, 38, 128–137.

Obringer, R., Rachunok, B., Maia-Silva, D., Arbabzadeh, M., Roshanak, N., Madani, K. (2021). The overlooked environmental footprint of increasing internet use. Resources, Conservation and Recycling, 167, 105389.

Popescu, A. (Ed.)(2018). Greening Video Distribution Networks, Springer.

Preist, C., Schien, D., Blevis, E. (2016). “Understanding and mitigating the effects of device and cloud service design decisions on the environmental footprint of digital infrastructure,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 1324–1337.

Preist, C., Schien, D., Shabajee, P. , Wood, S. and Hodgson, C. (2014). “Analyzing End-to-End Energy Consumption for Digital Services,” Computer, vol. 47, no. 5, pp. 92–95.

Raake, A., Fiedler, M., Schoenenberg, K., De Moor, K., Döring, N. (2022). Technological Factors Influencing Videoconferencing and Zoom Fatigue. arXiv:2202.01740, https://doi.org/10.48550/arXiv.2202.01740

Schien, D., Shabajee, P., Yearworth, M. and Preist, C. (2013), Modeling and Assessing Variability in Energy Consumption During the Use Stage of Online Multimedia Services. Journal of Industrial Ecology, 17: 800-813. https://doi.org/10.1111/jiec.12065

Suski, P., Pohl, J., Frick, V. (2020). All you can stream: Investigating the role of user behavior for greenhouse gas intensity of video streaming. In: Proceedings of the 7th International Conference on ICT for Sustainability. p. 128–138. ICT4S2020, Association for Computing Machinery, New York, NY, USA.

The Shift Project, Climate crisis: the unsustainable use of online video: Our new report on the environmental impact of ICT. https://theshiftproject.org/en/article/unsustainable-use-online-video/

Wechsung, I., De Moor, K. (2014). Quality of Experience Versus User Experience. In: Möller, S., Raake, A. (eds) Quality of Experience. T-Labs Series in Telecommunication Services. Springer, Cham. https://doi.org/10.1007/978-3-319-02681-7_3

Widdicks, K., Hazas, M., Bates, O., Friday, A. (2019). “Streaming, Multi-Screens and YouTube: The New (Unsustainable) Ways of Watching in the Home,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ser. CHI ’19. New York, NY, USA: Association for Computing Machinery, p. 1–13.

Zhang, X., Zhang, J., Huang, Y., Wang, W. (2013). “On the study of fundamental trade-offs between QoE and energy efficiency in wireless networks,” Transactions on Emerging Telecommunications Technologies, vol. 24, no. 3, pp. 259–265.

What is the trade-off between CO2 emission and video-conferencing QoE?

It is natural that users of multimedia services want the highest possible Quality of Experience (QoE) when using those services. This is especially so in contexts such as video-conferencing and video streaming services, which are nowadays a large part of many users’ daily lives, be it work-related Zoom calls or relaxing while watching Netflix. This has implications in terms of the energy consumed for the provision of those services (think of the cloud services involved, the networks, and the users’ own devices), and therefore it also has an impact on the resulting CO₂ emissions. In this column, we look at the potential trade-offs involved between varying levels of QoE (which for video services is strongly correlated with the bit rates used) and the resulting CO₂ emissions. We also look at other factors that should be taken into account when making decisions based on these calculations, in order to provide a more holistic view of the environmental impact of these types of services, and of whether they do have a significant impact.

Energy Consumption and CO2 Emissions for Internet Service Delivery

Understanding the footprint of Internet service delivery is a challenging task. On one hand, the infrastructure and software components involved in the service delivery need to be known. For a very fine-grained model, this requires knowledge of all components along the entire service delivery chain: end-user devices, fixed or mobile access network, core network, data center and Internet service infrastructure. Furthermore, the footprint may need to consider the CO₂ emissions for producing and manufacturing the hardware components as well as the CO₂ emissions during runtime. Life cycle assessment is then necessary to obtain the CO₂ emissions per year for hardware production. However, one may argue that the infrastructure is already there, and therefore the focus here is on the energy consumption and CO₂ emissions during runtime and delivery of the services. This is also the approach we follow to provide quantitative numbers of energy consumption and CO₂ emissions for Internet-based video services. On the other hand, beyond the complexity of understanding and modelling the contributors to energy consumption and CO₂ emissions, quantitative numbers are needed.

To overcome this complexity, the literature typically considers key figures on the overall data traffic and service consumption times, aggregated over users and services over a longer period of time, e.g., one year. In addition, the total energy consumption of mobile operators and data centres is considered. Together with information on e.g., the number of base station sites, this gives some estimates, e.g., on the average power consumption per site or the average data traffic per base station site [Feh11]. As a result, we obtain measures such as energy per bit (Joule/bit) determining the energy efficiency of a network segment. In [Yan19], the annual energy consumption of Akamai is converted to power consumption and then divided by the maximum network traffic, which results again in the energy consumption per bit of Akamai’s data centers. Knowing the share of energy sources (nonrenewable energy, including coal, natural gas, oil, diesel, petroleum; renewable energy, including solar, geothermal, wind energy, biomass, hydropower from flowing water) allows relating the energy consumption to the total CO₂ emissions. For example, the total contribution from renewables exceeded 40% in 2021 in Germany and Finland, while Norway has about 60% and Croatia about 36% (statistics from 2020).

A detailed model of the total energy consumption of mobile network services and applications is provided in [Yan19]. Their model structure considers important factors from each network segment, from cloud to core network, mobile network, and end-user devices. Furthermore, service-specific energy consumption figures are provided. They found that there are strong differences depending on the service type and the resulting data traffic pattern. However, key factors are the amount of data traffic and the duration of the services. They also consider different end-to-end network topologies (user-to-data center, user-to-user via data center, user-to-user and P2P communication). Their model of the total energy consumption is expressed as the sum of the energy consumption of the different segments:

  • Smartphone: service-specific energy depends, among other factors, on the CPU usage and the network usage (e.g., 4G) over the duration of use;
  • Base station and access network: data traffic and signalling traffic over the duration of use;
  • Wireline core network: service-specific energy consumption of a mobile service, taking into account the data traffic volume and the energy per bit;
  • Data center: the energy per bit of the data center multiplied by the data traffic volume of the mobile service.

The Shift Project [TSP19] provides a similar model, called the “1 Byte Model”. The computation of energy consumption is transparently provided in calculation sheets and has been discussed by the scientific community. As a result of these discussions [Kam20a, Kam20b], an updated model was released [TSP20], clarifying a simple bit/byte conversion issue. The suggested models in [TSP20, Kam20b] finally lead to comparable numbers in terms of energy consumption and CO₂ emission. As a side remark: transparency and reproducibility are key for developing such complex models!

The basic idea of the 1 Byte Model for computing energy consumption is to take into account the time t of Internet service usage and the overall data volume v. The time of use is directly related to the energy consumption of the display of an end-user device, but also to the allocation of network resources. The data volume transmitted through the network, as well as the data generated or processed for cloud services, additionally drives the energy consumption. The model does not differentiate between Internet services, but different services will result in different traffic volumes over the time of use. Then, for each segment i (device, network, cloud), a linear model E_i(t,v) = a_i * t + b_i * v + c_i is provided to quantify the energy consumption, with the coefficients for each segment provided by [TSP20]. The overall energy consumption is then E_total = E_device + E_network + E_cloud.

CO₂ emission is then again a linear model of the total energy consumption (over the time of use of a service), which depends on the share of nonrenewable and renewable energies. Again, The Shift Project derives such coefficients for different countries, and we finally obtain CO2 = k_country * E_total.
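To make the structure of the model concrete, a minimal sketch of the 1 Byte Model as described above is given below. The per-segment coefficients and country emission factors are placeholders only; the actual values are provided in the calculation sheets of [TSP20].

```python
# Minimal sketch of the 1 Byte Model structure described above: per-segment
# linear energy models E_i(t, v) = a_i*t + b_i*v + c_i, summed and converted
# to CO2 via a country-specific factor. The coefficient values below are
# placeholders only; the actual numbers are given in the [TSP20] sheets.

SEGMENTS = {
    #            a_i [Wh/h]  b_i [Wh/GB]  c_i [Wh]
    "device":   (30.0,        0.0,         0.0),
    "network":  ( 0.0,       60.0,         0.0),
    "cloud":    ( 0.0,       40.0,         0.0),
}

K_COUNTRY_G_PER_WH = {"EU": 0.28, "US": 0.49, "China": 0.68}  # placeholder factors

def energy_wh(time_h, volume_gb):
    """E_total = E_device + E_network + E_cloud for one usage session."""
    return sum(a * time_h + b * volume_gb + c for a, b, c in SEGMENTS.values())

def co2_g(time_h, volume_gb, country="EU"):
    """CO2 = k_country * E_total."""
    return K_COUNTRY_G_PER_WH[country] * energy_wh(time_h, volume_gb)

# Example: a 2-hour session transferring 3 GB
print(energy_wh(2, 3), "Wh,", co2_g(2, 3), "g CO2")
```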

The Trade-off between QoE and CO2 Emissions

As a use case, we consider hosting a scientific conference online through video-conferencing services. Assume there are 200 conference participants attending the video-conferencing session. The conference lasts for one week, with 6 hours of online program per day.  The video conference software requires the following data rates for streaming the sessions (video including audio and screen sharing):

  • high-quality video: 1.0 Mbps
  • 720p HD video: 1.5 Mbps
  • 1080p HD video: 3 Mbps

However, group video calls require even higher bandwidth consumption. To make such experiences more immersive, even higher bit rates may be necessary, for instance, if using VR systems for attendance.

A simple QoE model may map the video bit rate of the current video session to a mean opinion score (MOS). [Lop18] provides a regression model MOS(x) depending on the video bit rate x in Mbps: MOS(x) = m_1 log x + m_2.

Then, we can connect the QoE model with the energy consumption and CO₂ emission model from above in the following way. We assume a user attending the conference for a time t. With a video bit rate x, the resulting data traffic is v = x*t. These input parameters are then used in the 1 Byte Model for a particular device (laptop, smartphone), type of network (wired, wifi, mobile), and country (EU, US, China).
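A minimal sketch of this connection is given below; it restates the placeholder energy and emission coefficients from the previous snippet so that it runs standalone. The MOS coefficients are illustrative choices that roughly reproduce the values discussed next (a MOS of about 4 at 4.5 Mbps and about 4.75 at 11 Mbps); they are not the fitted parameters from [Lop18], and the absolute energy and CO₂ numbers are only as meaningful as those placeholders.

```python
# Connecting the QoE model to the energy/CO2 model. The MOS coefficients are
# illustrative choices that roughly reproduce the numbers discussed below
# (MOS ~4 at 4.5 Mbps, ~4.75 at 11 Mbps); they are not the fitted [Lop18]
# parameters. energy_wh() and co2_g() repeat the placeholder model above.
import math

def energy_wh(time_h, volume_gb):                      # placeholder 1 Byte Model
    return 30.0 * time_h + (60.0 + 40.0) * volume_gb   # device + network + cloud

def co2_g(time_h, volume_gb, k_country=0.28):          # placeholder EU factor
    return k_country * energy_wh(time_h, volume_gb)

def mos(bitrate_mbps, m1=0.84, m2=2.74):
    return min(5.0, m1 * math.log(bitrate_mbps) + m2)

def session_footprint(bitrate_mbps, time_h):
    volume_gb = bitrate_mbps * time_h * 3600 / 8 / 1000  # v = x * t, Mbit -> GB
    return mos(bitrate_mbps), energy_wh(time_h, volume_gb), co2_g(time_h, volume_gb)

# one participant attending 6 hours per day for 5 days, at two quality levels
for bitrate in (4.5, 11.0):
    m, e, c = session_footprint(bitrate, time_h=6 * 5)
    print(f"{bitrate:>4} Mbps -> MOS {m:.2f}, {e / 1000:.1f} kWh, {c / 1000:.2f} kg CO2")
```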

Figure 1 shows the trade-off between the MOS and energy consumption (left y-axis). The energy consumption is mapped to CO₂ emission by assuming the corresponding parameter for the EU, and that the conference participants are all connected with a laptop. It can be seen that there is a strong increase in energy consumption and CO₂ emission in order to reach the best possible QoE. The MOS score of 4.75 is reached if a video bit rate of roughly 11 Mbps is used. However, with 4.5 Mbps, a MOS score of 4 is already reached according to that logarithmic model. This logarithmic behaviour is a typical observation in QoE and is connected to the Weber-Fechner law, see [Rei10]. As a consequence, we may significantly save energy and CO₂ when not providing the maximum QoE, but “only” good quality (i.e., MOS score of 4). The meaning of the MOS ratings is 5=Excellent, 4=Good, 3=Fair, 2=Poor, 1=Bad quality.

Figure 1: Trade-off between MOS and energy consumption or CO2 emission.

Figure 2, therefore, visualizes the gain when delivering the video in lower quality and at lower video bit rates, compared to the effort required for a MOS of 5. To get a better understanding of the meaning of those CO₂ numbers, we express the CO₂ gain in terms of thousands of kilometers driven by car. Since the CO₂ emission depends on the share of renewable energies, we consider different countries and the parameters provided in [TSP20]. We see that ensuring each conference participant a MOS score of 4 instead of 5 results in savings corresponding to driving approximately 40,000 kilometers by car, assuming the renewable energy share in the EU; this is the distance around the Earth! Assuming the energy share in China, this would save more than 90,000 kilometers. Of course, you could also cover 90,000 kilometers by walking, which would, however, require about 2 years of non-stop walking at a speed of 5 km/h. Note that this large amount of CO₂ emission is calculated assuming a data rate of 15 Mbps over 5 days (and 6 hours per day), resulting in about 40.5 TB of data that needs to be transferred to the 200 conference participants.
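The quoted data volume can be verified with a one-line calculation:

```python
# Quick check of the quoted data volume: 15 Mbps, 6 hours/day, 5 days,
# 200 participants.
bits = 15e6 * 6 * 3600 * 5 * 200   # bit/s * seconds of streaming * participants
print(bits / 8 / 1e12, "TB")        # -> 40.5 TB
```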

Figure 2: Relating the CO2 emission in different countries for achieving this MOS to the distance by travelling in a car (in thousands of kilometers).

Discussions

Raising awareness of CO₂ emissions due to Internet service consumption is crucial. The abstract CO₂ emission numbers may be difficult to grasp, but relating them to more familiar quantities helps to understand the impact individuals have. Of course, the provided numbers only give an impression, since the models are very simple and do not take various facets into account. However, the numbers nicely demonstrate the potential trade-off between the QoE of end-users and sustainability in terms of energy consumption and CO₂ emission. In fact, [Gna21] conducted qualitative interviews and found that there is a lack of awareness of the environmental impact of digital applications and services, even among digital natives. In particular, an underlying issue is that there is a lack of understanding among end-users as to how Internet service delivery works, which infrastructure components play a role along the end-to-end service delivery path, etc. Hence, the environmental impact is unclear for many users. Our aim is thus to contribute to overcoming this issue by raising awareness on this matter, starting with simplified models and visualizations.

[Gna21] also found that users indicate a certain willingness to make compromises between their digital habits and the environmental footprint. Given global climate changes and increased environmental awareness among the general population, such a trend in willingness to make compromises may be expected to further increase in the near future. Hence, it may be interesting for service providers to empower users to decide their environmental footprint at the cost of lower (yet still satisfactory) quality. This will also reduce the costs for operators and seems to be a win-win situation if properly implemented in Internet services and user interfaces.

Nevertheless, tremendous efforts are also currently being undertaken by Internet companies to become CO₂ neutral in the future. For example, Netflix claims in [Netflix21] that they plan to achieve net-zero greenhouse gas emissions by the close of 2022. Similarly, economic, societal, and environmental sustainability is seen as a key driver for 6G research and development [Mat21]. However, the time horizon is longer in some cases; e.g., a German provider claims it will reach climate neutrality for in-house emissions by 2025 at the latest and net-zero from production to the customer by 2040 at the latest [DT21]. Hence, given the urgency of the matter, end-users and all stakeholders along the service delivery chain can significantly contribute to speeding up the process of ultimately achieving net-zero greenhouse gas emissions.

References

  • [TSP19] The Shift Project, “Lean ICT: Towards digital sobriety,” directed by Hugues Ferreboeuf, Tech. Rep., 2019. Available online (last accessed: March 2022)
  • [Yan19] M. Yan, C. A. Chan, A. F. Gygax, J. Yan, L. Campbell, A. Nirmalathas, and C. Leckie, “Modeling the total energy consumption of mobile network services and applications,” Energies, vol. 12, no. 1, p. 184, 2019.
  • [TSP20] Maxime Efoui Hess and Jean-Noël Geist, “Did The Shift Project really overestimate the carbon footprint of online video? Our analysis of the IEA and Carbonbrief articles”, The Shift Project website, June 2020, available online (last accessed: March 2022) PDF
  • [Kam20a] George Kamiya, “Factcheck: What is the carbon footprint of streaming video on Netflix?”, CarbonBrief website, February 2020. Available online (last accessed: March 2022)
  • [Kam20b] George Kamiya, “The carbon footprint of streaming video: fact-checking the headlines”, IEA website, December 2020. Available online (last accessed: March 2022)
  • [Feh11] Fehske, A., Fettweis, G., Malmodin, J., & Biczok, G. (2011). The global footprint of mobile communications: The ecological and economic perspective. IEEE communications magazine, 49(8), 55-62.
  • [Lop18]  J. P. López, D. Martín, D. Jiménez, and J. M. Menéndez, “Prediction and modeling for no-reference video quality assessment based on machine learning,” in 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), IEEE, 2018, pp. 56–63.
  • [Gna21] Gnanasekaran, V., Fridtun, H. T., Hatlen, H., Langøy, M. M., Syrstad, A., Subramanian, S., & De Moor, K. (2021, November). Digital carbon footprint awareness among digital natives: an exploratory study. In Norsk IKT-konferanse for forskning og utdanning (No. 1, pp. 99-112).
  • [Rei10] Reichl, P., Egger, S., Schatz, R., & D’Alconzo, A. (2010, May). The logarithmic nature of QoE and the role of the Weber-Fechner law in QoE assessment. In 2010 IEEE International Conference on Communications (pp. 1-5). IEEE.
  • [Netflix21] Netflix: “Environmental Social Governance 2020”, Sustainability Accounting Standards Board (SASB) Report, March 2021. Available online (last accessed: March 2022)
  • [Mat21] Matinmikko-Blue, M., Yrjölä, S., Ahokangas, P., Ojutkangas, K., & Rossi, E. (2021). 6G and the UN SDGs: Where is the Connection?. Wireless Personal Communications, 121(2), 1339-1360.
  • [DT21] Hannah Schauff. Deutsche Telekom tightens its climate targets (2021, January). Available online (last accessed: March 2022)

Towards an updated understanding of immersive multimedia experiences

Bringing theories and measurement techniques up to date

Development of technology for immersive multimedia experiences

Immersive multimedia experiences, as the name suggests, are experiences in which media immerse users in an environment and allow them to interact with it. Through different technologies and approaches, immersive media emulate a physical world by means of a digital or simulated one, with the goal of creating a sense of immersion. As hardware and technologies develop further, these experiences are improving and the feeling of immersion becomes more advanced. Immersive multimedia experiences thus go beyond merely viewing a screen and unlock a much larger potential. This column aims to present and discuss the need for an up-to-date understanding of immersive media quality. First, the development of the constructs of immersion and presence over time is outlined. Second, influencing factors of immersive media quality are introduced, and related standardisation activities are discussed. Finally, the column concludes by summarising why an updated understanding of immersive media quality is urgent.

Development of theories covering immersion and presence

One of the first definitions of presence was established by Slater and Usoh as early as 1993, defining presence as a “sense of presence” in a virtual environment [Slater, 1993]. This is in line with other early definitions of presence and immersion. For example, Biocca defined immersion as a system property; such definitions focused on the ability of the system to provide technically accurate stimuli to users [Biocca, 1995]. Since technology was only slowly becoming capable of generating stimuli that mimic the real world, this was naturally the main focus of early definitions. Questionnaires to capture experienced immersion were introduced quite early on, such as the Igroup Presence Questionnaire (IPQ) [Schubert, 2001]. The early measurement methods likewise focused mainly on how well the real world was represented and perceived. With maturing technology, the focus shifted towards emotions and other cognitive phenomena beyond basic stimulus generation. For example, Baños and colleagues showed that experienced emotion and immersion are related to each other and also influence the sense of presence [Baños, 2004]. Newer definitions emphasise these cognitive aspects; e.g., Nilsson defines three factors that can lead to immersion: (i) technology, (ii) narratives, and (iii) challenges, of which only technology is a non-cognitive factor [Nilsson, 2016]. In 2018, Slater defined the place illusion as the illusion of being in a place while knowing one is not really there. This focuses on a cognitive construct, the suspension of disbelief, but still attributes the creation of the illusion mainly to system factors rather than cognitive ones [Slater, 2018]. In recent years, more and more activities have started to define how to measure immersive experiences as an overall construct.

Constructs of interest in relation to immersion and presence

This section discusses constructs and activities that are related to immersion and presence. In the beginning, subtypes of extended reality (XR) and the relation to user experience (UX) as well as quality of experience (QoE) are outlined. Afterwards, recent standardization activities related to immersive multimedia experiences are introduced and discussed.

Immersive multimedia experiences can be categorised along many different dimensions, but the most common recent distinction concerns interactivity: content can be produced for multi-directional viewing, as in 360-degree videos, or presented through interactive extended reality. XR technologies can be divided into mixed reality (MR), augmented reality (AR), augmented virtuality (AV), virtual reality (VR), and everything in between [Milgram, 1995]. Across all of these areas, immersive multimedia experiences have found a place on the market and provide new solutions to challenges in research as well as in industry, with a growing potential of being adopted in further areas [Chuah, 2018].

While discussing immersive multimedia experiences, it is important to address the user experience and quality of immersive multimedia experiences, which can be defined following the definition of quality of experience itself [White Paper, 2012]: a measure of the delight or annoyance of a customer’s experiences with a service, where in this case the service is an immersive multimedia experience. The same document also defines the terms experience and application, which can be applied to immersive multimedia experiences: an experience is an individual’s stream of perception and interpretation of one or multiple events, and an application is software and/or hardware that enables usage and interaction by a user for a given purpose [White Paper, 2012].

As already mentioned, immersive media experiences have an impact in many different fields, but one in which the impact of immersion and presence is particularly well investigated is gaming, along with the QoE models and optimizations that go with it. Of specific interest is the framework and standardization of subjective evaluation methods for gaming quality [ITU-T Rec. P.809, 2018]. This Recommendation provides instructions on how to assess gaming QoE using two possible test paradigms, i.e., passive viewing tests and interactive tests. However, even though detailed information about environments, test set-ups, questionnaires, and game selection material is available, it remains focused on the gaming field and on the concepts of flow and immersion in games themselves.

Together with gaming, another step towards defining and standardizing the infrastructure of audiovisual services in telepresence, immersive environments, and virtual and extended reality has been taken with respect to defining service scenarios of immersive live experience [ITU-T Rec. H.430.3, 2018], in which live sports, entertainment, and telepresence scenarios are described. This standardization describes several immersive live experience scenarios together with architectural frameworks for delivering such services, but it does not cover all possible use cases. When discussing immersive multimedia experiences, spatial audio, sometimes referred to as “immersive audio”, must also be mentioned, as it is one of the key features especially of AR and VR experiences [Agrawal, 2019]: in AR it can provide immersive experiences on its own, while in VR it enhances the visual information.

In order to correctly assess QoE or UX, one must be aware of all user, system, content, and context characteristics, because their actual state may influence the immersive multimedia experience of the user. These characteristics are therefore defined as influencing factors (IF), divided into Human IF, System IF, and Context IF, and are standardized for virtual reality services [ITU-T Rec. G.1035, 2021]. A particularly important Human IF is simulator sickness, as it specifically occurs as a result of exposure to immersive XR environments. Simulator sickness, also known as cybersickness or VR/AR sickness, is a visually induced motion sickness triggered by visual stimuli and caused by the sensory conflict between the vestibular and visual systems. To achieve the full potential of immersive multimedia experiences, this unwanted sensation must be reduced. However, while rapid changes in immersive technology and hardware improvements lead to better experiences, requirement specifications, designs, and development practices need to be updated continuously to keep up with best practices.

Conclusion – Towards an updated understanding

Considering the development of theories, definitions, and influencing factors around the constructs of immersion and presence, two different streams can be observed. First, most early theories focus quite strongly on the technical ability of systems. Second, cognitive aspects and non-technical influencing factors gain importance in newer work. Of course, in the 1990s technology was not yet ready to provide a good simulation of the real world; therefore, most activities, including measurement techniques, focused on improving the systems themselves. In the last few years, technology has developed rapidly, and a basic simulation of a virtual environment is now possible even on mobile devices such as the Oculus Quest 2. Although concepts such as immersion and presence from the past remain applicable, definitions of those concepts also need to capture today’s technology. Systems have meanwhile proven to provide good real-world simulations and give users a feeling of presence and immersion. While standardization activity is already quite strong and industry-driven, research in many disciplines, such as telecommunications, still mainly relies on old questionnaires. These questionnaires mostly focus on technological/real-world simulation constructs and are thus no longer able to differentiate products and services to a satisfactory extent. There are some newer attempts to create measurement tools, e.g., for social aspects of immersive systems [Li, 2019; Toet, 2021]. Measurement scales aiming to capture differences in the ability of systems to create realistic simulations can no longer reliably differentiate systems, simply because most systems already provide realistic real-world simulations. To advance research and industrial development in the field of immersive media, we need definitions of constructs and measurement methods that are appropriate for current technology, even if these newer measurements and definitions are not yet widely cited or used. This will lead to improved development and, in the future, better immersive media experiences.

One step towards understanding immersive multimedia experiences is reflected by QoMEX 2022. The 14th International Conference on Quality of Multimedia Experience will be held from September 5th to 7th, 2022 in Lippstadt, Germany. It will bring together leading experts from academia and industry to present and discuss current and future research on multimedia quality, Quality of Experience (QoE), and User Experience (UX). It will contribute to excellence in developing multimedia technology towards user well-being and foster the exchange between multidisciplinary communities. One core topic is immersive experiences and technologies as well as new assessment and evaluation methods, and both topics contribute to bringing theories and measurement techniques up to date. For more details, please visit https://qomex2022.itec.aau.at.

References

[Agrawal, 2019] Agrawal, S., Simon, A., Bech, S., Bærentsen, K., Forchhammer, S. (2019). “Defining Immersion: Literature Review and Implications for Research on Immersive Audiovisual Experiences.” In Audio Engineering Society Convention 147. Audio Engineering Society.
[Biocca, 1995] Biocca, F., & Delaney, B. (1995). Immersive virtual reality technology. Communication in the age of virtual reality, 15(32), 10-5555.
[Baños, 2004] Baños, R. M., Botella, C., Alcañiz, M., Liaño, V., Guerrero, B., & Rey, B. (2004). Immersion and emotion: their impact on the sense of presence. Cyberpsychology & behavior, 7(6), 734-741.
[Chuah, 2018] Chuah, S. H. W. (2018). Why and who will adopt extended reality technology? Literature review, synthesis, and future research agenda. Literature Review, Synthesis, and Future Research Agenda (December 13, 2018).
[ITU-T Rec. G.1035, 2021] ITU-T Recommendation G.1035 (2021). Influencing factors on quality of experience for virtual reality services, Int. Telecomm. Union, CH-Geneva.
[ITU-T Rec. H.430.3, 2018] ITU-T Recommendation H.430.3 (2018). Service scenario of immersive live experience (ILE), Int. Telecomm. Union, CH-Geneva.
[ITU-T Rec. P.809, 2018] ITU-T Recommendation P.809 (2018). Subjective evaluation methods for gaming quality, Int. Telecomm. Union, CH-Geneva.
[Li, 2019] Li, J., Kong, Y., Röggla, T., De Simone, F., Ananthanarayan, S., De Ridder, H., … & Cesar, P. (2019, May). Measuring and understanding photo sharing experiences in social Virtual Reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
[Milgram, 1995] Milgram, P., Takemura, H., Utsumi, A., & Kishino, F. (1995, December). Augmented reality: A class of displays on the reality-virtuality continuum. In Telemanipulator and telepresence technologies (Vol. 2351, pp. 282-292). International Society for Optics and Photonics.
[Nilsson, 2016] Nilsson, N. C., Nordahl, R., & Serafin, S. (2016). Immersion revisited: a review of existing definitions of immersion and their relation to different theories of presence. Human Technology, 12(2).
[Schubert, 2001] Schubert, T., Friedmann, F., & Regenbrecht, H. (2001). The experience of presence: Factor analytic insights. Presence: Teleoperators & Virtual Environments, 10(3), 266-281.
[Slater, 1993] Slater, M., & Usoh, M. (1993). Representations systems, perceptual position, and presence in immersive virtual environments. Presence: Teleoperators & Virtual Environments, 2(3), 221-233.
[Toet, 2021] Toet, A., Mioch, T., Gunkel, S. N., Niamut, O., & van Erp, J. B. (2021). Holistic Framework for Quality Assessment of Mediated Social Communication.
[Slater, 2018] Slater, M. (2018). Immersion and the illusion of presence in virtual reality. British Journal of Psychology, 109(3), 431-433.
[White Paper, 2012] Qualinet White Paper on Definitions of Quality of Experience (2012). European Network on Quality of Experience in Multimedia Systems and Services (COST Action IC 1003), Patrick Le Callet, Sebastian Möller and Andrew Perkis, eds., Lausanne, Switzerland, Version 1.2, March 2013.
[White Paper, 2020] Perkis, A., Timmerer, C., Baraković, S., Husić, J. B., Bech, S., Bosse, S., … & Zadtootaghaj, S. (2020). QUALINET white paper on definitions of immersive media experience (IMEx). arXiv preprint arXiv:2007.07032.

MPEG Visual Quality Assessment Advisory Group: Overview and Perspectives

Introduction

Perceived visual quality is of utmost importance in the context of visual media compression, such as 2D, 3D, immersive video, and point clouds. The trade-off between compression efficiency and computational/implementation complexity has a crucial impact on the success of a compression scheme. This specifically holds for the development of visual media compression standards, which typically aim at maximum compression efficiency using state-of-the-art coding technology. In MPEG, the subjective and objective assessment of visual quality has always been an integral part of the standards development process. Due to the significant effort of formal subjective evaluations, the standardization process typically relies on such formal tests in the starting phase and for verification, while objective metrics are used during the development phase. In the new MPEG structure, established in 2020, a dedicated advisory group has been installed for the purpose of providing, maintaining, and developing visual quality assessment methods suitable for use in the standardization process.

This column lays out the scope and tasks of this advisory group and reports on its first achievements and developments. After a brief overview of the organizational structure, current projects and initial results are presented.

Organizational Structure

MPEG: A Group of Groups in ISO/IEC JTC 1/SC 29

The Moving Picture Experts Group (MPEG) is a standardization group that develops standards for the coded representation of digital audio, video, 3D graphics, and genomic data. Since its establishment in 1988, the group has produced standards that enable the industry to offer interoperable devices for an enhanced digital media experience [1]. In its new structure as defined in 2020, MPEG is established as a set of Working Groups (WGs) and Advisory Groups (AGs) in Sub-Committee (SC) 29 “Coding of audio, picture, multimedia and hypermedia information” of the Joint Technical Committee (JTC) 1 of ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission). The lists of WGs and AGs in SC 29 are shown in Figure 1. Besides MPEG, SC 29 also includes JPEG (the Joint Photographic Experts Group, WG 1) as well as an Advisory Group for Chair Support Team and Management (AG 1) and an Advisory Group for JPEG and MPEG Collaboration (AG 4), thereby covering the wide field of media compression and transmission. Within this structure, the focus of AG 5 MPEG Visual Quality Assessment (MPEG VQA) is on interaction and collaboration with the working groups directly working on MPEG visual media compression, including WG 4 (Video Coding), WG 5 (JVET), and WG 7 (3D Graphics).

Figure 1. MPEG Advisory Groups (AGs) and Working Groups (WGs) in ISO/IEC JTC 1/SC 29 [2].

Setting the Field for MPEG VQA: The Terms of Reference

SC 29 has defined Terms of Reference (ToR) for all its WGs and AGs. The scope of AG5 MPEG Visual Quality Assessment is to support the needs for quality assessment testing in close coordination with the relevant MPEG Working Groups dealing with visual quality, through the following activities [2]:

  • to assess the visual quality of new technologies to be considered to begin a new standardization project;
  • to contribute to the definition of Calls for Proposals (CfPs) for new standardization work items;
  • to select and design subjective quality evaluation methodologies and objective quality metrics for the assessment of visual coding technologies, e.g., in the context of a Call for Evidence (CfE) and CfP;
  • to contribute to the selection of test material and coding conditions for a CfP;
  • to define the procedures useful to assess the visual quality of the submissions to a CfP;
  • to design and conduct visual quality tests, process, and analyze the raw data, and make the report of the evaluation results available conclusively;
  • to support in the assessment of the final status of a standard, verifying its performance compared to the existing standard(s);
  • to maintain databases of test material;
  • to recommend guidelines for selection of testing laboratories (verifying their current capabilities);
  • to liaise with ITU and other relevant organizations on the creation of new Quality Assessment standards or the improvement of the existing ones.

Way of Working

Given the fact that MPEG Visual Quality Assessment is an advisory group, and given the above-mentioned ToR, the goal of AG5 is not to produce new standards on its own. Instead, AG5 strives to communicate and collaborate with relevant SDOs in the field, applying existing standards and recommendations and potentially contributing to further development by reporting results and working practices to these groups.

In terms of meetings, AG5 adopts the common MPEG meeting cycle of typically four AG/WG meetings per year, which, due to the ongoing pandemic situation, have so far all been held online. The meetings are held to review the progress of work, agree on recommendations, and decide on further plans. During the meetings, AG5 closely collaborates with the MPEG WGs and conducts expert viewing sessions for various MPEG standardization activities. The focus of such activities includes the preparation of new standardization projects, the performance verification of completed projects, and the support of ongoing projects where frequent subjective evaluation results are required in the decision process. Between meetings, AG5 work is carried out in Ad-hoc Groups (AhGs), which are established from meeting to meeting with well-defined tasks.

Focus Groups

Due to the broad field of ongoing standardization activities, AG5 has established so-called focus groups which cover the relevant fields of development. The focus group structure and the appointed chairs are shown in Figure 2.

Figure 2. MPEG VQA focus groups.

The focus groups are mandated to coordinate with other relevant MPEG groups and other standardization bodies on activities of mutual interest, and to facilitate the formal and informal assessment of the visual media type under their consideration. The focus groups are described as follows:

  • Standard Dynamic Range Video (SDR): This is the ‘classical’ video quality assessment domain. The group strives to support, design, and conduct testing activities on SDR content at any resolution and coding condition, and to maintain existing testing methods and best practice procedures.
  • High Dynamic Range Video (HDR): The focus group on HDR strives to facilitate the assessment of HDR video quality using different devices with combinations of spatial resolution, colour gamut, and dynamic range, and further to maintain and refine methodologies for measuring HDR video quality. A specific focus of the starting phase was on the preparation of the verification tests for Versatile Video Coding (VVC, ISO/IEC 23090-3 / ITU-T H.266).
  • 360° Video: The omnidirectional characteristics of 360° video content have to be taken into account for visual quality assessment. The group’s focus is on continuing the development of 360° video quality assessment methodologies, including those using head-mounted devices. As with the focus group on HDR, the verification tests for VVC had priority in the starting phase.
  • Immersive Video (MPEG Immersive Video, MIV): Since MIV allows for user movement with six degrees of freedom, the assessment of this type of content bears even more challenges, and the variability of the user’s perception of the media has to be factored in. Given the absence of an original reference or ground truth for the synthetically rendered scene, objective evaluation with conventional objective metrics is a challenge. The focus group strives to develop appropriate subjective expert viewing methods to support the development process of the standard, and also evaluates and improves objective metrics in the context of MIV.

Ad hoc Groups

AG5 currently has three AhGs defined which are briefly presented with their mandates below:

  • Quality of immersive visual media (chaired by Christian Timmerer of AAU/Bitmovin, Joel Jung of Tencent, and Aljosa Smolic of Trinity College Dublin): Study Draft Overview of Quality Metrics and Methodologies for Immersive Visual Media (AG 05/N00013) with respect to new updates presented at this meeting; Solicit inputs for subjective evaluation methods and objective metrics for immersive video (e.g., 360, MIV, V-PCC, G-PCC); Organize public online workshop(s) on Quality of Immersive Media: Assessment and Metrics.
  • Learning-based quality metrics for 2D video (chaired by Yan Ye of Alibaba and Mathias Wien of RWTH Aachen University): Compile and maintain a list of video databases suitable and available to be used in AG5’s studies; Compile a list of learning-based quality metrics for 2D video to be studied; Evaluate the correlation between the learning-based quality metrics and subjective quality scores in the databases;
  • Guidelines for subjective visual quality evaluation (chaired by Mathias Wien of RWTH Aachen University, Lu Yu of Zhejiang University and Convenor of MPEG Video Coding (ISO/IEC JTC1 SC29/WG4), and Joel Jung of Tencent): Prepare the third draft of the Guidelines for Verification Testing of Visual Media Specifications; Prepare the second draft of the Guidelines for remote experts viewing test methods for use in the context of Ad-hoc Groups, and Core or Exploration Experiments.

AG 5 First Achievements

Reports and Guidelines

The results of the work of the AhGs are aggregated in AG5 output documents which are public (or will become public soon) in order to allow for feedback also from outside of the MPEG community.

The AhG on “Quality for Immersive Visual Media” maintains a report “Overview of Quality Metrics and Methodologies for Immersive Visual Media” [3] which documents the state-of-the-art in the field and shall serve as a reference for MPEG working groups in their work on compression standards in this domain. The AhG further organizes a public workshop on “Quality of Immersive Media: Assessment and Metrics” which takes place in an online form at the beginning of October 2021 [4]. The scope of this workshop is to raise awareness about MPEG efforts in the context of quality of immersive visual media and to invite experts outside of MPEG to present new techniques relevant to the scope of this workshop.

The AhG on “Guidelines for Subjective Visual Quality Evaluation” currently develops two guideline documents supporting the MPEG standardization work. The “Guidelines for Verification Testing of Visual Media Specifications” [5] define the process of assessing the performance of a completed standard after its publication. The concept of verification testing has been established MPEG working practice for its media compression standards since the 1990s. The document is intended to formalize the process, describe the steps and conditions of the verification tests, and set the requirements to meet MPEG procedural quality expectations.

The AhG has further released a first draft of “Guidelines for Remote Experts Viewing Sessions” with the intention to establish a formalized procedure for the ad-hoc generation of subjective test results as input to the standards development process [6]. This activity has been driven by the ongoing pandemic situation, which has forced MPEG to continue its work in virtual online meetings since early 2020. The procedure for remote experts viewing is intended to be applied during the (online) meeting phase or in the AhG phase and to provide measurable and reproducible subjective results as input to the decision-making process of the project under consideration.

Verification Testing

With Essential Video Coding (EVC) [7] and Low Complexity Enhancement Video Coding (LCEVC) [8] of ISO/IEC, and the joint ISO/IEC and ITU-T standard Versatile Video Coding (VVC) [9][10], a significant number of new video coding standards have recently been released. Since its first meeting in October 2020, AG5 has been engaged in the preparation and conduct of verification tests for these video coding specifications. Further verification tests for MPEG Immersive Video (MIV) and Video-based Point Cloud Compression (V-PCC) [11] are under preparation, and more are to come. Results of the verification test activities completed in the first year of AG5 are summarized in the following subsections. All reported results were obtained through formal subjective assessments according to established assessment protocols [12][13], performed by qualified test laboratories. The bitstreams were generated with reference software encoders of the specification under consideration, using established encoder configurations with comparable settings for both the reference and the evaluated coding schemes. It has to be noted that all testing had to be done under the constrained conditions of the ongoing pandemic, which posed an additional challenge for the test laboratories in charge.

MPEG-5 Part 1: Essential Video Coding (EVC)

The EVC standard was developed with the goal of providing a royalty-free Baseline profile and a Main profile with higher compression efficiency compared to High Efficiency Video Coding (HEVC) [15][16][17]. Verification tests were conducted for standard dynamic range (SDR) and high dynamic range (HDR, BT.2100 PQ) video content at both HD (1920×1080 pixels) and UHD (3840×2160 pixels) resolutions. The tests revealed around 40% bitrate savings at comparable visual quality for the Main profile when compared to HEVC, and around 36% bitrate savings for the Baseline profile when compared to Advanced Video Coding (AVC) [18][19], both for SDR content [20]. For HDR PQ content, the Main profile provided around 35% bitrate savings at both resolutions [21].

MPEG-5 Part 2: Low-Complexity Enhancement Video Coding (LCEVC)

The LCEVC standard follows a layered approach in which an LCEVC enhancement layer is added to a lower-resolution base layer of an existing codec in order to reconstruct the full-resolution video [22]. Since the base layer codec operates at a lower resolution and the separate enhancement layer decoding process is relatively lightweight, the computational complexity of decoding is typically lower than decoding the full resolution with the base layer codec alone. The enhancement layer would typically be provided on top of the established base layer decoder implementation by an additional decoding entity, e.g., in a browser.
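
Conceptually, the layered reconstruction can be sketched as follows. The stub decoders, the simple 2× nearest-neighbour upscaler, and all function names are illustrative assumptions for this sketch only and do not reflect the normative LCEVC decoding process.

    import numpy as np

    # Conceptual sketch of a layered LCEVC-style decode: a half-resolution base
    # layer is decoded by an existing codec, upsampled, and refined by a
    # lightweight enhancement (residual) layer. The stub decoders and the
    # nearest-neighbour upscaler are illustrative placeholders only.

    def decode_base_layer(bitstream: bytes, h: int, w: int) -> np.ndarray:
        """Stand-in for an existing codec (e.g., AVC/HEVC) decoding at half resolution."""
        rng = np.random.default_rng(abs(hash(bitstream)) % 2**32)
        return rng.integers(0, 256, size=(h, w), dtype=np.int16)

    def decode_enhancement_layer(bitstream: bytes, h: int, w: int) -> np.ndarray:
        """Stand-in for the lightweight residual decoded from the enhancement layer."""
        rng = np.random.default_rng(abs(hash(bitstream)) % 2**32)
        return rng.integers(-8, 9, size=(h, w), dtype=np.int16)

    def reconstruct_frame(base_bs: bytes, enh_bs: bytes, full_h: int, full_w: int) -> np.ndarray:
        base = decode_base_layer(base_bs, full_h // 2, full_w // 2)
        upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)  # simple 2x upscaling
        residual = decode_enhancement_layer(enh_bs, full_h, full_w)
        return np.clip(upsampled + residual, 0, 255).astype(np.uint8)

    frame = reconstruct_frame(b"base", b"enhancement", 1080, 1920)
    print(frame.shape)  # (1080, 1920)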

For verification testing, LCEVC was evaluated using AVC, HEVC, EVC, and VVC base layer bitstreams at half resolution, comparing the performance to the respective schemes with full-resolution coding as well as to half-resolution coding with a simple upsampling tool. For UHD resolution, the bitrate savings for LCEVC at comparable visual quality were 46% when compared to full-resolution AVC and 31% when compared to full-resolution HEVC. The comparison to the more recent and more efficient EVC and VVC coding schemes led to partially overlapping confidence intervals of the subjective scores, but the curves still revealed some benefit of applying LCEVC. Compared to half-resolution coding with simple upsampling, LCEVC provided approximately 28%, 34%, 38%, and 33% bitrate savings at comparable visual quality, demonstrating the benefit of LCEVC enhancement layer coding over straightforward upsampling [23].

MPEG-I Part 3 / ITU-T H.266: Versatile Video Coding (VVC)

VVC is the most recent video coding standard in the historical line of joint specifications of ISO/IEC and ITU-T, such as AVC and HEVC. The development focus for VVC was on compression efficiency improvement at a moderate increase of decoding complexity as well as on the versatility of the design [24][25]. Versatility features include tools designed to address HDR, WCG, resolution-adaptive multi-rate video streaming services, 360-degree immersive video, bitstream extraction and merging, temporal scalability, gradual decoding refresh, and multilayer coding to deliver layered video content to support application features such as multiview, alpha maps, depth maps, and spatial and quality scalability.

A series of verification tests has been conducted covering SDR UHD and HD, HDR PQ and HLG, as well as 360° video content [26][27][28]. An early open-source encoder (VVenC, [14]) was additionally assessed in some categories. For SDR coding, both the VVC reference software (VTM) and the open-source VVenC were evaluated against the HEVC reference software (HM). The results revealed bitrate savings of around 46% (SDR UHD, VTM and VVenC), 50% (SDR HD, VTM and VVenC), 49% (HDR UHD, PQ and HLG), 52%, and 50-56% (360° video with different projection formats) at similar visual quality compared to HEVC. Figure 3 shows pooled MOS (Mean Opinion Score) over bitrate for the mentioned categories. The MOS values range from 10 (imperceptible impairments) down to 0 (severely annoying impairments everywhere). Pooling was done by computing the geometric mean of the bitrates and the arithmetic mean of the MOS scores across the test sequences of each test category. The results reveal a consistent benefit of VVC over its predecessor HEVC in terms of visual quality over the required bitrate.

Figure 3. Pooled MOS over bitrate plots of the VVC verification tests for the SDR UHD, SDR HD, HDR HLG, and 360° video test categories. Curves cited from [26][27][28].
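
The pooling used for such plots can be expressed as a short computation, as sketched below: per rate point, the bitrates of the individual test sequences are combined via the geometric mean and the MOS scores via the arithmetic mean. The bitrate/MOS values are placeholders, not data from the cited test reports.

    import math

    # Sketch of the pooling described above. The numbers are placeholders,
    # not results from the cited verification test reports.

    # (bitrate in kbit/s, MOS) for each test sequence at one rate point of one codec
    rate_point = [(4200.0, 8.1), (5100.0, 7.6), (3800.0, 8.4)]

    bitrates = [bitrate for bitrate, _ in rate_point]
    mos_scores = [mos for _, mos in rate_point]

    pooled_bitrate = math.exp(sum(math.log(b) for b in bitrates) / len(bitrates))  # geometric mean
    pooled_mos = sum(mos_scores) / len(mos_scores)                                 # arithmetic mean

    print(f"pooled bitrate: {pooled_bitrate:.0f} kbit/s, pooled MOS: {pooled_mos:.2f}")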

Summary

This column presented an overview of the organizational structure and the activities of the Advisory Group on MPEG Visual Quality Assessment, ISO/IEC JTC 1/SC 29/AG 5, which was formed about one year ago. The work items of AG5 include the application, documentation, evaluation, and improvement of objective quality metrics and subjective quality assessment procedures. In its first year of existence, the group has produced an overview of immersive quality metrics, draft guidelines for verification tests and for remote experts viewing sessions, as well as reports of formal subjective quality assessments for the verification tests of EVC, LCEVC, and VVC. The work of the group will continue towards studying and developing quality metrics suitable for the assessment tasks emerging from the development of the various MPEG visual media coding standards, and towards subjective quality evaluation in upcoming and future verification tests and new standardization projects.

References

[1] MPEG website, https://www.mpegstandards.org/.
[2] ISO/IEC JTC1 SC29, “Terms of Reference of SC 29/WGs and AGs,” Doc. SC29N19020, July 2020.
[3] ISO/IEC JTC1 SC29/AG5 MPEG VQA, “Draft Overview of Quality Metrics and Methodologies for Immersive Visual Media (v2)”, doc. AG5N13, 2nd meeting: January 2021.
[4] MPEG AG 5 Workshop on Quality of Immersive Media: Assessment and Metrics, https://multimediacommunication.blogspot.com/2021/08/mpeg-ag-5-workshop-on-quality-of.html, October 5th, 2021.
[5] ISO/IEC JTC1 SC29/AG5 MPEG VQA, “Guidelines for Verification Testing of Visual Media Specifications (draft 2)”, doc. AG5N30, 4th meeting: July 2021.
[6] ISO/IEC JTC1 SC29/AG5 MPEG VQA, “Guidelines for remote experts viewing sessions (draft 1)”, doc. AG5N31, 4th meeting: July 2021.
[7] ISO/IEC 23094-1:2020, “Information technology — General video coding — Part 1: Essential video coding”, October 2020.
[8] ISO/IEC 23094-2, “Information technology – General video coding — Part 2: Low complexity enhancement video coding”, September 2021.
[9] ISO/IEC 23090-3:2021, “Information technology — Coded representation of immersive media — Part 3: Versatile video coding”, February 2021.
[10] ITU-T H.266, “Versatile Video Coding“, August 2020. https://www.itu.int/rec/recommendation.asp?lang=en&parent=T-REC-H.266-202008-I.
[11] ISO/IEC 23090-5:2021, “Information technology — Coded representation of immersive media — Part 5: Visual volumetric video-based coding (V3C) and video-based point cloud compression (V-PCC)”, June 2021.
[12] ITU-T P.910 (2008), Subjective video quality assessment methods for multimedia applications.
[13] ITU-R BT.500-14 (2019), Methodologies for the subjective assessment of the quality of television images.
[14] Fraunhofer HHI VVenC software repository. [Online]. Available: https://github.com/fraunhoferhhi/vvenc.
[15] K. Choi, J. Chen, D. Rusanovskyy, K.-P. Choi and E. S. Jang, “An overview of the MPEG-5 essential video coding standard [standards in a nutshell]”, IEEE Signal Process. Mag., vol. 37, no. 3, pp. 160-167, May 2020.
[16] ISO/IEC 23008-2:2020, “Information technology — High efficiency coding and media delivery in heterogeneous environments — Part 2: High efficiency video coding”, August 2020.
[17] ITU-T H.265, “High Efficiency Video Coding”, August 2021.
[18] ISO/IEC 14496-10:2020, “Information technology — Coding of audio-visual objects — Part 10: Advanced video coding”, December 2020.
[19] ITU-T H.264, “Advanced Video Coding”, August 2021.
[20] ISO/IEC JTC1 SC29/WG4, “Report on Essential Video Coding compression performance verification testing for SDR Content”, doc WG4N47, 2nd meeting: January 2021.
[21] ISO/IEC JTC1 SC29/WG4, “Report on Essential Video Coding compression performance verification testing for HDR/WCG content”, doc WG4N30, 1st meeting: October 2020.
[22] G. Meardi et al., “MPEG-5—Part 2: Low complexity enhancement video coding (LCEVC): Overview and performance evaluation”, Proc. SPIE, vol. 11510, pp. 238-257, Aug. 2020.
[23] ISO/IEC JTC1 SC29/WG4, “Verification Test Report on the Compression Performance of Low Complexity Enhancement Video Coding”, doc. WG4N76, 3rd meeting: April 2020.
[24] Benjamin Bross, Jianle Chen, Jens-Rainer Ohm, Gary J. Sullivan, and Ye-Kui Wang, “Developments in International Video Coding Standardization After AVC, With an Overview of Versatile Video Coding (VVC)”, Proceedings of the IEEE, Vol. 109, Issue 9, pp. 1463–1493, doi 10.1109/JPROC.2020.3043399, Sept. 2021 (open access publication), available at https://ieeexplore.ieee.org/document/9328514.
[25] Benjamin Bross, Ye-Kui Wang, Yan Ye, Shan Liu, Gary J. Sullivan, and Jens-Rainer Ohm, “Overview of the Versatile Video Coding (VVC) Standard and its Applications”, IEEE Trans. Circuits & Systs. for Video Technol. (open access publication), available online at https://ieeexplore.ieee.org/document/9395142.
[26] Mathias Wien and Vittorio Baroncini, “VVC Verification Test Report for Ultra High Definition (UHD) Standard Dynamic Range (SDR) Video Content”, doc. JVET-T2020 of ITU-T/ISO/IEC Joint Video Experts Team (JVET), 20th meeting: October 2020.
[27] Mathias Wien and Vittorio Baroncini, “VVC Verification Test Report for High Definition (HD) and 360° Standard Dynamic Range (SDR) Video Content”, doc. JVET-V2020 of ITU-T/ISO/IEC Joint Video Experts Team (JVET), 22nd meeting: April 2021.
[28] Mathias Wien and Vittorio Baroncini, “VVC verification test report for high dynamic range video content”, doc. JVET-W2020 of ITU-T/ISO/IEC Joint Video Experts Team (JVET), 23rd meeting: July 2021.

ITU-T Standardization Activities Targeting Gaming Quality of Experience

Motivation for Research in the Gaming Domain

The gaming industry has eminently managed to intrinsically motivate users to interact with its services. According to the latest report by Newzoo, there will be an estimated total of 2.7 billion players across the globe by the end of 2020, and the global games market will generate revenues of $159.3 billion in 2020 [1]. This surpasses the movie industry (box offices and streaming services) by a factor of four and is worth almost three times the music industry market [2].

The rapidly growing domain of online gaming emerged in the late 1990s and early 2000s, enabling social relatedness among a great number of players. In traditional online gaming, the game logic and the game user interface are typically executed and rendered locally on the player’s hardware. The client device is connected via the Internet to a game server to exchange information influencing the game state, which is then shared and synchronized with all other players connected to the server. In 2009, however, a new concept called cloud gaming emerged, comparable to the rise of Netflix for video consumption and Spotify for music consumption. In contrast to traditional online gaming, cloud gaming is characterized by the execution of the game logic, the rendering of the virtual scene, and the video encoding on a cloud server, while the player’s client is solely responsible for video decoding and capturing client input [3].
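
The division of work between the thin client and the cloud server can be illustrated with a highly simplified frame cycle, sketched below. All class and method names are placeholders for the respective processing stages, not an actual streaming API.

    from dataclasses import dataclass

    # Highly simplified sketch of cloud gaming: game logic, rendering and video
    # encoding run on the server, while the client only captures input and
    # decodes/displays video. All names are illustrative placeholders.

    @dataclass
    class CloudServer:
        tick: int = 0

        def update_game_logic(self, user_input: str) -> dict:
            self.tick += 1
            return {"tick": self.tick, "last_input": user_input}

        def render(self, state: dict) -> str:
            return f"frame@{state['tick']}"

        def encode_video(self, frame: str) -> bytes:
            return frame.encode()

    class ThinClient:
        def capture_input(self) -> str:
            return "move_forward"            # stand-in for controller/keyboard input

        def decode_video(self, packet: bytes) -> str:
            return packet.decode()

        def display(self, frame: str) -> None:
            print("displaying", frame)

    server, client = CloudServer(), ThinClient()
    for _ in range(3):                        # three frame cycles
        user_input = client.capture_input()   # uplink: only input leaves the client
        state = server.update_game_logic(user_input)
        packet = server.encode_video(server.render(state))
        client.display(client.decode_video(packet))  # downlink: only video arrives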

For online gaming and cloud gaming services, in contrast to applications such as voice, video, and web browsing, little information existed on the factors influencing the Quality of Experience (QoE) of online video games, on subjective methods for assessing gaming QoE, or on instrumental prediction models to plan and manage QoE during service set-up and operation. For this reason, Study Group (SG) 12 of the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T) decided to work on these three interlinked research tasks [4]. This was especially required since the evaluation of gaming applications is fundamentally different from that of task-oriented human-machine interactions: traditional aspects such as effectiveness and efficiency as part of usability cannot be directly applied to gaming applications, since a game without any challenges or passing of time would result in boredom, and thus a bad player experience (PX). The absence of standardized assessment methods, as well as of knowledge about the quantitative and qualitative impact of influence factors, resulted in a situation where many researchers tended to use their own self-developed research methods. This makes collaborative work based on reliable, valid, and comparable research very difficult. Therefore, the aim of this report is to provide an overview of the achievements reached by ITU-T standardization activities targeting gaming QoE.

Theory of Gaming QoE

As a basis for the gaming research carried out, a taxonomy of gaming QoE aspects was proposed by Möller et al. in 2013 [5]. The taxonomy is divided into two layers, of which the top layer contains various influencing factors grouped into user (also human), system (also content), and context factors. The bottom layer consists of game-related aspects including hedonic concepts such as appeal, pragmatic concepts such as learnability and intuitivity (part of playing quality, which can be considered a kind of game usability), and finally interaction quality. The latter is composed of output quality (e.g., audio and video quality) as well as input quality and interactive behaviour. Interaction quality can be understood as the playability of a game, i.e., the degree to which all functional and structural elements of a game (hardware and software) enable a positive PX. The second part of the bottom layer summarizes concepts related to PX, such as immersion (see [6]), positive and negative affect, as well as the well-known concept of flow, which describes an equilibrium between requirements (i.e., challenges) and abilities (i.e., competence). Consequently, based on the theory depicted in the taxonomy, the question arises which of these aspects are relevant (i.e., dominant), how they can be assessed, and to which extent they are affected by the influencing factors.

Fig. 1: Taxonomy of gaming QoE aspects. Upper panel: Influence factors and interaction performance aspects; lower panel: quality features (cf. [5]).

Introduction to Standardization Activities

Building upon this theory, SG 12 of the ITU-T decided during the 2013-2016 Study Period to start work on three new work items called P.GAME, G.QoE-gaming, and G.OMG. There are also other related activities at the ITU-T, summarized in Fig. 2, concerning evaluation methods (P.CrowdG) and gaming QoE modelling (G.OMMOG and P.BBQCG).

Fig. 2: Overview of ITU-T SG12 recommendations and on-going work items related to gaming services.

The efforts on the three initial work items continued during the 2017-2020 Study Period, resulting in the Recommendations G.1032, P.809, and G.1072, for which an overview is given in this section.

ITU-T Rec. G.1032 (G.QoE-gaming)

ITU-T Rec. G.1032 aims at identifying the factors which potentially influence gaming QoE. For this purpose, the Recommendation provides an overview table and roughly classifies the influence factors into (A) human, (B) system, and (C) context influence factors. This classification is based on [7] but is now detailed with respect to cloud and online gaming services. Furthermore, the Recommendation considers whether an influencing factor carries an influence mainly in a passive viewing-and-listening scenario, in an interactive online gaming scenario, or in an interactive cloud gaming scenario. This classification helps evaluators decide which type of impact may be evaluated with which type of test paradigm [4]. An overview of the influencing factors identified in ITU-T Rec. G.1032 is presented in Fig. 3. For subjective user studies, in most cases the human and context factors should be controlled and their influence reduced as much as possible. For example, even though multiplayer interaction might be a highly impactful aspect of today’s gaming domain, within the scope of the ITU-T cloud gaming modelling activities only single-player user studies are conducted, to reduce the impact of social aspects which are very difficult to control. On the other hand, as network operators and service providers are the intended stakeholders of gaming QoE models, the relevant system factors must be included in the development process of the models, in particular the game content as well as network and encoding parameters.

Fig. 3: Overview of influencing factors on gaming QoE summarized in ITU-T Rec. G.1032 (cf. [3]).

ITU-T Rec. P.809 (P.GAME)

The aim of ITU-T Rec. P.809 is to describe subjective evaluation methods for gaming QoE. Since there is no single standardized evaluation method covering all aspects of gaming QoE, the Recommendation mainly summarizes the state of the art of subjective evaluation methods in order to help researchers choose suitable methods for their subjective experiments, depending on the purpose of the experiment. In its main body, it consists of five parts: (A) definitions of the games considered in the Recommendation, (B) definitions of QoE aspects relevant to gaming, (C) a description of test paradigms, (D) a description of the general experimental set-up, with recommendations regarding passive viewing-and-listening tests and interactive tests, and (E) a description of questionnaires to be used for gaming QoE evaluation. It is amended by two paragraphs on performance and physiological response measurements and by (non-normative) appendices illustrating the questionnaires, as well as an extensive list of literature references [4].

Fundamentally, the ITU-T Rec. P.809 defines two test paradigms to assess gaming quality:

  • Passive tests with predefined audio-visual stimuli passively observed by a participant.
  • Interactive tests with game scenarios interactively played by a participant.

The passive paradigm can be used for gaming quality assessment when the impairment does not influence the interaction of players. This method suggests a short stimulus duration of 30 s, which allows a great number of encoding conditions to be investigated while reducing the influence of user behaviour on the stimulus, due to the absence of interaction. Even for passive tests, since the subjective ratings will be merged with those derived from interactive tests for QoE model development, it is recommended to give instructions about the game rules and objectives so that participants have similar knowledge of the game. The instructions should also explain the difference between video quality and graphics quality (e.g., graphical details such as abstract versus realistic graphics), as confusing the two is one of the common mistakes of participants in video quality assessment of gaming content.

The interactive test should be used when other quality features such as interaction quality, playing quality, immersion, and flow are under investigation. While a duration of 90 s is proposed for interaction quality, a longer duration of 5-10 min is suggested for research targeting engagement concepts such as flow. Finally, the Recommendation provides information about the selection of game scenarios as stimulus material for both test paradigms, e.g., the ability to provide repetitive scenarios, balanced difficulty, scenes that are representative in terms of encoding complexity, and the avoidance of ethically questionable content.

ITU-T Rec. G.1072 (G.OMG)

The quality management of gaming services requires quantitative prediction models. Such models should be able to predict either the overall quality (e.g., in terms of a Mean Opinion Score) or individual QoE aspects from characteristics of the system, potentially considering player characteristics and the usage context. ITU-T Rec. G.1072 provides quality models for cloud gaming services based on the impact of impairments introduced by typical Internet Protocol (IP) networks on the quality experienced by players. G.1072 is a network planning tool that estimates gaming QoE based on assumptions about network and encoding parameters as well as the game content.

The impairment factors are derived from subjective ratings of the corresponding quality aspects, e.g., spatial video quality or interaction quality, and modelled by non-linear curve fitting. For the prediction of the overall score, linear regression is used. To create the impairment factors and the regression, the MOS values of each test condition are transformed to the R-scale, similar to the well-known E-model [8]. The R-scale, which results from an s-shaped conversion of the MOS scale, promises benefits regarding the additivity of impairments and compensates for the fact that participants tend to avoid the extremes of rating scales [3].

As the impact of the input parameters, e.g., delay, was shown to be highly content-dependent, the model provides two modes. If no assumption about a game’s sensitivity class towards degradations is available to the user of the model (e.g., a network provider), the “default” mode of operation should be used, which assumes the highest (most sensitive) game class. The “default” mode will result in a pessimistic quality prediction for games that are not of high complexity and sensitivity. If the user of the model can make an assumption about the game class (e.g., a service provider), the “extended” mode can predict the quality with higher accuracy based on the assigned game classes.
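
The sketch below illustrates the general structure described in the last two paragraphs: impairments expressed on an R-scale, added, and mapped back to a MOS, with a default (most sensitive) game class when no class is known. The impairment formulas, coefficients, and game-class sensitivities are made-up placeholders; only the R-to-MOS mapping follows the classical E-model form (cf. ITU-T G.107), and none of this reproduces the normative G.1072 algorithm.

    from typing import Optional

    # Illustrative sketch of a G.1072-style planning model. All coefficients and
    # impairment terms below are placeholders, not values from ITU-T Rec. G.1072.

    GAME_CLASS_DELAY_SENSITIVITY = {"low": 0.05, "medium": 0.12, "high": 0.25}

    def r_to_mos(r: float) -> float:
        """E-model style mapping from the R-scale to MOS (cf. ITU-T G.107)."""
        if r <= 0:
            return 1.0
        if r >= 100:
            return 4.5
        return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

    def predict_gaming_mos(bitrate_mbit: float, delay_ms: float,
                           game_class: Optional[str] = None) -> float:
        # "Default" mode: no game class known -> assume the most sensitive class.
        sensitivity = GAME_CLASS_DELAY_SENSITIVITY.get(
            game_class, GAME_CLASS_DELAY_SENSITIVITY["high"])
        i_coding = max(0.0, 40.0 - 8.0 * bitrate_mbit)   # placeholder video impairment
        i_delay = sensitivity * delay_ms                 # placeholder interaction impairment
        r = 100.0 - i_coding - i_delay                   # additive impairments on the R-scale
        return r_to_mos(r)

    print(predict_gaming_mos(10.0, 50.0))                      # default (pessimistic) mode
    print(predict_gaming_mos(10.0, 50.0, game_class="low"))    # extended mode with known class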

On-going Activities

While the three Recommendations provide a basis for researchers as well as for network operators and cloud gaming service providers to improve gaming QoE, the standardization activities continue with new work items focusing on QoE assessment methods and gaming QoE model development for cloud gaming and online gaming applications. Three work items have been established within the past two years.

ITU-T P.BBQCG

P.BBQCG is a work item that aims at the development of a bitstream model predicting cloud gaming QoE. The model will benefit from bitstream information, taken from the header and payload of packets, to reach a higher accuracy of audiovisual quality prediction compared to G.1072. In addition, three different types of codecs and a wider range of network parameters will be considered to develop a generalizable model. The model will be trained and validated for the H.264, H.265, and AV1 video codecs and video resolutions up to 4K. For the development of the model, both a passive and an interactive paradigm will be followed: the passive paradigm will cover a wide range of encoding parameters, while the interactive paradigm will cover the network parameters that may strongly influence the interaction of players with the game.

ITU-T P.CrowdG

A gaming QoE study is a challenging task on its own due to the multidimensionality of the QoE concept and the large number of influence factors. It becomes even more challenging if the test follows a crowdsourcing approach, which is of particular interest in times of the COVID-19 pandemic or when subjective ratings are required from a highly diverse audience, e.g., for the development or investigation of questionnaires. The aim of the P.CrowdG work item is to develop a framework that describes the best practices and guidelines to be considered for gaming QoE assessment using a crowdsourcing approach. In particular, the crowd gaming framework provides the means to ensure reliable and valid results despite the absence of an experimenter, a controlled network, and the visual observation of test participants. In addition to the framework, guidelines will be given that provide recommendations for collecting valid and reliable results, addressing issues such as how to make sure workers put enough focus on the gaming and rating tasks. While a possible framework for interactive tests of simple web-based games has already been presented in [9], more work is required to complete the ITU-T work item for more advanced setups and for passive tests.

ITU-T G.OMMOG

G.OMMOG is a work item that focuses on the development of an opinion model predicting gaming QoE for mobile online gaming services. The work item is a possible extension of ITU-T Rec. G.1072. In contrast to G.1072, the games are not executed on a cloud server but on a gaming server that exchanges game states with the users’ clients instead of a video stream. This more traditional gaming concept represents a very popular service, especially considering multiplayer games such as recently published AAA titles of the Multiplayer Online Battle Arena (MOBA) and battle royale genres.

So far, it has been decided to follow a model structure similar to ITU-T Rec. G.1072. However, the component of spatial video quality, which was a major part of G.1072, will be removed, and the corresponding game type information will not be used. In addition, it was decided to investigate the impact of variable delay and packet loss bursts, especially as their interaction can have a strong impact on gaming QoE. It is assumed that more variability of these factors and their interplay will weaken the error handling of mobile online gaming services. Due to missing information on the server caused by packet loss or long delays, the gameplay is assumed to no longer be smooth (in the gaming domain, this is called ‘rubber banding’), which will lead to reduced temporal video quality.
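
To make the assumed mechanism more tangible, the toy simulation below shows how lost or late state updates can let a locally predicted position drift away from the authoritative server position, producing the corrections perceived as rubber banding. The loss probability is deliberately exaggerated, and all parameters and the prediction logic are illustrative assumptions, not part of G.OMMOG or any ITU-T model.

    import random

    # Toy illustration of 'rubber banding': the client locally predicts continuous
    # movement, but the authoritative server position only advances when packets
    # actually get through; on a fresh server state, the client snaps back to it.
    # All parameters are illustrative and deliberately exaggerated.

    random.seed(0)
    SPEED = 1.0            # assumed movement per tick
    LOSS_PROB = 0.5        # exaggerated packet loss probability to make the effect visible

    client_pos = server_pos = 0.0
    for tick in range(30):
        client_pos += SPEED                     # local prediction: keep moving smoothly
        delivered = random.random() >= LOSS_PROB
        if delivered:
            server_pos += SPEED                 # server advances only on received packets
            correction = client_pos - server_pos
            if correction > SPEED:              # noticeable correction -> rubber banding
                print(f"tick {tick:2d}: snapped back by {correction:.1f} units")
            client_pos = server_pos             # reconcile with authoritative state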

About ITU-T SG12

ITU-T Study Group 12 is the expert group responsible for the development of international standards (ITU-T Recommendations) on performance, quality of service (QoS), and quality of experience (QoE). This work spans the full spectrum of terminals, networks, and services, ranging from speech over fixed circuit-switched networks to multimedia applications over mobile and packet-based networks.

In this article, the previous achievements of ITU-T SG12 with respect to gaming QoE were described, focusing in particular on subjective assessment methods, influencing factors, and the modelling of gaming QoE. We hope that this information will improve the work and research in this domain by enabling more reliable, comparable, and valid findings. Lastly, the report also points out many ongoing activities in this rapidly changing domain, in which everyone is cordially invited to participate.

More information about the SG12, which will host its next E-meeting from 4-13 May 2021, can be found at ITU Study Group (SG) 12.

For more information about the gaming activities described in this report, please contact Sebastian Möller (sebastian.moeller@tu-berlin.de).

Acknowledgement

The authors would like to thank all colleagues of ITU-T Study Group 12, as well as of the Qualinet gaming Task Force, for their support. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871793 and No 643072 as well as by the German Research Foundation (DFG) within project MO 1038/21-1.


Immersive Media Experiences – Why finding Consensus is Important

An introduction to the QUALINET White Paper on Definitions of Immersive Media Experience (IMEx) [1].

Introduction

Immersive media are reshaping the way users experience reality. They are increasingly incorporated across enterprise and consumer sectors to offer experiential solutions to a diverse range of industries. Current technologies that afford an immersive media experience (IMEx) include Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and 360-degree video. Popular uses can be found in enhancing connectivity applications, supporting knowledge-based tasks, learning & skill development, as well as adding immersive and interactive dimensions to the retail, business, and entertainment industries. Whereas the evolution of immersive media can be traced over the past 50 years, its current popularity boost is primarily due to significant advances in the last decade brought about by improved connectivity and superior computing and device capabilities. Specifically, advances have been witnessed in display technologies, visualizations, interaction & tracking devices, recognition technologies, platform development, and new media formats, accompanied by increasing user demand for real-time & dynamic content across platforms.

Though still in its infancy, the immersive economy is growing into a dynamic and confident sector. Being an emerging sector, official data are hard to find, but some estimations project the global immersive media market to continue its upward growth at around 30% CAGR to reach USD 180 Bn by 2022 [2,3]. Country-wise, the USA is expected to secure one-third of the global immersive media market share, followed by China, Japan, Germany, and the UK as likely immersive media markets where significant spending is anticipated. Consumer products and devices are poised to be the largest contributing segment. The growth in immersive consumer products is expected to continue as Head-Mounted Displays (HMD) become commonplace and interest in mobile augmented reality increases [4]. However, immersive media are no longer just a pursuit of alternative display technologies but are pushing towards holistic ecosystems that seek contributions from hardware manufacturers, application & platform developers, content producers, and users. These ecosystems are making way for sophisticated content creation available on platforms that allow user participation, interaction, and skill integration through advanced tools.

Immersive media experience (IMEx), today, is not only about how users view media but a transformative way of consuming media altogether. It draws considerable interest from multiple disciplines. As the number of stakeholders increases, the need for clarity and coherence on definitions and concepts becomes all the more important. In this article, we provide an overview and a brief survey of some of the key definitions that are central to IMEx, including its Quality of Experience (QoE), application areas, influencing factors, and assessment methods. Our aim is to provide some clarity and initiate consensus on topics related to IMEx that can be useful for researchers and practitioners working in both academia and industry.

Why understand IMEx?

IMEx combines reality with technology, enabling emplaced multimedia experiences of standard media (film, photographic, or animated) as well as synthetic and interactive environments for users. Immersive media utilize visual, auditory, and haptic feedback to stimulate the physical senses such that users psychologically feel immersed within these multidimensional media environments. This sense of “being there” is also referred to as presence.

As mentioned earlier, the enthusiasm for IMEx is mainly driven by the gaming, entertainment, retail, healthcare, digital marketing, and skill training industries. So far, research has tilted favourably towards innovation, with a particular interest in image capture, recognition, mapping, and display technologies over the past few years. However, the prevalence of IMEx has also ushered in a plethora of definitions, frameworks, and models to understand the psychological and phenomenological concepts associated with these media forms. Central, of course, are the closely related concepts of immersion and presence, which are interpreted differently across fields, for example, when one moves from literature to narratology to computer science. With immersive media, however, these three separate fields come together inside interactive digital narrative applications where immersive narratives are used to solve real-world problems. This is where noticeable interdisciplinary differences regarding definitions, scope, and constituents require urgent redress to achieve a coherent understanding of the concepts used. Such consensus is vital for setting a direction for the future of immersive media that can be shared by all.

A White Paper on IMEx

A recent White Paper [1] by QUALINET, the European Network on Quality of Experience in Multimedia Systems and Services [5], is a contribution to the discussions related to Immersive Media Experience (IMEx). It attempts to build consensus around ideas and concepts that are related to IMEx but originate from multidisciplinary groups with a joint interest in multimedia experiences.

The QUALINET community aims at extending the notion of network-centric Quality of Service (QoS) in multimedia systems, by relying on the concept of Quality of Experience (QoE). The main scientific objective is the development of methodologies for subjective and objective quality metrics considering current and new trends in multimedia communication systems as witnessed by the appearance of new types of content and interactions.

The white paper was created based on an activity launched at the 13th QUALINET meeting on June 4, 2019, in Berlin as part of Task Force 7, Immersive Media Experiences (IMEx). The paper received contributions from 44 authors under 10 section leads, which were consolidated into a first draft and circulated among all section leads and editors for internal review. After incorporating the feedback from all section leads, the editors initially released the White Paper within the QUALINET community for review. Following feedback from QUALINET at large, the editors distributed the White Paper widely for an open, public community review (e.g., research communities/committees in ACM and IEEE, standards development organizations, and various open email reflectors related to this topic). The feedback received from this public consultation process resulted in the final version, which was approved during the 14th QUALINET meeting on May 25, 2020.

Understanding the White Paper

The White Paper surveys definitions and concepts that contribute to IMEx. It describes the Quality of Experience (QoE) for immersive media by establishing a relationship between the concepts of QoE and IMEx. This article provides an outline of these concepts by looking at:

  • Survey of definitions of immersion and presence discusses various frameworks and conceptual models that are most relevant to these phenomena in terms of multimedia experiences.
  • Definition of immersive media experience describes experiential determinants for IMEx characterized through its various technological contexts.
  • Quality of experience for immersive media applies existing QoE concepts to understand the user-centric subjective feelings of “a sense of being there”, “a sense of agency”, and “cybersickness”.
  • The application area for immersive media experience presents an overview of immersive technologies in use within gaming, omnidirectional content, interactive storytelling, health, entertainment, and communications.
  • Influencing factors on immersive media experience look at the three established influence factors on QoE, with a pronounced emphasis on the human influence factor, which is of very high relevance to IMEx.
  • Assessment of immersive media experience underscores the importance of proper examination of multimedia systems, including IMEx, by highlighting three methods currently in use, i.e., subjective, behavioural, and psychophysiological.
  • Standardization activities discuss the three clusters of activities currently underway to achieve interoperability for IMEx: (i) data representation & formats; (ii) guidelines, systems standards, & APIs; and (iii) Quality of Experience (QoE).

Conclusions

Immersive media have significantly changed the use and experience of new digital media. These innovative technologies transcend traditional formats and present new ways to interact with digital information inside synthetic or enhanced realities, which include VR, AR, MR, and haptic communications. Earlier, we discussed the need for a multidisciplinary consensus on definitions of IMEx. The QUALINET white paper provides such “a toolbox of definitions” for IMEx. It stands out for bringing together insights from multimedia groups spread across academia and industry, specifically the Video Quality Experts Group (VQEG) and the Immersive Media Group (IMG). This makes it a valuable asset for those working in the field of IMEx going forward.

References

[1] Perkis, A., Timmerer, C., et al. (2020). QUALINET White Paper on Definitions of Immersive Media Experience (IMEx). European Network on Quality of Experience in Multimedia Systems and Services, 14th QUALINET meeting (online), May 25, 2020. Online: https://arxiv.org/abs/2007.07032
[2] Mateos-Garcia, J., Stathoulopoulos, K., & Thomas, N. (2018). The Immersive Economy in the UK (Rep. No. 18.1137.020). Innovate UK.
[3] Ministry of Communications and Information (2015). Infocomm Media 2025 Supplementary Information (pp. 31-43, Rep.). Singapore.
[4] Hadwick, A. (2020). XR Industry Insight Report 2019-2020 (Rep.). San Francisco: VRX Conference & Expo.
[5] QUALINET website: http://www.qualinet.eu/