Authors: Tobias Hoßfeld (University of Würzburg, Germany), Poul E. Heegaard (NTNU - Norwegian University of Science and Technology), Lea Skorin-Kapov (University of Zagreb, Faculty of Electrical Engineering and Computing, Croatia), Martin Varela (callstats.io, Finland)
With Quality of Experience (QoE) research having made significant advances over the years, increasing attention is being paid to exploiting this knowledge from a service/network provider perspective in the context of user-centric evaluation of systems. Current research investigates the impact of system/service mechanisms, their implementation, or their configuration on service performance, and how this in turn affects the QoE of users. Prominent examples address adaptive video streaming services, as well as enabling technologies for QoE-aware service management and monitoring, such as SDN/NFV and machine learning. This is also reflected in the latest edition of conferences such as the ACM Multimedia Systems Conference (MMSys ’19); see the following selection of exemplary papers.
- “ERUDITE: a Deep Neural Network for Optimal Tuning of Adaptive Video Streaming Controllers” by De Cicco, L., Cilli, G., & Mascolo, S.
- “An SDN-Based Device-Aware Live Video Service For Inter-Domain Adaptive Bitrate Streaming” by Khalid, A., Zahran, H., & Sreenan, C.J.
- “Quality-aware Strategies for Optimizing ABR Video Streaming QoE and Reducing Data Usage” by Qin, Y., Hao, S., Pattipati, K., Qian, F., Sen, S., Wang, B., & Yue, C.
- “Evaluation of Shared Resource Allocation using SAND for Adaptive Bitrate Streaming” by Pham, S., Heeren, P., Silhavy, D., & Arbanowski, S.
- “Requet: Real-Time QoE Detection for Encrypted YouTube Traffic” by Gutterman, C., Guo, K., Arora, S., Wang, X., Wu, L., Katz-Bassett, E., & Zussman, G.
For the evaluation of systems, proper QoE models are of utmost importance, as they provide a mapping of various parameters to QoE. One of the main research challenges faced by the QoE community is deriving QoE models for various applications and services, whereby ratings collected from subjective user studies are used to model the relationship between tested influence factors and QoE. Below is a selection of papers dealing with this topic from QoMEX 2019, the main scientific venue for the QoE community.
- “Subjective Assessment of Adaptive Media Playout for Video Streaming” by Pérez, P., García, N., & Villegas, A.
- “Assessing Texture Dimensions and Video Quality in Motion Pictures using Sensory Evaluation Techniques” by Keller, D., Seybold, T., Skowronek, J., & Raake, A.
- “Tile-based Streaming of 8K Omnidirectional Video: Subjective and Objective QoE Evaluation” by Schatz, R., Zabrovskiy, A., & Timmerer, C.
- “SUR-Net: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Learning” by Fan, C., Lin, H., Hosu, V., Zhang, Y., Jiang, Q., Hamzaoui, R., & Saupe, D.
- “Analysis and Prediction of Video QoE in Wireless Cellular Networks using Machine Learning” by Minovski, D., Åhlund, C., Mitra, K., & Johansson, P.
System-centric QoE
When considering the whole service, the question arises of how to properly evaluate QoE in a systems context, i.e., how to quantify system-centric QoE. The paper [1] provides fundamental relationships for deriving system-centric QoE, which are the basis for this article.
In the QoE community, subjective user studies are conducted to derive relationships between influence factors and QoE. Typically, the results of these studies are reported in terms of Mean Opinion Scores (MOS). However, MOS results mask user diversity, which manifests itself in the distribution of user scores for a particular test condition. In a systems context, QoE is therefore better represented as a random variable Q|t for a fixed test condition t. Such models are commonly exploited by service/network providers to derive various QoE metrics [2] in their system, such as the expected QoE, or the percentage of users rating above a certain threshold (the Good-or-Better ratio, GoB).
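To make these two metrics concrete, the following minimal sketch computes the MOS and the GoB ratio from a single rating distribution Q|t; the distribution values are purely illustrative and not taken from an actual study.

```python
import numpy as np

# Hypothetical distribution of user ratings Q|t for one fixed test
# condition t, on a 5-point ACR scale (1: bad, ..., 5: excellent).
# The probabilities are illustrative only, not from a real study.
ratings = np.array([1, 2, 3, 4, 5])
p_ratings = np.array([0.05, 0.10, 0.25, 0.40, 0.20])

mos = np.sum(ratings * p_ratings)      # MOS: expected rating E[Q|t]
gob = np.sum(p_ratings[ratings >= 4])  # GoB: P(Q|t >= 4)

print(f"MOS f(t) = {mos:.2f}")  # 3.60
print(f"GoB g(t) = {gob:.2f}")  # 0.60
```

Note that a different, e.g., more polarized, rating distribution can yield the same MOS but a different GoB ratio; this is exactly the user diversity that the MOS alone hides.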
Across the whole service, users will experience different performance, measured in terms of, e.g., response times or throughput, which depend on the system's (and service's) configuration and implementation. In turn, this leads to users experiencing different quality levels. As an example, we consider the response time of a system offering a certain web service, such as access to a static website. In this case, the system's performance can be represented by a random variable R for the response time. In the systems community, research aims at deriving such performance distributions R.
The user-centric evaluation of the system combines the system's perspective and the QoE perspective, as illustrated in the figure below. We consider service/network providers interested in deriving various QoE metrics in their system, given (a) the system's performance and (b) QoE models available from user studies. The main questions we need to answer are the following: How can user rating distributions obtained from subjective studies be combined with system performance condition distributions so as to obtain the actual QoE distribution observed in the system? Moreover, how can various QoE metrics of interest in the system be derived?
Model of System-centric QoE
A service provider is interested in the QoE distribution Q in the system, which includes the following stochastic components: 1) the system performance condition t (i.e., the response time in our example), and 2) the user diversity Q|t. This system-centric QoE distribution allows us to derive various QoE metrics, such as the expected QoE or the expected GoB ratio in the system.
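Formally, Q is a mixture distribution: the conditional rating distribution Q|t weighted by the distribution of the performance condition R. A sketch in the notation used above, via the law of total probability, for a rating value k on the 5-point scale:

```latex
P(Q = k) = \mathbb{E}_R\big[\, P(Q = k \mid R) \,\big]
         = \int_0^{\infty} P(Q = k \mid R = t)\,\mathrm{d}F_R(t),
         \qquad k = 1, \dots, 5.
```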
Some basic mathematical transformations allow us to derive the expected system-centric QoE E[Q], as shown below. As a result, we find that the expected system QoE is equal to the expected Mean Opinion Score (MOS) in the system! Hence, for deriving the system QoE, it is necessary to measure the response time distribution R and to have a proper QoS-to-MOS mapping function f(t) obtained from subjective studies. From the subjective studies, we obtain the MOS mapping function for a response time t, f(t) = E[Q|t]. The system QoE then follows as E[Q] = E[f(R)] = E[M]. Note that the distribution of the MOS M = f(R) in the system only allows deriving the expected MOS, i.e., the expected system-centric QoE, and not other QoE metrics.
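The transformation in question is an application of the law of total expectation, conditioning on the performance condition R:

```latex
\mathbb{E}[Q] = \mathbb{E}_R\big[\, \mathbb{E}[\,Q \mid R\,] \,\big]
             = \mathbb{E}_R\big[\, f(R) \,\big]
             = \mathbb{E}[M],
\qquad \text{with } M = f(R).
```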
Let us consider another system-centric QoE metric, the GoB ratio. On a typical 5-point Absolute Category Rating (ACR) scale (1: bad quality, 5: excellent quality), the system-centric GoB is defined as GoB[Q] = P(Q >= 4). We find that it is not possible to use the MOS mapping function f and the MOS distribution M = f(R) to derive GoB[Q] in the system! Instead, it is necessary to use the corresponding QoS-to-GoB mapping function g. This mapping function can be derived from the same subjective studies as the MOS mapping function, and maps the response time t (tested in the subjective experiment) to the ratio of users rating “good or better” QoE, i.e., g(t) = P(Q|t >= 4). We may thus derive, in a similar way: GoB[Q] = E[g(R)]. In the system, the GoB ratio is the expected value of g(R), i.e., of the response times R mapped through g. Similar observations lead to analogous results for other QoE metrics, such as quantiles or variances (see [1]), as illustrated numerically below.
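The following minimal simulation sketch illustrates the difference. It assumes a hypothetical user rating model and an exponentially distributed response time, both purely illustrative and not taken from [1]; the point is only that thresholding the MOS distribution f(R) yields a value that clearly deviates from the true GoB[Q] = E[g(R)].

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical response-time distribution R (in seconds); illustrative only.
R = rng.exponential(scale=1.0, size=n)

# Simple hypothetical user model (not from [1]): for response time t, a
# user's rating is Q|t = 1 + Binomial(4, p(t)) with p(t) = exp(-0.8 t).
# This yields the MOS mapping f(t) = E[Q|t] = 1 + 4 p(t) and the GoB
# mapping g(t) = P(Q|t >= 4) = P(Binomial(4, p(t)) >= 3).
p = np.exp(-0.8 * R)
ratings = 1 + rng.binomial(4, p)       # one simulated rating per user
f_R = 1 + 4 * p                        # MOS distribution M = f(R)
g_R = 4 * p**3 * (1 - p) + p**4        # GoB mapping applied to R

print(f"E[Q] (simulated)      = {ratings.mean():.3f}")
print(f"E[f(R)] = E[M]        = {f_R.mean():.3f}")         # matches E[Q]
print(f"GoB[Q] = P(Q >= 4)    = {(ratings >= 4).mean():.3f}")
print(f"E[g(R)]               = {g_R.mean():.3f}")         # matches GoB[Q]
print(f"P(f(R) >= 4)  (wrong) = {(f_R >= 4).mean():.3f}")  # does not match
```

With these illustrative parameters, E[g(R)] comes out at roughly 0.46, while thresholding the MOS distribution yields roughly 0.30, i.e., a substantially different value, even though both are computed from the same underlying system.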
Conclusions
The reported fundamental relationships provide an important link between the QoE community and the systems community. If researchers conducting subjective user studies provide QoS-to-QoE mapping functions for the QoE metrics of interest (e.g., MOS or GoB), this is enough to derive the corresponding QoE metrics from a system's perspective. This holds for any QoS (e.g., response time) distribution in the system, as long as the corresponding QoS values are covered by the reported QoE models. We therefore encourage QoE researchers to report not only MOS mappings, but the entire rating distributions from conducted subjective studies. Alternatively, researchers may report QoE metrics and the corresponding mapping functions beyond just those relying on MOS!
We draw the attention of the systems community to the fact that the actual QoE distribution in a system is not (necessarily) equal to the MOS distribution in the system (see [1] for numerical examples). Simply applying MOS mapping functions and then using the observed MOS distribution to derive other QoE metrics, like the GoB ratio, is not adequate. The current systems literature, however, indicates a clear lack of common understanding of the implications of using MOS distributions rather than actual QoE distributions.
References
[1] Hoßfeld, T., Heegaard, P. E., Skorin-Kapov, L., & Varela, M. (2019). Fundamental Relationships for Deriving QoE in Systems. 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX). IEEE.
[2] Hoßfeld, T., Heegaard, P. E., Varela, M., & Möller, S. (2016). QoE beyond the MOS: an in-depth look at QoE via better metrics and their relation to MOS. Quality and User Experience, 1(1), 2.
Authors
- Tobias Hoßfeld (University of Würzburg, Germany) is heading the chair of communication networks.
- Poul E. Heegaard (NTNU – Norwegian University of Science and Technology) is heading the Networking Research Group.
- Lea Skorin-Kapov (University of Zagreb, Faculty of Electrical Engineering and Computing, Croatia) is heading the Multimedia Quality of Experience Research Lab.
- Martin Varela is working in the analytics team at callstats.io focusing on understanding and monitoring QoE for WebRTC services.