VQEG Column: New topics

Introduction

Welcome to the fourth column on the ACM SIGMM Records from the Video Quality Experts Group (VQEG).
During the last VQEG plenary meeting (14-18 Dec. 2020), various interesting discussions arose regarding new topics not previously addressed by the VQEG groups, which led to the launch of three new sub-projects and a new project related to: 1) clarifying the computation of spatial and temporal information (SI and TI), 2) including video quality metrics as metadata in compressed bitstreams, 3) Quality of Experience (QoE) metrics for live video streaming applications, and 4) providing guidelines on implementing objective video quality metrics to the video compression community.
The following sections provide more details about these new activities and try to encourage interested readers to follow and get involved in any of them by subscribing to the corresponding reflectors.

SI and TI Clarification

The VQEG No-Reference Metrics (NORM) group has recently focused on the topic of spatio-temporal complexity, revisiting the Spatial Information and Temporal Information (SI/TI) indicators, which are described in ITU-T Rec. P.910 [1]. They were originally developed for the T1A1 dataset in 1994 [2]. The metrics have found good use over the last 25 years – mostly employed for checking the complexity of video sources in datasets. However, SI/TI definitions contain ambiguities, so the goal of this sub-project is to provide revised definitions eliminating implementation inconsistencies.
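
For orientation, the sketch below shows the basic SI/TI calculation as described in P.910 (the maximum over time of the standard deviation of the Sobel-filtered luma frame for SI, and of the frame difference for TI). It assumes 8-bit luma frames given as NumPy arrays and deliberately ignores the edge cases (frame borders, limited vs. full range, first-frame TI reporting) that the harmonized reference library mentioned below takes care of.

```python
# Minimal sketch of SI/TI as described in ITU-T Rec. P.910, assuming a list of
# 8-bit luma frames as 2-D NumPy arrays. Border handling, limited/full range
# and first-frame TI reporting are intentionally left out here.
import numpy as np
from scipy import ndimage

def si_ti(frames):
    si_values, ti_values = [], []
    prev = None
    for frame in frames:
        y = frame.astype(np.float64)
        # Spatial information: std. dev. of the Sobel gradient magnitude
        sobel_h = ndimage.sobel(y, axis=0)
        sobel_v = ndimage.sobel(y, axis=1)
        si_values.append(np.sqrt(sobel_h ** 2 + sobel_v ** 2).std())
        # Temporal information: std. dev. of the pixel-wise frame difference
        if prev is not None:
            ti_values.append((y - prev).std())
        prev = y
    # P.910 reports the maximum over time for both indicators
    ti = max(ti_values) if ti_values else None
    return max(si_values), ti
```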

Three main topics are discussed by VQEG in a series of online meetings:

  • Comparison of existing publicly available implementations for SI/TI: a comparison was made between several public open-source implementations for SI/TI, based on initial feedback from members of Facebook. Bugs and inconsistencies were identified in the handling of video frame borders, the treatment of limited vs. full range content, and the reporting of TI values for the first frame. Also, the lack of standardized test vectors was brought up as an issue. As a consequence, a new reference library was developed in Python by members of TU Ilmenau, incorporating all bug fixes that were previously identified and introducing a new test suite, to which the public is invited to contribute material. VQEG is now actively looking for specific test sequences that will be useful both for validating existing SI/TI implementations and for extending the scope of the metrics, which is related to the next issue described below.
  • Study on how to apply SI/TI on different content formats: the description of SI/TI was found not to be suitable for extended applications such as video with a higher bit depth (> 8 bits), HDR content, or spherical/3D video. Also, the question was raised on how to deal with the presence of scene changes in content. The community concluded that for content with higher bit depth, the SI/TI functions should be calculated as specified, but that the output values could be mapped back to the original 8-bit range to simplify comparisons. As for HDR, no conclusion was reached, given the inherent complexity of the subject. It was also preliminarily concluded that the treatment of scene changes should not be part of an SI/TI recommendation; the recommendation should instead focus on calculating SI/TI for short sequences without scene changes, since the handling of scene changes may depend on the final application of the metrics.
  • Discussion on other relevant uses of SI/TI: SI/TI have been widely used for checking video datasets in terms of diversity and for classifying content. They have also been used in some no-reference metrics as content features. The question was raised whether SI/TI could be used to predict how well content could be encoded. The group noted that different encoders would deal with sources differently, e.g. related to noise in the video. It was also noted that a metric purely related to the content, unaffected by encoding or representation, would be desirable.

As a first step, this revision of the topic of SI/TI has resulted in a harmonized implementation and in the identification of future application areas. Discussions on these topics will continue over the coming months through audio calls that are open to interested readers.

Video Quality Metadata Standard

Also within the NORM group, another topic was launched related to the inclusion of video quality metadata in compressed streams [3].

Almost all modern transcoding pipelines use full-reference video quality metrics to decide on the most appropriate encoding settings. The computation of these quality metrics is demanding in terms of time and computational resources. In addition, estimation errors propagate and accumulate when quality metrics are recomputed several times along the transcoding pipeline. Thus, retaining the results of these metrics with the video can alleviate these constraints, requiring very little space and providing a “greener” way of estimating video quality. With this goal, the new sub-project has started working towards the definition of a standard format to include video quality metrics metadata both at video bitstream level and system layer [4].

In this sense, the experts involved in the new sub-project are working on the following items:

  • Identification of existing proposals and working groups within other standardisation bodies and organisations that address similar topics, and proposal of amendments including new requirements. For example, MPEG has already worked on adding video quality metric metadata (e.g., PSNR, SSIM, MS-SSIM, VQM, PEVQ, MOS, FISG) at system level (e.g., in MPEG2 streams [5], HTTP [6], etc. [7]).
  • Identification of quality metrics to be considered in the standard. In principle, validated and standardized metrics are of interest, although other metrics can also be considered after a validation process on a standard set of subjective data (e.g., using existing datasets). Metrics that are new with respect to those used in previous approaches (e.g., VMAF [8], FB-MOS [9]) are of special interest.
  • Consideration of the computation of multiple generations of full-reference metrics at different steps of the transcoding chain, of the use of metrics at different resolutions, different spatio-temporal aggregation methods, etc.
  • Definition of a standard video quality metadata payload, including relevant fields such as metric name (e.g., “SSIM”), version (e.g., “v0.6.1”), raw score (e.g., “0.9256”), mapped-to-MOS score (e.g., “3.89”), scaling method (e.g., “Lanczos-5”), temporal reference (e.g., “0-3” frames), aggregation method (e.g., “arithmetic mean”), etc. [4].
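
As a purely illustrative example (not the standardized syntax, which is still under discussion), such a payload could carry fields along these lines:

```python
# Hypothetical quality metadata payload illustrating the fields listed above;
# field names and structure are placeholders, not the standardized syntax.
quality_metadata = {
    "metric": "SSIM",
    "version": "v0.6.1",
    "raw_score": 0.9256,
    "mapped_mos": 3.89,
    "scaling_method": "Lanczos-5",
    "temporal_reference": {"first_frame": 0, "last_frame": 3},
    "aggregation": "arithmetic mean",
}
```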

More details and information on how to join this activity can be found on the NORM webpage.

QoE metrics for live video streaming applications

The VQEG Audiovisual HD Quality (AVHD) group launched a new sub-project on QoE metrics for live media streaming applications (Live QoE) in the last VQEG meeting [10].

The success of a live multimedia streaming session is defined by the experience of the participating audience. Both the content communicated by the media and the quality at which it is delivered matter; for the same content, the quality delivered to the viewer is a differentiating factor. Live media streaming systems require large investments and operate under very tight service availability and latency constraints to support multimedia sessions for their audience. Both to measure the return on investment and to make sound investment decisions, it is paramount to be able to measure the media quality offered by these systems. Given the large scale and complexity of media streaming systems, objective metrics are needed to measure QoE.

Therefore, the following topics have been identified and are studied [11]:

  • Creation of a high quality dataset, including media clips and subjective scores, which will be used to tune, train and develop objective QoE metrics. This dataset should represent the conditions that take place in typical live media streaming situations; therefore, conditions and impairments affecting audio and video tracks (independently and jointly) will be considered. In addition, this dataset should cover a diverse set of content categories, including premium content (e.g., sports, movies, concerts, etc.) and user-generated content (e.g., music, gaming, real-life content, etc.).
  • Development of QoE objective metrics, especially focusing on no-reference or near-no-reference metrics, given the lack of access to the original video at various points in the live media streaming chain. Different types of models will be considered including signal-based (operate on the decoded signal), metadata-based (operate on available metadata, e.g. codecs, resolution, framerate, bitrate, etc.), bitstream-based (operate on the parsed bitstream), and hybrid models (combining signal and metadata) [12]. Also, machine-learning based models will be explored.
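
To make the metadata-based model class mentioned above concrete, the following toy sketch estimates a MOS-like score from bitrate, resolution and framerate only; the coefficients are illustrative placeholders and not the output of any VQEG or ITU-T model.

```python
# Toy metadata-based (no-reference) QoE estimate on a 1-5 MOS-like scale.
# All coefficients are illustrative placeholders, not values from any
# standardized or VQEG-developed model.
import math

def metadata_qoe_estimate(bitrate_kbps, height, framerate):
    # Bits per pixel as a crude proxy for encoding quality (16:9 assumed)
    pixels_per_second = height * height * (16 / 9) * framerate
    bits_per_pixel = bitrate_kbps * 1000 / pixels_per_second
    # Logistic mapping of bits-per-pixel to the 1-5 range
    return 1 + 4 / (1 + math.exp(-8 * (bits_per_pixel - 0.08)))

print(round(metadata_qoe_estimate(bitrate_kbps=6000, height=1080, framerate=30), 2))
```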

Certain challenges are envisioned when dealing with these two topics, such as separating “content” from “quality” (taking into account that content plays a big role in engagement and acceptability), the spectrum of expectations, the role of network impairments, and the collection of enough data to develop robust models [11]. Readers interested in joining this effort are encouraged to visit the AVHD webpage for more details.

Implementer’s Guide to Video Quality Metrics

In the last meeting, a new dedicated group on the Implementer’s Guide to Video Quality Metrics (IGVQM) was set up to introduce objective video quality metrics to the video compression community and to provide guidelines on implementing them.

During the development of new video coding standards, peak-signal-to-noise-ratio (PSNR) has traditionally been used as the main objective metric to determine which new coding tools to adopt. It has furthermore been used to establish the bitrate savings that a new coding standard offers over its predecessor through the so-called “BD-rate” metric [13], which still relies on PSNR for measuring quality.
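
For reference, below is a compact sketch of the BD-rate computation as it is commonly implemented (a cubic fit of log-rate over the overlapping quality range, following the approach of [13]); it assumes four rate-distortion points per curve and NumPy as the only dependency.

```python
# Sketch of the BD-rate computation following [13]: fit cubic polynomials of
# log-rate as a function of quality and compare their averages over the
# overlapping quality interval. Quality is traditionally PSNR, but any
# monotone objective metric can be substituted.
import numpy as np

def bd_rate(rates_ref, quality_ref, rates_test, quality_test):
    p_ref = np.polyfit(quality_ref, np.log(rates_ref), 3)
    p_test = np.polyfit(quality_test, np.log(rates_test), 3)
    # Integrate both fitted curves over the overlapping quality interval
    lo = max(min(quality_ref), min(quality_test))
    hi = min(max(quality_ref), max(quality_test))
    int_ref, int_test = np.polyint(p_ref), np.polyint(p_test)
    avg_ref = (np.polyval(int_ref, hi) - np.polyval(int_ref, lo)) / (hi - lo)
    avg_test = (np.polyval(int_test, hi) - np.polyval(int_test, lo)) / (hi - lo)
    # Average log-rate difference, expressed as a percentage
    return (np.exp(avg_test - avg_ref) - 1) * 100
```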

Although this choice was fully justified for the first image/video coding standards – JPEG (1992), MPEG1 (1994), MPEG2 (1996), JPEG2000 and even H.264/AVC (2004) – since there was simply no other alternative at that time, its continuing use for the development of H.265/HEVC (2013), VP9 (2013), AV1 (2018) and most recently EVC and VVC (2020) is questionable, given the rapid and continuous evolution of more perceptual image/video objective quality metrics, such as SSIM (2004) [14], MS-SSIM (2004) [15], and VMAF (2015) [8].

This project attempts to offer some guidance to the video coding community, including standards setting organisations, on how to better utilise existing objective video quality metrics to better capture the improvements offered by video coding tools. For this, the following goals have been envisioned:

  • Address video compression and scaling impairments only.
  • Explore and use “state-of-the-art” full-reference (pixel) objective metrics, examine applicability of no-reference objective metrics, and obtain reference implementations of them.
  • Offer temporal aggregation methods of image quality metrics into video quality metrics.
  • Present statistical analysis of existing subjective datasets, constraining them to compression and scaling artifacts.
  • Highlight differences among objective metrics and use-cases. For example, in case of very small differences, which metric is more sensitive? Which quality range is better served by what metric?
  • Offer standard logistic mappings of objective metrics to a normalised linear scale.
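
As an illustration of the last point, the sketch below fits the widely used 4-parameter logistic function mapping raw objective scores to MOS on a set of subjective data; the exact functional form to be recommended by IGVQM is still an open question.

```python
# Sketch of fitting a 4-parameter logistic mapping from raw objective scores
# to MOS; the functional form actually recommended by IGVQM may differ.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, b1, b2, b3, b4):
    return b2 + (b1 - b2) / (1 + np.exp(-(x - b3) / b4))

def fit_metric_to_mos(objective_scores, mos):
    objective_scores = np.asarray(objective_scores, dtype=float)
    mos = np.asarray(mos, dtype=float)
    p0 = [mos.max(), mos.min(), objective_scores.mean(), objective_scores.std() or 1.0]
    params, _ = curve_fit(logistic, objective_scores, mos, p0=p0, maxfev=10000)
    # Returns a callable that maps raw metric scores onto the MOS scale
    return lambda x: logistic(np.asarray(x, dtype=float), *params)
```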

More details can be found in the working document that has been set up to launch the project [16] and on the VQEG website.

References

[1] ITU-T Rec. P.910. Subjective video quality assessment methods for multimedia applications, 2008.
[2] M. H. Pinson and A. Webster, “T1A1 Validation Test Database,” VQEG eLetter, vol. 1, no. 2, 2015.
[3] I. Katsavounidis, “Video quality metadata in compressed bitstreams”, Presentation in VQEG Meeting, Dec. 2020.
[4] I. Katsavounidis et al., “A case for embedding video quality metrics as metadata in compressed bitstreams”, working document, 2019.
[5] ISO/IEC 13818-1:2015/AMD 6:2016 Carriage of Quality Metadata in MPEG2 Streams.
[6] ISO/IEC 23009 Dynamic Adaptive Streaming over HTTP (DASH).
[7] ISO/IEC 23001-10, MPEG Systems Technologies – Part 10: Carriage of timed metadata metrics of media in ISO base media file format.
[8] Toward a practical perceptual video quality metric, Tech blog with VMAF’s open sourcing on Github, Jun. 6, 2016.
[9] S.L. Regunathan, H. Wang, Y. Zhang, Y. R. Liu, D. Wolstencroft, S. Reddy, C. Stejerean, S. Gandhi, M. Chen, P. Sethi, A. Puntambekar, M. Coward, I. Katsavounidis, “Efficient measurement of quality at scale in Facebook video ecosystem”, in Applications of Digital Image Processing XLIII, vol. 11510, p. 115100J, Aug. 2020.
[10] R. Puri, “On a QoE metric for live media streaming applications”, Presentation in VQEG Meeting, Dec. 2020.
[11] R. Puri and S. Satti, “On a QoE metric for live media streaming applications”, working document, Jan. 2021.
[12] A. Raake, S. Borer, S. Satti, J. Gustafsson, R.R.R. Rao, S. Medagli, P. List, S. Göring, D. Lindero, W. Robitza, G. Heikkilä, S. Broom, C. Schmidmer, B. Feiten, U. Wüstenhagen, T. Wittmann, M. Obermann, R. Bitto, “Multi-model standard for bitstream-, pixel-based and hybrid video quality assessment of UHD/4K: ITU-T P.1204” , IEEE Access, vol. 8, Oct. 2020.
[13] G. Bjøntegaard, “Calculation of Average PSNR Differences Between RD-Curves”, Document VCEG-M33, ITU-T SG 16/Q6, 13th VCEG Meeting, Austin, TX, USA, Apr. 2001.
[14] Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” in IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004.
[15] Z. Wang, E. P. Simoncelli and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 2003.
[16] I. Katsavounidis, “VQEG’s Implementer’s Guide to Video Quality Metrics (IGVQM) project”, working document, 2021.

MPEG Column: 133rd MPEG Meeting (virtual/online)

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The 133rd MPEG meeting was once again held as an online meeting and, this time, kicked off with the great news that MPEG is one of the organizations honored as a 72nd Annual Technology & Engineering Emmy® Awards recipient, specifically the MPEG Systems File Format Subgroup and its ISO Base Media File Format (ISOBMFF).

The official press release can be found here and comprises the following items:

  • 6th Emmy® Award for MPEG Technology: MPEG Systems File Format Subgroup wins Technology & Engineering Emmy® Award
  • Essential Video Coding (EVC) verification test finalized
  • MPEG issues a Call for Evidence on Video Coding for Machines
  • Neural Network Compression for Multimedia Applications – MPEG calls for technologies for incremental coding of neural networks
  • MPEG Systems reaches the first milestone for supporting Versatile Video Coding (VVC) and Essential Video Coding (EVC) in the Common Media Application Format (CMAF)
  • MPEG Systems continuously enhances Dynamic Adaptive Streaming over HTTP (DASH)
  • MPEG Systems reached the first milestone to carry event messages in tracks of the ISO Base Media File Format

In this report, I’d like to focus on ISOBMFF, EVC, CMAF, and DASH.

MPEG Systems File Format Subgroup wins Technology & Engineering Emmy® Award

MPEG is pleased to report that the File Format subgroup of MPEG Systems is being recognized this year by the National Academy of Television Arts and Sciences (NATAS) with a Technology & Engineering Emmy® for their 20 years of work on the ISO Base Media File Format (ISOBMFF). This format was first standardized in 1999 as part of the MPEG-4 Systems specification and is now in its 6th edition as ISO/IEC 14496-12. It has been used and adopted by many other specifications, e.g.:

  • MP4 and 3GP file formats;
  • Carriage of NAL unit structured video in the ISO Base Media File Format, which provides support for AVC, HEVC, VVC, EVC, and probably soon LCEVC;
  • MPEG-21 file format;
  • Dynamic Adaptive Streaming over HTTP (DASH) and Common Media Application Format (CMAF);
  • High-Efficiency Image Format (HEIF);
  • Timed text and other visual overlays in ISOBMFF;
  • Common encryption format;
  • Carriage of timed metadata metrics of media;
  • Derived visual tracks;
  • Event message track format;
  • Carriage of uncompressed video;
  • Omnidirectional Media Format (OMAF);
  • Carriage of visual volumetric video-based coding data;
  • Carriage of geometry-based point cloud compression data;
  • … to be continued!

This is MPEG’s fourth Technology & Engineering Emmy® Award (after MPEG-1 and MPEG-2 together with JPEG in 1996, Advanced Video Coding (AVC) in 2008, and MPEG-2 Transport Stream in 2013) and sixth overall Emmy® Award including the Primetime Engineering Emmy® Awards for Advanced Video Coding (AVC) High Profile in 2008 and High-Efficiency Video Coding (HEVC) in 2017, respectively.

Essential Video Coding (EVC) verification test finalized

At the 133rd MPEG meeting, a verification testing assessment of the Essential Video Coding (EVC) standard was completed. The first part of the EVC verification test using high dynamic range (HDR) and wide color gamut (WCG) was completed at the 132nd MPEG meeting. A subjective quality evaluation was conducted comparing the EVC Main profile to the HEVC Main 10 profile and the EVC Baseline profile to AVC High 10 profile, respectively:

  • Analysis of the subjective test results showed that the average bitrate savings for EVC Main profile are approximately 40% compared to HEVC Main 10 profile, using UHD and HD SDR content encoded in both random access and low delay configurations.
  • The average bitrate savings for the EVC Baseline profile compared to the AVC High 10 profile is approximately 40% using UHD SDR content encoded in the random-access configuration and approximately 35% using HD SDR content encoded in the low delay configuration.
  • Verification test results using HDR content showed average bitrate savings for the EVC Main profile of approximately 35% compared to the HEVC Main 10 profile.

By providing significantly improved compression efficiency compared to HEVC and earlier video coding standards while encouraging the timely publication of licensing terms, the MPEG-5 EVC standard is expected to meet the market needs of emerging delivery protocols and networks, such as 5G, enabling the delivery of high-quality video services to an ever-growing audience. 

In addition to the verification tests, EVC, along with VVC and CMAF, saw further improvements to its systems support, as described below.

Research aspects: as for every new video codec, its compression efficiency and computational complexity are important performance metrics. Additionally, the availability of (efficient) open-source implementations (e.g., x264, x265, soon x266, VVenC, aomenc, etc.) is vital for its adoption in the (academic) research community.

MPEG Systems reaches the first milestone for supporting Versatile Video Coding (VVC) and Essential Video Coding (EVC) in the Common Media Application Format (CMAF)

At the 133rd MPEG meeting, MPEG Systems promoted Amendment 2 of the Common Media Application Format (CMAF) to Committee Draft Amendment (CDAM) status, the first major milestone in the ISO/IEC approval process. This amendment defines:

  • constraints to (i) Versatile Video Coding (VVC) and (ii) Essential Video Coding (EVC) video elementary streams when carried in a CMAF video track;
  • codec parameters to be used for CMAF switching sets with VVC and EVC tracks; and
  • support of the newly introduced MPEG-H 3D Audio profile.

It is expected to reach its final milestone in early 2022. For research aspects related to CMAF, the reader is referred to the next section about DASH.

MPEG Systems continuously enhances Dynamic Adaptive Streaming over HTTP (DASH)

At the 133rd MPEG meeting, MPEG Systems promoted Part 8 of Dynamic Adaptive Streaming over HTTP (DASH) also referred to as “Session-based DASH” to its final stage of standardization (i.e., Final Draft International Standard (FDIS)).

Historically, in DASH, every client uses the same Media Presentation Description (MPD), as it best serves the scalability of the service. However, there have been increasing requests from the industry to enable customized manifests for enabling personalized services. MPEG Systems has standardized a solution to this problem without sacrificing scalability. Session-based DASH adds a mechanism to the MPD to refer to another document, called Session-based Description (SBD), which allows per-session information. The DASH client can use this information (i.e., variables and their values) provided in the SBD to derive the URLs for HTTP GET requests.
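
The following sketch illustrates the idea only; the variable names and template syntax are hypothetical and not the normative MPD/SBD syntax defined in ISO/IEC 23009-8.

```python
# Purely illustrative: a client substitutes per-session variables from an
# SBD-like document into a URL template from the MPD. Names and template
# syntax are hypothetical, not the normative syntax of ISO/IEC 23009-8.
from string import Template

mpd_segment_template = Template("https://cdn.example.com/video/$sessionId/seg-$number.m4s")
sbd_variables = {"sessionId": "abc123"}  # per-session values; the MPD itself stays shared

def segment_url(number: int) -> str:
    return mpd_segment_template.substitute(number=number, **sbd_variables)

print(segment_url(42))
```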

An updated overview of DASH standards/features can be found in the Figure below.

MPEG DASH Status as of January 2021.

Research aspects: CMAF is most likely becoming the main segment format to be used in the context of HTTP adaptive streaming (HAS) and, thus, also DASH (hence the name common media application format). Supporting a plethora of media coding formats will inevitably result in a multi-codec dilemma to be addressed in the near future, as there will be no flag day on which everyone switches to a new coding format. Thus, designing efficient bitrate ladders for multi-codec delivery will be an interesting research aspect, which needs to take into account device/player support (i.e., some devices/players will support only a subset of the available codecs), storage capacity/costs within the cloud as well as within the delivery network, and network distribution capacity/costs (i.e., CDN costs).

The 134th MPEG meeting will be again an online meeting in April 2021. Click here for more information about MPEG meetings and their developments.

JPEG Column: 90th JPEG Meeting

JPEG AI becomes a new work item of ISO/IEC

The 90th JPEG meeting was held online from 18 to 22 January 2021. This meeting was distinguished by very relevant activities, notably the new JPEG AI standardization project planning, and the analysis of the Call for Evidence on JPEG Pleno Point Cloud Coding.

The new JPEG AI Learning-based Image Coding System has become an official new work item, registered under ISO/IEC 6048, and aims at providing compression efficiency in addition to supporting image processing and computer vision tasks without the need for decompression.

The response to the Call for Evidence on JPEG Pleno Point Cloud Coding was a learning-based method that was found to offer state of the art compression efficiency.  Considering this response, the JPEG Pleno Point Cloud activity will analyse the possibility of preparing a future call for proposals on learning-based coding solutions that will also consider new functionalities, building on the relevant use cases already identified that require machine learning tasks processed in the compressed domain.

Meanwhile, the new JPEG XL coding system has reached the FDIS stage and is ready for adoption. JPEG XL offers compression efficiency on par with the best state of the art in image coding, leading lossless compression performance, affordable low complexity, and integration with the legacy JPEG image coding standard, allowing a friendly transition between the two standards.

The new JPEG AI logo.

The 90th JPEG meeting had the following highlights:

  • JPEG AI,
  • JPEG Pleno Point Cloud response to the Call for Evidence,
  • JPEG XL Core Coding System reaches FDIS stage,
  • JPEG Fake Media exploration,
  • JPEG DNA continues the exploration on image coding suitable for DNA storage,
  • JPEG systems,
  • JPEG XS 2nd edition of Profiles reaches DIS stage.

JPEG AI

The scope of JPEG AI is the creation of a learning-based image coding standard offering a single-stream, compact compressed-domain representation, targeting both human visualization, with significant compression efficiency improvement over commonly used image coding standards at equivalent subjective quality, and effective performance for image processing and computer vision tasks, with the goal of supporting a royalty-free baseline.

JPEG AI made several advances during the 90th meeting: the JPEG AI Use Cases and Requirements were discussed and collaboratively defined, and the JPEG AI vision and the overall system framework of an image compression solution with an efficient compressed-domain representation were defined. Following this approach, a set of exploration experiments was defined to assess the capabilities of the compressed representation generated by learning-based image codecs, considering some specific computer vision and image processing tasks.

Moreover, the performance assessment of the most popular objective quality metrics, using subjective scores obtained during the Call for Evidence, was discussed, as well as anchors and some techniques to perform spatial prediction and entropy coding.

JPEG Pleno Point Cloud response to the Call for Evidence

JPEG Pleno is working towards the integration of various modalities of plenoptic content under a single and seamless framework. Efficient and powerful point cloud representation is a key feature within this vision. Point cloud data supports a wide range of applications including computer-aided manufacturing, entertainment, cultural heritage preservation, scientific research and advanced sensing and analysis. During the 90th JPEG meeting, the JPEG Committee reached an exciting major milestone and reviewed the results of its Final Call for Evidence on JPEG Pleno Point Cloud Coding. With an innovative Deep Learning based point cloud codec supporting scalability and random access submitted, the Call for Evidence results highlighted the emerging role of Deep Learning in point cloud representation and processing. Between the 90th and 91st meetings, the JPEG Committee will be refining the scope and direction of this activity in light of the results of the Call for Evidence.

JPEG XL Core Coding System reaches FDIS stage

The JPEG Committee has finalized JPEG XL Part 1 (Core Coding System), which is now at FDIS stage. The committee has defined new core experiments to determine appropriate profiles and levels for the codec, as well as appropriate criteria for defining conformance. With Part 1 complete, and Part 2 close to completion, JPEG XL is ready for evaluation and adoption by the market.

JPEG Fake Media exploration

The JPEG Committee initiated the JPEG Fake Media exploration study with the objective to create a standard that can facilitate the secure and reliable annotation of media asset generation and modifications. The initiative aims to support usage scenarios that are in good faith as well as those with malicious intent. During the 90th JPEG meeting, the committee released a new version of the document entitled “JPEG Fake Media: Context, Use Cases and Requirements”, which is available on the JPEG website. A first workshop on the topic was organized on the 15th of December 2020. The program, presentations and a video recording of this workshop are available on the JPEG website. A second workshop will be organized around March 2021. More details will be made available soon on JPEG.org. JPEG invites interested parties to regularly visit https://jpeg.org/jpegfakemedia for the latest information and to subscribe to the mailing list via http://listregistration.jpeg.org.

JPEG DNA continues the exploration on image coding suitable for DNA storage

The JPEG Committee continued its exploration for coding of images in quaternary representation, particularly suitable for DNA storage. After a second successful workshop with presentations by stakeholders, additional requirements were identified, and a new version of the JPEG DNA overview document was issued and made publicly available. It was decided to continue this exploration by organising a third workshop and further outreach to stakeholders, as well as a proposal for an updated version of the JPEG DNA overview document. Interested parties are invited to refer to the following URL and to consider joining the effort by registering to the mailing list of JPEG DNA here: https://jpeg.org/jpegdna/index.html.

JPEG Systems

JUMBF (ISO/IEC 19566-5) Amendment 1 draft review is complete, and it is proceeding to International Standard and subsequent publication; additional features to support new applications are under consideration. Likewise, JPEG 360 (ISO/IEC 19566-6) Amendment 1 draft review is complete, and it is proceeding to International Standard and subsequent publication. The JLINK (ISO/IEC 19566-7) standard completed the committee draft review and a DIS study text is being prepared ahead of the 91st meeting. JPEG Snack (ISO/IEC 19566-8) will proceed to a second working draft. Interested parties can subscribe to the mailing list of the JPEG Systems AHG in order to contribute to the above activities.

JPEG XS 2nd edition of Profiles reaches DIS stage

The 2nd edition of Part 2 (Profiles) is now at the DIS stage and defines the required new profiles and levels to support the compression of raw Bayer content, mathematically lossless coding of up to 12-bit per component images, and 4:2:0 sampled image content. With the second editions of Parts 1, 2, and 3 completed, and the scheduled second editions of Part 4 (Conformance) and 5 (Reference Software), JPEG XS will soon have received a complete backwards-compatible revision of its entire suite of standards. Moreover, the committee defined a new exploration study to create new coding tools for improving the HDR and mathematically lossless compression capabilities, while still honoring the low-complexity and low-latency requirements.

Final Quote

“The official approval of JPEG AI by JPEG Parent Bodies ISO and IEC is a strong signal of support of this activity and its importance in the creation of AI-based imaging applications” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

Future JPEG meetings are planned as follows:

  • No. 91 will be held online from April 19 to 23, 2021.
  • No. 92 will be held online from July 7 to 13, 2021.

ITU-T Standardization Activities Targeting Gaming Quality of Experience

Motivation for Research in the Gaming Domain

The gaming industry has eminently managed to intrinsically motivate users to interact with their services. According to the latest report of Newzoo, there will be an estimated total of 2.7 billion players across the globe by the end of 2020. The global games market will generate revenues of $159.3 billion in 2020 [1]. This surpasses the movie industry (box offices and streaming services) by a factor of four and almost three times the music industry market in value [2].

The rapidly growing domain of online gaming emerged in the late 1990s and early 2000s, allowing social relatedness to a great number of players. During traditional online gaming, typically, the game logic and the game user interface are locally executed and rendered on the player’s hardware. The client device is connected via the internet to a game server to exchange information influencing the game state, which is then shared and synchronized with all other players connected to the server. However, in 2009 a new concept called cloud gaming emerged that is comparable to the rise of Netflix for video consumption and Spotify for music consumption. In contrast to traditional online gaming, cloud gaming is characterized by the execution of the game logic, rendering of the virtual scene, and video encoding on a cloud server, while the player’s client is solely responsible for video decoding and capturing of client input [3].

For online gaming and cloud gaming services, in contrast to applications such as voice, video, and web browsing, little information existed on the factors influencing the Quality of Experience (QoE) of online video games, on subjective methods for assessing gaming QoE, or on instrumental prediction models to plan and manage QoE during service set-up and operation. For this reason, Study Group (SG) 12 of the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T) decided to work on these three interlinked research tasks [4]. This was especially required since the evaluation of gaming applications is fundamentally different from that of task-oriented human-machine interactions. Traditional aspects such as effectiveness and efficiency as part of usability cannot be directly applied to gaming applications: a game without any challenges, in which time simply passes, would result in boredom and, thus, a bad player experience (PX). The absence of standardized assessment methods, as well as of knowledge about the quantitative and qualitative impact of influence factors, resulted in a situation where many researchers tended to use their own self-developed research methods. This makes collaborative work through reliable, valid, and comparable research very difficult. Therefore, the aim of this report is to provide an overview of the achievements reached by ITU-T standardization activities targeting gaming QoE.

Theory of Gaming QoE

As a basis for the gaming research carried out, a taxonomy of gaming QoE aspects was proposed by Möller et al. in 2013 [5]. The taxonomy is divided into two layers, of which the top layer contains various influencing factors grouped into user (also human), system (also content), and context factors. The bottom layer consists of game-related aspects including hedonic concepts such as appeal, pragmatic concepts such as learnability and intuitivity (part of playing quality, which can be considered as a kind of game usability), and finally, the interaction quality. The latter is composed of output quality (e.g., audio and video quality), as well as input quality and interactive behaviour. Interaction quality can be understood as the playability of a game, i.e., the degree to which all functional and structural elements of a game (hardware and software) enable a positive PX. The second part of the bottom layer summarizes concepts related to PX, such as immersion (see [6]), positive and negative affect, as well as the well-known concept of flow, which describes an equilibrium between requirements (i.e., challenges) and abilities (i.e., competence). Consequently, based on the theory depicted in the taxonomy, the question arises which of these aspects are relevant (i.e., dominant), how they can be assessed, and to which extent they are impacted by the influencing factors.

Fig. 1: Taxonomy of gaming QoE aspects. Upper panel: Influence factors and interaction performance aspects; lower panel: quality features (cf. [5]).

Introduction to Standardization Activities

Building upon this theory, SG 12 of the ITU-T decided during the 2013-2016 Study Period to start work on three new work items called P.GAME, G.QoE-gaming, and G.OMG. There are also other related activities at the ITU-T, summarized in Fig. 2, concerning evaluation methods (P.CrowdG) and gaming QoE modelling (G.OMMOG and P.BBQCG).

Fig. 2: Overview of ITU-T SG12 recommendations and on-going work items related to gaming services.

The efforts on the three initial work items continued during the 2017-2020 Study Period resulting in the recommendations G.1032, P.809, and G.1072, for which an overview will be given in this section.

ITU-T Rec. G.1032 (G.QoE-gaming)

ITU-T Rec. G.1032 aims at identifying the factors which potentially influence gaming QoE. For this purpose, the Recommendation provides an overview table and then roughly classifies the influence factors into (A) human, (B) system, and (C) context influence factors. This classification is based on [7] but is now detailed with respect to cloud and online gaming services. Furthermore, the Recommendation considers whether an influencing factor carries an influence mainly in a passive viewing-and-listening scenario, in an interactive online gaming scenario, or in an interactive cloud gaming scenario. This classification helps evaluators decide which type of impact may be evaluated with which type of test paradigm [4]. An overview of the influencing factors identified in ITU-T Rec. G.1032 is presented in Fig. 3. For subjective user studies, in most cases the human and context factors should be controlled and their influence should be reduced as much as possible. For example, even though multiplayer gaming might be a highly impactful aspect of today’s gaming domain, within the scope of the ITU-T cloud gaming modelling activities only single-player user studies are conducted, to reduce the impact of social aspects which are very difficult to control. On the other hand, as network operators and service providers are the intended stakeholders of gaming QoE models, the relevant system factors must be included in the development process of the models, in particular the game content as well as network and encoding parameters.

Fig. 3: Overview of influencing factors on gaming QoE summarized in ITU-T Rec. G.1032 (cf. [3]).

ITU-T Rec. P.809 (P.GAME)

The aim of ITU-T Rec. P.809 is to describe subjective evaluation methods for gaming QoE. Since there is no single standardized evaluation method available that would cover all aspects of gaming QoE, the Recommendation mainly summarizes the state of the art of subjective evaluation methods in order to help choose suitable methods for conducting subjective experiments, depending on the purpose of the experiment. In its main body, the Recommendation consists of five parts: (A) definitions for games considered in the Recommendation, (B) definitions of QoE aspects relevant in gaming, (C) a description of test paradigms, (D) a description of the general experimental set-up, with recommendations regarding passive viewing-and-listening tests and interactive tests, and (E) a description of questionnaires to be used for gaming QoE evaluation. It is amended by two paragraphs regarding performance and physiological response measurements and by (non-normative) appendices illustrating the questionnaires, as well as an extensive list of literature references [4].

Fundamentally, the ITU-T Rec. P.809 defines two test paradigms to assess gaming quality:

  • Passive tests with predefined audio-visual stimuli passively observed by a participant.
  • Interactive tests with game scenarios interactively played by a participant.

The passive paradigm can be used for gaming quality assessment when the impairment does not influence the interaction of players. This method suggests a short stimulus duration of 30 s, which allows investigating a great number of encoding conditions while reducing the influence of user behaviour on the stimulus due to the absence of interaction. Even for passive tests, as the subjective ratings will be merged with those derived from interactive tests for QoE model development, it is recommended to give instructions about the game rules and objectives so that participants have similar knowledge of the game. The instructions should also explain the difference between video quality and graphics quality (e.g., graphical details such as abstract vs. realistic graphics), as confusing the two is one of the common mistakes of participants in video quality assessment of gaming content.

The interactive test should be used when other quality features such as interaction quality, playing quality, immersion, and flow are under investigation. While a duration of 90 s is proposed for interaction quality, a longer duration of 5-10 min is suggested for research targeting engagement-related concepts such as flow. Finally, the recommendation provides information about the selection of game scenarios as stimulus material for both test paradigms, e.g., the ability to provide repeatable scenarios, balanced difficulty, scenes that are representative in terms of encoding complexity, and the avoidance of ethically questionable content.

ITU-T Rec. G.1072 (G.OMG)

The quality management of gaming services requires quantitative prediction models. Such models should be able to predict either the “overall quality” (e.g., in terms of a Mean Opinion Score) or individual QoE aspects from characteristics of the system, potentially considering the player characteristics and the usage context. ITU-T Rec. G.1072 aims at the development of quality models for cloud gaming services based on the impact of impairments introduced by typical Internet Protocol (IP) networks on the quality experienced by players. G.1072 is a network planning tool that estimates gaming QoE based on assumptions about network and encoding parameters as well as the game content.

The impairment factors are derived from subjective ratings of the corresponding quality aspects, e.g., spatial video quality or interaction quality, and modelled by non-linear curve fitting. For the prediction of the overall score, linear regression is used. To create the impairment factors and regression, a data transformation from the MOS values of each test condition to the R-scale was performed, similar to the well-known E-model [8]. The R-scale, which results from an s-shaped conversion of the MOS scale, promises benefits regarding the additivity of the impairments and compensation for the fact that participants tend to avoid using the extremes of rating scales [3].
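
As an illustration, the sketch below shows an E-model style conversion between the R-scale and MOS (cf. [8]); the exact transformation used in G.1072 may differ in its details.

```python
# E-model style conversion between the R-scale and MOS (cf. ITU-T G.107 [8]);
# the exact scaling used in G.1072 may differ.
import numpy as np
from scipy.optimize import brentq

def r_to_mos(r):
    r = float(np.clip(r, 0.0, 100.0))
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

def mos_to_r(mos):
    # Numerical inverse, used when mapping subjective MOS onto the R-scale
    # before fitting additive impairment factors.
    mos = float(np.clip(mos, 1.0, 4.5))
    if mos >= 4.5:
        return 100.0
    if mos <= 1.0:
        return 0.0
    return brentq(lambda r: r_to_mos(r) - mos, 0.0, 100.0)
```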

As the impact of the input parameters, e.g., delay, was shown to be highly content-dependent, the model offers two modes of operation. If no assumption about a game’s sensitivity towards degradations is available to the user of the model (e.g., a network provider), the “default” mode of operation should be used, which assumes the highest-sensitivity game class. The “default” mode of operation will result in a pessimistic quality prediction for games that are not of high complexity and sensitivity. If the user of the model can make an assumption about the game class (e.g., a service provider), the “extended” mode can predict the quality with a higher degree of accuracy based on the assigned game classes.

On-going Activities

While the three recommendations provide a basis for researchers, as well as network operators and cloud gaming service providers towards improving gaming QoE, the standardization activities continue by initiating new work items focusing on QoE assessment methods and gaming QoE model development for cloud gaming and online gaming applications. Thus, three work items have been established within the past two years.

ITU-T P.BBQCG

P.BBQCG is a work item that aims at the development of a bitstream model predicting cloud gaming QoE. The model will benefit from bitstream information, from the header and payload of packets, to reach a higher accuracy of audiovisual quality prediction compared to G.1072. In addition, three different types of codecs and a wider range of network parameters will be considered to develop a generalizable model. The model will be trained and validated for the H.264, H.265, and AV1 video codecs and video resolutions up to 4K. For the development of the model, both the passive and the interactive paradigm will be followed. The passive paradigm will be used to cover a wide range of encoding parameters, while the interactive paradigm will cover the network parameters that might strongly influence the interaction of players with the game.

ITU-T P.CrowdG

A gaming QoE study is per se a challenging task due to the multidimensionality of the QoE concept and the large number of influence factors. It becomes even more challenging if the test follows a crowdsourcing approach, which is of particular interest in times of the COVID-19 pandemic or when subjective ratings are required from a highly diverse audience, e.g., for the development or investigation of questionnaires. The aim of the P.CrowdG work item is to develop a framework that describes the best practices and guidelines to be considered for gaming QoE assessment using a crowdsourcing approach. In particular, the crowd gaming framework provides the means to ensure reliable and valid results despite the absence of an experimenter, a controlled network, and visual observation of test participants. In addition to the framework, guidelines will be given to ensure the collection of valid and reliable results, addressing issues such as how to make sure that workers put enough focus on the gaming and rating tasks. While a possible framework for interactive tests of simple web-based games is already presented in [9], more work is required to complete the ITU-T work item for more advanced setups and passive tests.

ITU-T G.OMMOG

G.OMMOG is a work item that focuses on the development of an opinion model predicting gaming Quality of Experience (QoE) for mobile online gaming services. The work item is a possible extension of ITU-T Rec. G.1072. In contrast to G.1072, the games are not executed on a cloud server but on a gaming server that exchanges game states with the users’ clients instead of a video stream. This more traditional gaming concept represents a very popular service, especially considering multiplayer gaming such as recently published AAA titles of the Multiplayer Online Battle Arena (MOBA) and battle royale genres.

So far, it has been decided to follow a similar model structure to ITU-T Rec. G.1072. However, the component of spatial video quality, which was a major part of G.1072, will be removed, and the corresponding game type information will not be used. In addition, for the development of the model, it was decided to investigate the impact of variable delay and packet loss bursts, especially as their interaction can have a high impact on gaming QoE. It is assumed that more variability of these factors and their interplay will weaken the error handling of mobile online gaming services. Due to missing information on the server caused by packet loss or large delays, the gameplay is assumed to become less smooth (in the gaming domain, this is called ‘rubber banding’), which will lead to reduced temporal video quality.

About ITU-T SG12

ITU-T Study Group 12 is the expert group responsible for the development of international standards (ITU-T Recommendations) on performance, quality of service (QoS), and quality of experience (QoE). This work spans the full spectrum of terminals, networks, and services, ranging from speech over fixed circuit-switched networks to multimedia applications over mobile and packet-based networks.

In this article, the previous achievements of ITU-T SG12 with respect to gaming QoE have been described. The focus was in particular on subjective assessment methods, influencing factors, and the modelling of gaming QoE. We hope that this information will significantly improve the work and research in this domain by enabling more reliable, comparable, and valid findings. Lastly, the report also points out many on-going activities in this rapidly changing domain, in which everyone is gladly invited to participate.

More information about the SG12, which will host its next E-meeting from 4-13 May 2021, can be found at ITU Study Group (SG) 12.

For more information about the gaming activities described in this report, please contact Sebastian Möller (sebastian.moeller@tu-berlin.de).

Acknowledgement

The authors would like to thank all colleagues of ITU-T Study Group 12, as well as of the Qualinet gaming Task Force, for their support. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871793 and No 643072 as well as by the German Research Foundation (DFG) within project MO 1038/21-1.

References

[1] T. Wijman, The World’s 2.7 Billion Gamers Will Spend $159.3 Billion on Games in 2020; The Market Will Surpass $200 Billion by 2023, 2020.

[2] S. Stewart, Video Game Industry Silently Taking Over Entertainment World, 2019.

[3] S. Schmidt, Assessing the Quality of Experience of Cloud Gaming Services, Ph.D. dissertation, Technische Universität Berlin, 2021.

[4] S. Möller, S. Schmidt, and S. Zadtootaghaj, “New ITU-T Standards for Gaming QoE Evaluation and Management”, in 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX), IEEE, 2018.

[5] S. Möller, S. Schmidt, and J. Beyer, “Gaming Taxonomy: An Overview of Concepts and Evaluation Methods for Computer Gaming QoE”, in 2013 Fifth International Workshop on Quality of Multimedia Experience (QoMEX), IEEE, 2013.

[6] A. Perkis and C. Timmerer, Eds., QUALINET White Paper on Definitions of Immersive Media Experience (IMEx), European Network on Quality of Experience in Multimedia Systems and Services, 14th QUALINET meeting, 2020.

[7] P. Le Callet, S. Möller, and A. Perkis, Eds, Qualinet White Paper on Definitions of Quality of Experience, COST Action IC 1003, 2013.

[8] ITU-T Recommendation G.107, The E-model: A Computational Model for Use in Transmission Planning. Geneva: International Telecommunication Union, 2015.

[9] S. Schmidt, B. Naderi, S. S. Sabet, S. Zadtootaghaj, and S. Möller, “Assessing Interactive Gaming Quality of Experience Using a Crowdsourcing Approach”, in 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), IEEE, 2020.

Multidisciplinary Column: An Interview with Alex Thayer

Alex, could you tell us a bit about your background, and what the road to your current position was?

Profile picture of Alex Thayer, PhD

Alex Thayer, PhD. Head of Research, Amazon (Search); Affiliate Assistant Professor, University of Washington

Sure! I began my career in the tech industry in 1998, when I interned at the IBM Silicon Valley Lab in San Jose, California. Back then it was called the Santa Teresa Lab, and I completed a year-long internship because I wanted to get a richer professional experience than a single school quarter would provide. I also wanted to find an internship at a company that future employers would recognize when they saw my resume. 

At the time, I thought about my career as a narrative that would span decades: What story would I want to tell about my employment history 20 or 30 years later? In a sense, each job would become a “chapter” in that story. As I have learned over the years, this metaphor holds up and each chapter has a slightly different theme: from drama to comedy to Greek tragedy. After about 13 different tech industry jobs, I think I’ve got a lot of genres covered. 

After the year at IBM, I returned to Seattle and spent another year completing my degrees in Technical Communication (College of Engineering) and Art History (College of Art). After graduation, I focused on building my career as a technical writer. I worked at a voice recognition startup, then at a consulting firm, and I wound up doing a lot of “UX work” that was not quite codified into specific roles yet. For example, in a typical week I might work on the design of a UI component, rewrite the Javascript for a website, change the physical layout of a printed user manual, and write copy for a tutorial. I went back to the University of Washington in 2002 to get a Master of Science degree in the Technical Communication program, and to try teaching courses at the college level. 

Eventually I began working full-time at Microsoft in 2006. It was during my time there when I realized technical writing was not my passion. I decided to “adjust my career narrative” and shift toward UX design and research. I was able to make that happen partly because I worked on a cross-disciplinary team at Microsoft: We had interaction design, industrial design, user research, and content publishing included in the same team. I worked on software and hardware projects in a variety of capacities. For one project, I helped design the physical product packaging; on another project, I collaborated with my teammates on the vision for an adaptive keyboard. 

Eventually I hit the limits of what I could do professionally without returning to school and advancing my knowledge about people and their practices. I returned to the University of Washington and spent 4 years working on my PhD in Human Centered Design & Engineering. I moved with my family to the Bay Area in California near the conclusion of my PhD work, and I looked for a role with a focus on emerging technology and interfaces. I found that role at Intel, where I stayed for a year and a half before shifting to a very different research role at VMware. When an opportunity to work at HP Labs arose, I decided to make another career move after a year and a half. It was never my intention to work for different companies so quickly, but I thought about the career narrative perspective and the story I wanted to tell. That perspective helped me make my decision to change roles and work at HP.

What is the professional role of interdisciplinarity in your experience?

Because I have an interdisciplinary skill set, I have discovered that it can be tricky to find a job! As a “T-shaped” person, it’s not always easy to know how to bring my full set of skills to a specific role or organization. In my experience, companies are looking for experts who can go deep in a particular area, but who can also span a variety of topics and skills as needed. In practice, this means collaborating with colleagues who have an assortment of technical backgrounds and methodologies. In a typical week at my current role, I engage with product managers, designers, design technologists, business leaders, engineers, economists, and scientists. All of these roles have different requirements and dialects, which means I am constantly surrounded by “interdisciplinarity,” if that makes sense!

Also, because of my academic research focus on how people collaborate, it’s hard for me to imagine a world without “interdisciplinarity.” That’s how I think about the “role” of interdisciplinarity: It’s more of a fabric or texture that underpins the teams on which I work. And as a leader, I need to consider how different members of a team or organization come together and bring their unique skills and backgrounds to bear on the tasks at hand. 

As a tangible example, we had a terrific undergraduate intern at HP who was working on Computer Science and Humanities degrees at Stanford. His approach to his education resonated with me since I had taken a similar Engineering/Arts path in my own undergrad education. It was fun to watch him apply his thought processes and knowledge on a team of senior engineers, designers, and researchers. I believe he was successful in his intern role because he could reframe problems or goals in creative ways.  

In 2012, you successfully defended your dissertation on “Understanding University Students’ Use of Tools and Artifacts in Support of Collaborative Project Work”. Almost a decade later: what are your thoughts on today’s use of (multimedia) tools and devices at a university level? 

This is a great segue from the question about interdisciplinarity and collaboration! 

As a social scientist, I am excited to see how new tools and processes “come with” students as they graduate and enter the workforce. The space of design prototyping is evolving rapidly, for example, as recent grads expect to use the same tools on the job that they learned how to use while in school. My role at HP included people management, and I had a number of conversations about how to get access to the specific software and hardware tools that employees needed to achieve their vision. Some of these discussions were easy: one of my colleagues asked if he could buy an iron and an ironing board, for example. I said yes. Other discussions required more planning, like when our team wanted to purchase a laser cutter. So perhaps I am taking this question in an unexpected direction, but I do see an opportunity to bridge a gap between the tools and devices in use at the university level and the availability of those same tools and devices in industry.

To be honest, I have a lot to learn about how students are doing their work today. It’s been several years since I finished my PhD. I spent an entire academic quarter observing a class of advanced design students. When I think about how they were doing their project work nearly a decade ago, and when I think about how I saw students working at Yale a couple of years ago, it’s easy for me to see the advances in technology. Or when we took a trip to Wellesley a few years ago, I watched my young daughter play with the VR headsets and try her hand at archaeology. And yet we still love whiteboards and paper! Once university students are able to safely return to in-person learning, I’m sure we will keep using whiteboards and paper as two of our main tools for learning and collaboration.

Looking at your impressive set of published patents: your inventions draw from and actually span many different disciplines. 

Thanks! All of those patents represent the work of teams: I have been lucky to have worked with amazing people who, quite frankly, did the hard work to make those patents happen. So, returning to that topic of interdisciplinarity, I can only point to these published patents because of the amazing work of my colleagues. 

One anecdote stands out for me now, as I think back about my experience at HP Labs in particular. I was meeting with one of my teammates, an amazing colleague named Ian Robinson, and we were having our weekly one-on-one meeting. We were talking about tracking digital pen devices in Virtual Reality (VR) spaces. At one point we began riffing on the idea of a “low-cost” VR controller, and then we had a realization: rather than putting a lot of expensive technology inside a single pen, what if you designed a pair of objects that relied on a different VR tracking method? We could conceivably eliminate the need for some of the guts of the single object if we had two objects moving in virtual space. We stopped our meeting and walked over to our desks, hoping to catch some of our teammates. We described the essential concept to a few of our peers and that was the genesis of the “VR Grabbers” idea. Jackie Yang was a Stanford grad student who was working as an intern in our lab at the time, and he did an incredible amount of work on the project from that point on. His effort culminated in our UIST 2018 paper on which Jackie was the first author!

How do you work across disciplines?

Continuing that “VR Grabbers” story, I was lucky enough to have a stimulating conversation with a really smart person in a place that enabled us to pursue the idea. Ian and I came from different professional backgrounds. We happened to find ourselves working together and, on that project, we made the most of our different skills. My role after that initial conversation was to evangelize the project inside the organization rather than develop the prototype, for example. So, while it was great to help a team come together around an idea, my involvement in the project was quite different from what it would have been earlier in my career.

I said a bit about collaboration earlier, but I’d like to go a bit deeper on this topic. In my dissertation I spent a lot of time in the literature review section exploring the different types of collaboration. I am a big believer in “contested collaboration,” which occurs when people on a team come from different backgrounds and bring their specific perspectives and experiences to bear on a project. It is certainly more challenging to lead a team that engages in contested collaboration: it would be a lot easier if everyone agreed all the time! I’m not saying anything new here, of course.

Could you name a grand research challenge in your current field of work?

I recently saw the 2021 AI Index Report from Stanford (https://aiindex.stanford.edu/report/) and I thought each topic raised in the summary of that report could represent a “grand research challenge.” On the topic of “generative everything”, I am particularly curious about the future of ideas. In 2019 I delivered one of the keynote presentations at the IEEE Games, Entertainment, and Media (IEEE GEM) conference at Yale University in New Haven, Connecticut. In part of my presentation, I raised the question about attribution of ideas and intellectual property when we “partner” with AI. I can imagine a future where it seems less clear “who” came up with an idea: the person or the AI agent? Thinking about the “VR Grabbers” story I told earlier, I wonder how that same story will play out 20 years from now. In my capacity as an affiliate assistant professor at the University of Washington, I’m excited to continue thinking about this topic!  

How and in what form do you feel we as academics can be most impactful?

I think academics need to keep doing what they’re doing. Perhaps that’s a trite answer, but as a society we need to preserve and protect the ability of academics to do their work, to ask very basic questions and be surprised by what they find. I’m not just talking about the need for basic R&D so we can find the next penicillin. I’m also talking about how companies incentivize the effort to identify and use academic work.

I also think others know a lot more about this topic, though! I’d suggest reviewing the 2017 DIS paper, Translational Resources: Reducing the Gap Between Academic Research and HCI Practice, as a useful starting point. Lucas Colusso recently completed his PhD in Human Centered Design & Engineering at the University of Washington, and he was the first author on that paper. Thanks to Professor Gary Hsieh in that department, I became aware of Lucas’ work and now I reference it with my team members when we talk about how to pursue research topics that will have lasting impact. I believe academics are the experts at generating knowledge, and in industry we can apply similar approaches on our projects. 


Bios

Alex Thayer, PhD is the Head of Research for Amazon (Search) in Palo Alto. He completed his PhD in Human Centered Design & Engineering at the University of Washington, where he is currently an Affiliate Assistant Professor. Prior to joining Amazon, Alex was the Chief Experience Architect for HP Labs. He has also worked at VMware, Intel, Microsoft, YouTube, and a voice recognition startup that was partly funded by James Doohan (Scotty from Star Trek). Alex’s professional work focuses on explorations of the social-technical gap and how we make sense of people’s habits, practices, and messy lives. His academic work spans topics from AR/VR to professional collaboration to digital gaming. He has published 12 patents on medical testing, haptic feedback systems, 3D and 4D printing, immersive displays, and wearable technology. He also co-leads his daughter’s Girl Scout troop.

Editor Biographies

Dr. Cynthia C. S. Liem is an Assistant Professor in the Multimedia Computing Group of Delft University of Technology, The Netherlands, and pianist of the Magma Duo. She initiated and co-coordinated the European research project PHENICX (2013-2016), focusing on technological enrichment of symphonic concert recordings with partners such as the Royal Concertgebouw Orchestra. Her research interests consider music and multimedia search and recommendation, and increasingly shift towards making people discover new interests and content which would not trivially be retrieved. Beyond her academic activities, Cynthia gained industrial experience at Bell Labs Netherlands, Philips Research and Google. She was a recipient of the Lucent Global Science and Google Anita Borg Europe Memorial scholarships, the Google European Doctoral Fellowship 2010 in Multimedia, and a finalist of the New Scientist Science Talent Award 2016 for young scientists committed to public outreach.

Dr. Jochen Huber is Professor of Computer Science at Furtwangen University, Germany. Previously, he was a Senior User Experience Researcher with Synaptics and an SUTD-MIT postdoctoral fellow in the Fluid Interfaces Group at MIT Media Lab and the Augmented Human Lab at Singapore University of Technology and Design. He holds a Ph.D. in Computer Science and degrees in both Mathematics (Dipl.-Math.) and Computer Science (Dipl.-Inform.), all from Technische Universität Darmstadt, Germany. Jochen’s work is situated at the intersection of Human-Computer Interaction and Human Augmentation. He designs, implements and studies novel input technology in the areas of mobile, tangible & non-visual interaction, automotive UX and assistive augmentation. He has co-authored over 60 academic publications and regularly serves as program committee member in premier HCI and multimedia conferences. He was program co-chair of ACM TVX 2016 and Augmented Human 2015 and chaired tracks of ACM Multimedia, ACM Creativity and Cognition and the ACM International Conference on Interactive Surfaces and Spaces, as well as numerous workshops at ACM CHI and IUI. Further information can be found on his personal homepage: http://jochenhuber.com