JPEG Column: 84th JPEG Meeting in Brussels, Belgium

The 84th JPEG meeting was held in Brussels, Belgium.

This meeting was characterised by significant progress in most JPEG projects as well as in its exploratory studies. JPEG XL, the new image coding system, has issued its Committee Draft, giving shape to this effective new solution for the future of image coding. Parts 1 (Framework) and 2 (Light field coding) of JPEG Pleno, the standard for new imaging technologies, have also reached Draft International Standard status.

Moreover, exploration studies are ongoing in the domains of media blockchain and of learning-based solutions for image coding (JPEG AI). Both have triggered a number of activities, providing new knowledge and opening new possibilities for the use of these technologies in future JPEG standards.

The 84th JPEG meeting had the following highlights:

  • JPEG XL issues the Committee Draft
  • JPEG Pleno Parts 1 and 2 reach Draft International Standard status
  • JPEG AI defines Common Test Conditions
  • JPEG exploration studies on Media Blockchain
  • JPEG Systems – JLINK working draft
  • JPEG XS

In the following, a short description of the most significant activities is presented.

 

JPEG XL

The JPEG XL Image Coding System (ISO/IEC 18181) has completed the Committee Draft of the standard. The new coding technique allows storage of high-quality images at about one third of the size of the legacy JPEG format. Moreover, JPEG XL can losslessly transcode existing JPEG images to about 80% of their original size, simplifying interoperability and accelerating wider deployment.

The JPEG XL reference software, ready for mobile and desktop deployments, will be available in Q4 2019. The current contributors have committed to releasing it publicly under a royalty-free and open source license.

 

JPEG Pleno

A significant milestone has been reached during this meeting: the Draft International Standards (DIS) for both JPEG Pleno Part 1 (Framework) and Part 2 (Light field coding) have been completed. A draft architecture of the Reference Software (Part 4) and development plans have also been discussed and defined.

In addition, JPEG has completed an in-depth analysis of existing point cloud coding solutions and a new version of the use-cases and requirements document has been released reflecting the future role of JPEG Pleno in point cloud compression. A new set of Common Test Conditions has been released as a guideline for the testing and evaluation of point cloud coding solutions with both a best practice subjective testing protocol and a set of objective metrics.

JPEG Pleno holography activities made significant advances on the definition of use cases and requirements and on the description of Common Test Conditions. New quality assessment methodologies for holographic data, defined in the framework of a collaboration between JPEG and Qualinet, were established. Moreover, JPEG Pleno continues collecting microscopic and tomographic holographic data.

 

JPEG AI

The JPEG Committee continues to carry out exploration studies with deep learning-based image compression solutions, typically built on an auto-encoder architecture. The promise that these types of codecs hold, especially in terms of coding efficiency, will be evaluated in several studies. At this meeting, a Common Test Conditions document was produced, which includes a plan for subjective and objective quality assessment experiments as well as coding pipelines for anchor and learning-based codecs. Moreover, a JPEG AI dataset was proposed and discussed, and a double stimulus impairment scale experiment (side-by-side) was performed with a mix of experts and non-experts in a controlled environment.
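
The objective part of such an evaluation typically compares decoded images from the anchor and from the learning-based codec against the originals at comparable rate points. The sketch below is a minimal illustration of that idea, not the committee’s actual pipeline: the metric (PSNR), the codec labels, and the toy data are all assumptions.

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio between two images of identical shape."""
    mse = np.mean((np.asarray(reference, dtype=np.float64) -
                   np.asarray(distorted, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

def compare_codecs(original, decoded_by_codec):
    """Score each codec's reconstruction of the same source image.

    `decoded_by_codec` maps a codec label (e.g. 'anchor' or 'learned-ae',
    both hypothetical names) to its decoded image at a comparable bitrate.
    """
    return {name: psnr(original, rec) for name, rec in decoded_by_codec.items()}

# Toy usage with random data standing in for real test images.
rng = np.random.default_rng(0)
src = rng.integers(0, 256, size=(64, 64, 3))
recs = {"anchor": src + rng.normal(0.0, 4.0, src.shape),
        "learned-ae": src + rng.normal(0.0, 3.0, src.shape)}
print(compare_codecs(src, recs))
```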

 

JPEG exploration on Media Blockchain

Fake news, copyright violation, media forensics, privacy and security are emerging challenges in digital media. JPEG has determined that blockchain and distributed ledger technologies (DLT) have great potential as a technology component to address these challenges in transparent and trustable media transactions. However, blockchain and DLT need to be integrated closely with a widely adopted standard to ensure broad interoperability of protected images. JPEG calls for industry participation to help define use cases and requirements that will drive the standardization process. In order to clearly identify the impact of blockchain and distributed ledger technologies on JPEG standards, the committee has organised several workshops to interact with stakeholders in the domain.

The 4th public workshop on media blockchain was organized in Brussels on Tuesday the 16th of July 2019 during the 84th ISO/IEC JTC 1/SC 29/WG1 (JPEG) Meeting. The presentations and program of the workshop are available on jpeg.org.

The JPEG Committee has issued an updated version of the white paper entitled “Towards a Standardized Framework for Media Blockchain” that elaborates on the initiative, exploring relevant standardization activities, industrial needs and use cases.

To keep informed and to get involved in this activity, interested parties are invited to subscribe to the ad hoc group’s mailing list.

 

JPEG Systems – JLINK

At the 84th meeting, IS text reviews for ISO/IEC 19566-5 (JUMBF) and ISO/IEC 19566-6 (JPEG 360) were completed; IS publication will be forthcoming. Work began on adding functionality to JUMBF, Privacy & Security, and JPEG 360, and initial planning started towards software implementations of these parts of the JPEG Systems specification. Work also began on the new ISO/IEC 19566-7 Linked media images (JLINK) with the development of a working draft.

 

JPEG XS

The JPEG Committee is pleased to announce new Core Experiments and Exploration Studies on the compression of raw image sensor data. The JPEG XS project aims at the standardization of a visually lossless, low-latency and lightweight compression scheme that can be used as a mezzanine codec in various markets. Video transport over professional video links (SDI, IP, Ethernet), real-time video storage in and outside of cameras, memory buffers, machine vision systems, and data compression onboard autonomous vehicles are among the targeted use cases for raw image sensor compression. This new work on raw sensor data will pave the way towards highly efficient, close-to-sensor image compression workflows with JPEG XS.

 

Final Quote

“Completion of the Committee Draft of JPEG XL, the new standard for image coding, is an important milestone. It is hoped that JPEG XL can become an excellent replacement for the widely used JPEG format, which has been in service for more than 25 years,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JPEG, JPEG 2000, JPEG XR, JPSearch, JPEG XT and more recently, the JPEG XS, JPEG Systems, JPEG Pleno and JPEG XL families of imaging standards.

More information about JPEG and its work is available at www.jpeg.org.

Future JPEG meetings are planned as follows:

  • No 85, San Jose, California, U.S.A., November 2 to 8, 2019
  • No 86, Sydney, Australia, January 18 to 24, 2020

MPEG Column: 127th MPEG Meeting in Gothenburg, Sweden

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

Plenary of the 127th MPEG Meeting in Gothenburg, Sweden.

The 127th MPEG meeting concluded on July 12, 2019 in Gothenburg, Sweden with the following topics:

  • Versatile Video Coding (VVC) enters formal approval stage, experts predict 35-60% improvement over HEVC
  • Essential Video Coding (EVC) promoted to Committee Draft
  • Common Media Application Format (CMAF) 2nd edition promoted to Final Draft International Standard
  • Dynamic Adaptive Streaming over HTTP (DASH) 4th edition promoted to Final Draft International Standard
  • Carriage of Point Cloud Data Progresses to Committee Draft
  • JPEG XS carriage in MPEG-2 TS promoted to Final Draft Amendment of ISO/IEC 13818-1 7th edition
  • Genomic information representation – WG11 issues a joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5
  • ISO/IEC 23005 (MPEG-V) 4th Edition – WG11 promotes the Fourth edition of two parts of “Media Context and Control” to the Final Draft International Standard (FDIS) stage

The corresponding press release of the 127th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/127

Versatile Video Coding (VVC)

The Moving Picture Experts Group (MPEG) is pleased to announce that Versatile Video Coding (VVC) progresses to Committee Draft, experts predict 35-60% improvement over HEVC.

The development of the next major generation of video coding standard has achieved excellent progress, such that MPEG has approved the Committee Draft (CD, i.e., the text for formal balloting in the ISO/IEC approval process).

The new VVC standard will be applicable to a very broad range of applications and will also provide additional functionalities. VVC will deliver a substantial improvement in coding efficiency relative to existing standards – expected to be in the range of a 35–60% bit rate reduction relative to HEVC, although it has not yet been formally measured. The comparison with HEVC refers to equivalent subjective video quality at picture resolutions such as 1080p HD, 4K, or 8K UHD, for both standard dynamic range video and high dynamic range and wide colour gamut content, at levels of quality appropriate for consumer distribution services. The focus during the development of the standard has primarily been on 10-bit 4:2:0 content, while the 4:4:4 chroma format will also be supported.
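
Bit-rate savings of this kind are normally summarised with the Bjøntegaard Delta rate (BD-rate), which integrates the gap between two rate–distortion curves. The sketch below shows the classic cubic-fit variant as an illustration only; it is not the measurement procedure JVET uses, and the sample rate/PSNR points are invented.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bit-rate difference (%) of a test codec vs. an anchor over
    the overlapping quality range (classic cubic polynomial fit)."""
    log_ra, log_rt = np.log(rate_anchor), np.log(rate_test)
    # Fit log-rate as a cubic polynomial of PSNR for each codec.
    poly_a = np.polyfit(psnr_anchor, log_ra, 3)
    poly_t = np.polyfit(psnr_test, log_rt, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a, int_t = np.polyint(poly_a), np.polyint(poly_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0

# Hypothetical rate (kbps) / PSNR (dB) points for an anchor (e.g. HEVC)
# and a test codec (e.g. VVC); a negative result means bit-rate savings.
anchor_rate, anchor_psnr = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
test_rate, test_psnr = [600, 1200, 2400, 4800], [34.2, 36.8, 39.3, 41.8]
print(f"BD-rate: {bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr):.1f} %")
```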

The VVC standard is being developed in the Joint Video Experts Team (JVET), a group established jointly by MPEG and the Video Coding Experts Group (VCEG) of ITU-T Study Group 16. In addition to a text specification, the project also includes the development of reference software, a conformance testing suite, and a new standard ISO/IEC 23002-7 specifying supplemental enhancement information messages for coded video bitstreams. The approval process for ISO/IEC 23002-7 has also begun, with the issuance of a CD consideration ballot.

Research aspects: VVC represents the next-generation video codec to be deployed in 2020 and beyond, and essentially the same research aspects apply as for previous generations, i.e., coding efficiency, performance/complexity, and objective/subjective evaluation. Luckily, JVET documents are freely available, including the actual standard (committee draft), software (and its description), and common test conditions. Thus, researchers utilizing these resources are able to conduct reproducible research when contributing their findings and code improvements back to the community at large.

Essential Video Coding (EVC)

MPEG-5 Essential Video Coding (EVC) promoted to Committee Draft

Interestingly, at the same meeting as VVC, MPEG promoted MPEG-5 Essential Video Coding (EVC) to Committee Draft (CD). The goal of MPEG-5 EVC is to provide a standardized video coding solution to address business needs in some use cases, such as video streaming, where existing ISO video coding standards have not been as widely adopted as might be expected from their purely technical characteristics.

The MPEG-5 EVC standard includes a baseline profile that contains only technologies that are over 20 years old or are otherwise expected to be royalty-free. Additionally, a main profile adds a small number of additional tools, each providing a significant performance gain. All main profile tools can be individually switched off or individually switched over to a corresponding baseline tool. Organizations making proposals for the main profile have agreed to publish applicable licensing terms within two years of the FDIS stage, either individually or as part of a patent pool.

Research aspects: Similar research aspects can be described for EVC, and from a software engineering perspective it could also be interesting to further investigate this switching mechanism for individual tools and/or the fall-back option to baseline tools. Naturally, a comparison with next-generation codecs such as VVC is interesting per se. The licensing aspects themselves are probably interesting for other disciplines, but that is another story…

Common Media Application Format (CMAF)

MPEG ratified the 2nd edition of the Common Media Application Format (CMAF)

The Common Media Application Format (CMAF) enables efficient encoding, storage, and delivery of digital media content (incl. audio, video, subtitles among others), which is key to scaling operations to support the rapid growth of video streaming over the internet. The CMAF standard is the result of widespread industry adoption of an application of MPEG technologies for adaptive video streaming over the Internet, and widespread industry participation in the MPEG process to standardize best practices within CMAF.

The 2nd edition of CMAF adds support for a number of specifications that were a result of significant industry interest. Those include

  • Advanced Audio Coding (AAC) multi-channel;
  • MPEG-H 3D Audio;
  • MPEG-D Unified Speech and Audio Coding (USAC);
  • Scalable High Efficiency Video Coding (SHVC);
  • IMSC 1.1 (Timed Text Markup Language Profiles for Internet Media Subtitles and Captions); and
  • additional HEVC video CMAF profiles and brands.

This edition also introduces CMAF supplemental data handling as well as new structural brands for CMAF, reflecting the common practice established by the significant deployment of CMAF in industry. Companies adopting CMAF technology will find the specifications introduced in the 2nd edition particularly useful for further adoption and proliferation of CMAF in the market.

Research aspects: see below (DASH).

Dynamic Adaptive Streaming over HTTP (DASH)

MPEG approves the 4th edition of Dynamic Adaptive Streaming over HTTP (DASH)

The 4th edition of MPEG-DASH comprises the following features:

  • a service description that conveys the service provider’s intent on how the service is expected to be consumed;
  • a method to indicate the times corresponding to the production of associated media;
  • a mechanism to signal DASH profiles and features, employed codec and format profiles; and
  • supported protection schemes present in the Media Presentation Description (MPD).

It is expected that this edition will be published later this year. 
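
To make the new service description feature more tangible, the snippet below assembles a toy MPD fragment carrying a latency target. The element and attribute names (ServiceDescription, Latency, PlaybackRate) follow how the low-latency service description is commonly presented, but the fragment is an illustrative sketch and not a copy of the normative ISO/IEC 23009-1 schema.

```python
import xml.etree.ElementTree as ET

# Toy MPD skeleton; element names, attributes and values are illustrative only.
mpd = ET.Element("MPD", {
    "xmlns": "urn:mpeg:dash:schema:mpd:2011",
    "type": "dynamic",
    "minimumUpdatePeriod": "PT2S",
})
service = ET.SubElement(mpd, "ServiceDescription", {"id": "0"})
# Target/min/max latency in milliseconds, as intended by the service provider.
ET.SubElement(service, "Latency", {"target": "3500", "min": "2000", "max": "8000"})
ET.SubElement(service, "PlaybackRate", {"min": "0.96", "max": "1.04"})
ET.SubElement(mpd, "Period", {"id": "1", "start": "PT0S"})

print(ET.tostring(mpd, encoding="unicode"))
```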

Research aspects: The CMAF 2nd and DASH 4th editions come along with a rich feature set enabling a plethora of use cases. The underlying principles are still the same, and research issues arise from updated application and service requirements with respect to content complexity, time aspects (mainly delay/latency), and quality of experience (QoE). The DASH-IF presents its Excellence in DASH Award at the ACM Multimedia Systems conference, and an overview of its academic efforts can be found here.

Carriage of Point Cloud Data

MPEG progresses the Carriage of Point Cloud Data to Committee Draft

At its 127th meeting, MPEG promoted the carriage of point cloud data to the Committee Draft stage, the first milestone of the ISO standard development process. This standard is the first to introduce support for volumetric media in the industry-famous ISO base media file format family of standards.

This standard supports the carriage of point cloud data comprising individually encoded video bitstreams within multiple file format tracks in order to support the intrinsic nature of the video-based point cloud compression (V-PCC). Additionally, it also allows the carriage of point cloud data in one file format track for applications requiring multiplexed content (i.e., the video bitstream of multiple components is interleaved into one bitstream).

This standard is expected to support efficient access and delivery of portions of a point cloud object, considering that in many cases the entire point cloud object may not be visible to the user, depending on the viewing direction or the location of the point cloud object relative to other objects. It is currently expected that the standard will reach its final milestone by the end of 2020.

Research aspects: MPEG’s Point Cloud Compression (PCC) comes in two flavors, video-based and geometry-based, but both still require packaging into file and delivery formats. MPEG’s choice here is the ISO base media file format, and the efficient carriage of point cloud data is characterized by both functionality (i.e., enabling the required use cases) and performance (such as low overhead).

MPEG-2 Systems/Transport Stream

JPEG XS carriage in MPEG-2 TS promoted to Final Draft Amendment of ISO/IEC 13818-1 7th edition

At its 127th meeting, WG11 (MPEG) has extended ISO/IEC 13818-1 (MPEG-2 Systems) – in collaboration with WG1 (JPEG) – to support ISO/IEC 21122 (JPEG XS) in order to support industries using still image compression technologies for broadcasting infrastructures. The specification defines a JPEG XS elementary stream header and specifies how the JPEG XS video access unit (specified in ISO/IEC 21122-1) is put into a Packetized Elementary Stream (PES). Additionally, the specification also defines how the System Target Decoder (STD) model can be extended to support JPEG XS video elementary streams.

Genomic information representation

WG11 issues a joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5

The introduction of high-throughput DNA sequencing has led to the generation of large quantities of genomic sequencing data that have to be stored, transferred and analyzed. So far, WG 11 (MPEG) and ISO TC 276/WG 5 have addressed the representation, compression and transport of genome sequencing data by developing the ISO/IEC 23092 standard series, also known as MPEG-G. It provides a file and transport format, compression technology, metadata specifications, protection support, and standard APIs for the access of sequencing data in the native compressed format.

An important element in the effective usage of sequencing data is the association of the data with the results of the analysis and the annotations generated by processing pipelines and analysts. At the moment such association happens as a separate step; standard and effective ways of linking data and meta-information derived from sequencing data are not available.

At its 127th meeting, MPEG and ISO TC 276/WG 5 issued a joint Call for Proposals (CfP) addressing this problem. The call seeks submissions of technologies that can provide efficient representation and compression solutions for the processing of genomic annotation data.

Companies and organizations are invited to submit proposals in response to this call. Responses are expected to be submitted by the 8th January 2020 and will be evaluated during the 129th WG 11 (MPEG) meeting. Detailed information, including how to respond to the call for proposals, the requirements that have to be considered, and the test data to be used, is reported in the documents N18648, N18647, and N18649 available at the 127th meeting website (http://mpeg.chiariglione.org/meetings/127). For any further question about the call, test conditions, required software or test sequences please contact: Joern Ostermann, MPEG Requirements Group Chair (ostermann@tnt.uni-hannover.de) or Martin Golebiewski, Convenor ISO TC 276/WG 5 (martin.golebiewski@h-its.org).

ISO/IEC 23005 (MPEG-V) 4th Edition

WG11 promotes the Fourth edition of two parts of “Media Context and Control” to the Final Draft International Standard (FDIS) stage

At its 127th meeting, WG11 (MPEG) promoted the 4th edition of two parts of ISO/IEC 23005 (MPEG-V; Media Context and Control) standards to the Final Draft International Standard (FDIS). The new edition of ISO/IEC 23005-1 (architecture) enables ten new use cases, which can be grouped into four categories: 3D printing, olfactory information in virtual worlds, virtual panoramic vision in car, and adaptive sound handling. The new edition of ISO/IEC 23005-7 (conformance and reference software) is updated to reflect the changes made by the introduction of new tools defined in other parts of ISO/IEC 23005. More information on MPEG-V and its parts 1-7 can be found at https://mpeg.chiariglione.org/standards/mpeg-v.


Finally, the unofficial highlight of the 127th MPEG meeting was certainly found while scanning the scene in Gothenburg on Tuesday night…

(Photo: Metallica in Gothenburg)

Qualinet Databases: Central Resource for QoE Research – History, Current Status, and Plans

Introduction

Datasets are an enabling tool for successful technological development and innovation in numerous fields. Large-scale databases of multimedia content play a crucial role in the development and performance evaluation of multimedia technologies. Most important among these is audiovisual signal processing, for example coding, transmission, subjective/objective quality assessment, and QoE (Quality of Experience) [1]. Publicly available and widely accepted datasets are necessary for a fair comparison and validation of systems under test; they are crucial for reproducible research. In the public domain, large amounts of relevant multimedia content are available, for example, the ACM SIGMM Records Dataset Column (http://sigmm.hosting.acm.org/category/datasets-column/), the MediaEval Benchmark (http://www.multimediaeval.org/), MMSys Datasets (http://www.sigmm.org/archive/MMsys/mmsys14/index.php/mmsys-datasets.html), etc. However, the descriptions of these datasets are usually scattered – for example in technical reports, research papers, and online resources – and it is a cumbersome task to find the most appropriate dataset for particular needs.

The Qualinet Multimedia Databases Online platform is one of many efforts to provide an overview and comparison of multimedia content datasets – especially for QoE-related research – all in one place. The platform was introduced in the frame of ICT COST Action IC1003, the European Network on Quality of Experience in Multimedia Systems and Services – Qualinet (http://www.qualinet.eu). The platform, abbreviated “Qualinet Databases” (http://dbq.multimediatech.cz/), is used to share information on databases with the community [3], [4]. Qualinet was supported as a COST Action between November 8, 2010, and November 7, 2014, and has continued as an independent entity with a new structure, activities, and management since 2015. The Qualinet Databases platform fulfills the initial goal of providing a rich and internationally recognized database and has been running since 2010. It is widely considered one of Qualinet’s most notable achievements.

The following paragraphs give a summary of Qualinet Databases, including its history, current status, and plans.

Background

A commonly recognized database for multimedia content is a crucial resource required not only for QoE-related research. Among the first published efforts in this field are the image and video quality resources website by Stefan Winkler (https://stefan.winklerbros.net/resources.html) and related publications providing in-depth analysis of multimedia content databases [2]. Since 2010, one of the main interests of Qualinet and its Working Group 4 (WG4) entitled Databases and Validation (Leader: Christian Timmerer, Deputy Leaders: Karel Fliegel, Shelley Buchinger, Marcus Barkowsky) was to create an even broader database with extended functionality and take the necessary steps to make it accessible to all researchers.

Qualinet first decided to list and summarize available multimedia databases based on a literature search and feedback from the project members. As the number of databases in the list rapidly increased, handling the necessary updates became inefficient. Based on these findings, WG4 started the implementation of the Qualinet Databases online platform in 2011. Since then, the website has been used as Qualinet’s central resource for sharing datasets among Qualinet members and the scientific community. To the best of our knowledge, there is no other publicly available resource for QoE research that offers similar functionality. The Qualinet Databases platform is intended to provide more features than other known similar solutions, such as the Consumer Digital Video Library (http://www.cdvl.org). The main difference lies in the fact that Qualinet Databases acts as a hub to various scattered resources of multimedia content, together with the associated data, such as MOS (Mean Opinion Score), raw data from subjective experiments, eye-tracking data, and detailed descriptions of the datasets including scientific references.

In the development of Qualinet DBs within the frame of COST Action IC1003, there are several milestones, which are listed in the timeline below:

  • March 2011 (1st Qualinet General Assembly (GA), Lisbon, Portugal), an initial list of multimedia databases collected and published internally for Qualinet members, creation of Web-based portal proposed,
  • September 2011 (2nd Qualinet GA, Brussels, Belgium), Qualinet DBs prototype portal introduced, development of publicly available resource initiated,
  • February 2012 (3rd Qualinet GA, Prague, Czech Republic), hosting of the Qualinet DBs platform under development at the Czech Technical University in Prague (http://dbq.multimediatech.cz/), Qualinet DBs Wiki page (http://dbq-wiki.multimediatech.cz/) introduced,
  • October 2012 (4th Qualinet GA, Zagreb, Croatia), White paper on Qualinet DBs published [3], Qualinet DBs v1.0 online platform released to the public,
  • March 2013 (5th Qualinet GA, Novi Sad, Serbia), Qualinet DBs v1.5 online platform published with extended functionality,
  • September 2013 (6th Qualinet GA, Novi Sad, Serbia), Qualinet DBs Information leaflet published, Task Force (TF) on Standardization and Dissemination established, QoMEX 2013 Dataset Track organized,
  • March 2014 (7th Qualinet GA, Berlin, Germany), ACM MMSys 2014 Dataset Track organized, liaison with Ecma International (https://www.ecma-international.org/) on possible standardization of Qualinet DBs subset established,
  • October 2014 (8th Final Qualinet GA and Workshop, Delft, The Netherlands), final development stage v3.00 of Qualinet DBs platform reached, code freeze.

Qualinet Databases became Qualinet’s primary resource for sharing datasets publicly with Qualinet members and, after registration, with the broad scientific community. At the final Qualinet General Assembly under the COST Action IC1003 umbrella (October 2014, Delft, The Netherlands) it was concluded – also based on numerous testimonials – that Qualinet DBs is one of the major assets created throughout the project. Thus it was decided that the sustainability of this resource must be ensured for the years to come. Since 2015 the Qualinet DBs platform has been kept running through the effort of a newly established Task Force, TF4 Qualinet Databases (Leader: Karel Fliegel, Deputy Leaders: Lukáš Krasula, Werner Robitza). The status and achievements are discussed regularly at Qualinet’s Annual Meetings collocated with QoMEX (International Conference on Quality of Multimedia Experience), i.e., the 7th QoMEX 2015 (Costa Navarino, Greece), 8th QoMEX 2016 (Lisbon, Portugal), 9th QoMEX 2017 (Erfurt, Germany), 10th QoMEX 2018 (Sardinia, Italy), and 11th QoMEX 2019 (Berlin, Germany).

Current Status

The basic functionality of the Qualinet Databases online platform, see Figure 1, is based on the idea that registered users (Qualinet members and other interested users from the scientific community) have access through an easy-to-use Web portal providing a list of multimedia databases. Based on their user rights, they are allowed to browse information about the particular database and eventually download the actual multimedia content from the link provided by the database owner.

Figure 1. Qualinet Databases online platform and its current interface.

Selected users – Database Owners in particular – have rights to upload or edit their records in the list of databases. Most of the multimedia databases have a flag of “Publicly Available” and are accessible to the registered users outside Qualinet. Only Administrators (Task Force leader and deputy leaders) have the right to delete records in the database. Qualinet DBs does not contain the actual multimedia content but only the access information with provided links to the dataset files saved at the server of the Database Owner.

The Qualinet DBs is accessible to all registered users after entering valid login data. Depending on the level of rights assigned to the particular account, the user can browse the list of the databases with descriptions (all registered users) and has access to the actual multimedia content via a link entered by the Database Owner. This provides the user with a powerful tool to find the multimedia database that best suits their needs.

In the list of databases, users can select the visible fields in the User Settings, namely the following (a simple data-structure sketch of such a record is shown after the list):

  • Database name, Institution, Qualinet Partner (Yes/No),
  • Link, Description (abstract), Access limitations, Publicly available (Yes/No), Copyright Agreement signed (Yes/No),
  • Citation, References, Copyright notice, Database usage tracking,
  • Content type, MOS (Yes/No), Other (Eye tracking, Sensory, …),
  • Total number of contents, SRC, HRC,
  • Subjective evaluation method (DSCQS, …), Number of ratings.
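
As a rough illustration of what one record holds, the sketch below models the visible fields above as a small data structure; the field names mirror the list, while the types and the example values are assumptions made for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class DatabaseRecord:
    """One entry in the Qualinet DBs list; types are assumed for illustration."""
    database_name: str
    institution: str
    qualinet_partner: bool
    link: str
    description: str
    access_limitations: str
    publicly_available: bool
    copyright_agreement_signed: bool
    citation: str
    references: list = field(default_factory=list)
    copyright_notice: str = ""
    usage_tracking: str = ""
    content_type: str = ""
    has_mos: bool = False
    other_data: list = field(default_factory=list)  # e.g. eye tracking, sensory
    total_contents: int = 0
    num_src: int = 0                                # source sequences (SRC)
    num_hrc: int = 0                                # hypothetical reference circuits (HRC)
    subjective_method: str = ""                     # e.g. DSCQS
    number_of_ratings: int = 0

# Hypothetical example record.
example = DatabaseRecord(
    database_name="Example Image Quality Dataset",
    institution="Example University",
    qualinet_partner=True,
    link="http://example.org/dataset",
    description="Images with MOS collected in a lab study.",
    access_limitations="none",
    publicly_available=True,
    copyright_agreement_signed=True,
    citation="Doe et al., 2014",
    has_mos=True,
    subjective_method="DSCQS",
    number_of_ratings=24,
)
print(example.database_name, example.subjective_method)
```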

Full-text search within the selected visible fields is available. In the current version of the Qualinet DBs, users can sort databases alphabetically based on the visible fields or use the search field as described above.

The list of databases allows:

  • Opening a card with details on a particular database record (accessible to all users),
  • Editing database record (accessible to the database owners and administrators),
  • Deleting database record (accessible only to administrators),
  • Requesting deletion of a database record (accessible to the database owners),
  • Requesting assignment as the database owner (accessible to all users).

As for the records available in Qualinet DBs, the listed multimedia databases are a crucial resource for various tasks in multimedia signal processing. The Qualinet DBs focuses primarily on content related to QoE research [1], where, while designing objective quality assessment algorithms, it is necessary to perform (1) verification of a model during development, (2) validation of a model after development, and (3) benchmarking of various models.

Annotated multimedia databases contain essential ground truth, that is, test material from the subjective experiment annotated with subjective ratings. Qualinet DBs also lists other material without subjective ratings for other kinds of experiments. Qualinet DBs covers mostly image and video datasets, including special contents (e.g., 3D, HDR) and data from subjective experiments, such as subjective quality ratings or visual attention data.

A timeline with statistics on the number of records and users registered in Qualinet DBs throughout the years can be seen in Figure 2. Throughout Qualinet COST Action IC1003, the number of registered datasets grew from 64 in March 2011 to 201 in October 2014. The number of datasets created by the Qualinet partner institutions grew from 30 in September 2011 to 83 in October 2014. The number of registered users increased from 37 in March 2013 to 222 in October 2014. After the end of COST Action IC1003 in November 2014, the number of datasets increased to 246 and the number of registered users to 491. The average yearly increase is approximately 56 registered users, which illustrates the continuous interest in, and value of, Qualinet DBs for the community.

Figure 2. Qualinet Databases statistics on the number of records and users.

Besides the Qualinet DBs online platform (http://dbq.multimediatech.cz/), additional resources are available for download via the Wiki page (http://dbq-wiki.multimediatech.cz) and the Qualinet website (http://www.qualinet.eu/). Two documents are available: (1) “QUALINET Multimedia Databases v6.5” (May 28, 2017) with a detailed description of registered datasets, and (2) “List of QUALINET Multimedia Databases v6.5”, a searchable spreadsheet with records as of May 28, 2017.

Plans

There are indicators – especially the number of registered users – showing that Qualinet DBs is a valuable resource for the community. However, the current platform as described above has not been updated since 2014, and there are several issues to be solved, such as the burden on one institution to host and maintain the system, possible instability, an obsolete interface, issues with the Wiki page, and the lack of a file repository. Moreover, the current system requires user registration. This is a very useful feature for usage tracking and for ensuring database privacy, but at the same time it can put some people off using and adding new datasets, and it requires the handling of personal data. There are also numerous obsolete links in Qualinet DBs; keeping them is useful for the record, but the respective records should be archived.

A proposal for a new platform for Qualinet DBs was presented at the 13th Qualinet General Meeting in June 2019 (Berlin, Germany) and was subsequently supported by the assembly. The new platform is planned to be based on a Git repository, so that the system will be open source and text-based, and no database back-end will be needed. A user-friendly interface is to be provided by a static website generator; the website itself will be hosted on GitHub. A similar approach has been successfully implemented for the VQEG Software & Tools (https://vqeg.github.io/software-tools/) web portal. Among the main advantages of the new platform are (1) easier access (i.e., fast performance with a simple interface, no hosting fees and thus long-term sustainability, no registration necessary and thus no entry barrier), (2) a lower maintenance burden (i.e., minimal technical maintenance effort needed, easy code editing), and (3) future-proofing (i.e., databases are just text files with easy format conversion, and hosting can be done on any server).
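
Because each dataset description in the new platform is just a text file in a Git repository, conversion and indexing reduce to simple scripting. The sketch below shows one possible way to collect such records from a local checkout and emit a Markdown index for a static site generator; the directory layout, the JSON record format, and the file names are assumptions rather than the agreed design.

```python
import json
from pathlib import Path

def load_records(repo_dir):
    """Read every dataset description stored as a JSON text file in the repo.

    The 'datasets/*.json' layout is hypothetical; the real platform may just as
    easily use another text format such as Markdown or YAML front matter.
    """
    records = []
    for path in sorted(Path(repo_dir).glob("datasets/*.json")):
        with path.open(encoding="utf-8") as handle:
            records.append(json.load(handle))
    return records

def build_index(records):
    """Produce a minimal Markdown index that a static site generator could render."""
    lines = ["# Qualinet Databases – index", ""]
    for rec in records:
        name = rec.get("database_name", "unnamed dataset")
        link = rec.get("link", "")
        lines.append(f"- [{name}]({link})")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_index(load_records(".")))  # "." = path to a local checkout
```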

On the other hand, the new platform will not support user registration and login, which is beneficial in order to prevent data privacy issues. Tracking of registered users will no longer be available, but database usage tracking is planned to be provided via, for example, Google Analytics. There are three levels of dataset availability in the current platform: (1) Publicly available dataset, (2) Information about dataset but data not available/available upon request, and (3) Not publicly available (e.g., Qualinet members only, not supported in the new platform). The migration of Qualinet DBs to the new platform is to be completed by mid-2020. Current data are to be checked and sanitized, and obsolete records moved to the archive.

Conclusions

Broad audiovisual content with diverse characteristics, annotated with data from subjective experiments, is an enabling resource for research in multimedia signal processing, especially where QoE is considered. The availability of training and testing data becomes even more important nowadays, with the ever-increasing use of machine learning approaches. Qualinet Databases helps to facilitate reproducible research in the field and has become a valuable resource for the community.

References

  • [1] Le Callet, P., Möller, S., Perkis, A. Qualinet White Paper on Definitions of Quality of Experience, European Network on Quality of Experience in Multimedia Systems and Services (COST Action IC 1003), Lausanne, Switzerland, Version 1.2, March 2013. (http://www.qualinet.eu/images/stories/QoE_whitepaper_v1.2.pdf)
  • [2] Winkler, S. Analysis of public image and video databases for quality assessment, IEEE Journal of Selected Topics in Signal Processing, 6(6):616-625, 2012. (https://doi.org/10.1109/JSTSP.2012.2215007)
  • [3] Fliegel, K., Timmerer, C. (eds.) WG4 Databases White Paper v1.5: QUALINET Multimedia Database enabling QoE Evaluations and Benchmarking, Prague/Klagenfurt, Czech Republic/Austria, Version 1.5, March 2013.
  • [4] Fliegel, K., Battisti, F., Carli, M., Gelautz, M., Krasula, L., Le Callet, P., Zlokolica, V. 3D Visual Content Datasets. In: Assunção P., Gotchev A. (eds) 3D Visual Content Creation, Coding and Delivery. Signals and Communication Technology, Springer, Cham, 2019. (https://doi.org/10.1007/978-3-319-77842-6_11)

Note: Readers interested in actively contributing to the continued success of Qualinet Databases are referred to Qualinet (http://www.qualinet.eu/) and invited to join its Task Force on Qualinet Databases via the email reflector. To subscribe, please send an email to dbq.wg4.qualinet-subscribe@listes.epfl.ch. This work was partially supported by project No. GA17-05840S “Multicriteria optimization of shift-variant imaging system models” of the Czech Science Foundation.

Report from MMSYS 2019 – by Alia Sheikh

Alia Sheikh (@alteralias) is researching immersive and interactive content. At present she is interested in the narrative language of immersive environments and how stories can best be choreographed within them.

Being part of an international academic research community and actually meeting said international research community are not exactly the same thing, it turns out. After attending ACM MMSys 2019, I have decided that leaving the office and actually meeting the people behind the research is very much worth doing.

This year I was invited to give an overview presentation at ACM MMSys ’19, which was being hosted at the University of Massachusetts. The MMSys, NOSSDAV and MMVE (International Workshop on Immersive Mixed and Virtual Environment Systems) conferences happen back to back, in a different location each year. I was asked to talk about some of our team’s experiments in immersive storytelling at MMVE. This included our current work on lightfields and my work on directing attention in, and the cinematography of, immersive environments.

To be honest it wasn’t the most convenient time to decide to catch a plane to New York and then a train to Boston for a multi-day conference, but it felt like the right time to take a break from the office and find out what the rest of the community had been working on.

Fig. 1: A picturesque scene from the wonderful University of Massachusetts Amherst campus

I arrived at Amherst the day before the conference and (along with another delegate who had taken the same bus) wandered the tranquil university grounds slightly lost before being rescued by the ever calm and cheerful Michael Zink. Michael is the chair of the MMSys organising committee and someone who later spent much of the conference introducing people with shared interests to each other – he appeared to know every delegate by name.

Once installed in my UMass hotel room, I proceeded to spend the evening on my usual pre-conference ritual: entirely rewriting my presentation.

As the timetable would have it, I was going to be the first speaker.

Fig. 2: Attendees at MMSys 2019 taking their seats

Fig. 3: Alia in full flow during our talk on day 1

I don’t actually know why I do this to myself, but there is something about turning up to the event proper that gives you a sense of what will work for that particular audience, and Michael had given me a brilliantly concise snapshot of the type of delegate that MMSys attracts – highly motivated, expert on the nuts and bolts of how to get data to where it needs to be and likely to be interested in a big picture overview of how these systems can be used to create a meaningful human connection.

Using selected examples from our research, I put together a talk on how the experience of stories in high tech immersive environments differs from more traditional formats, but, once the language of immersive cinematography is properly understood, we find that we are able to create new narrative experiences that are both meaningful and emotionally rich.

The next morning I walked into an auditorium full of strangers filing in, gave my talk (I thought it went well?) and then sank happily into a plush red flip-seat chair safe in the knowledge that I was free to enjoy the rest of the event.

The next item was the keynote and easily one of the best talks I have ever experienced at a conference. Presented by Professor Nimesha Ranasinghe, it was a masterclass in taking an interesting problem (how do we transmit a full sensory experience over a network?) and presenting it in such a way as to neatly break down and explain the science (we can electrically stimulate the tongue to recreate a taste!) while never losing sight of the inherent joy in working on the kind of science you dream of as a child (therefore, electrified cutlery!).

Fig. 4: Professor Nimesha Ranasinghe during his talk on Multisensory experiences

Fig. 5: Multisensory enhanced multimedia – experiences of the future?

Fig. 6: Networking and some delicious lunch

At lunch I discovered the benefit of having presented my talk early – I made a lot of friends with people who had specific questions about our work, and got a useful heads up on work they were presenting either in the afternoon’s long papers session or the poster session.

We all spent the evening at the welcome reception on the top floor of the UMass Hotel, where we ate a huge variety of tiny, delicious cakes and got to know each other better. It was obvious that in some cases researchers who might collaborate remotely all year were able to use MMSys as an excellent opportunity to catch up. As a newcomer to this ACM conference, however, I have to say that I found it a very welcoming event, and I met a lot of very friendly people, many of them working on research entirely different from my own but which seemed to offer an interesting insight or area of overlap.

I wasn’t surprised that I really enjoyed MMVE – virtual environments are very much my topic of interest right now. But I was delighted by how much of MMSys was entirely up my street. ACM MMSys provides a forum for researchers to present and share their latest research findings in multimedia systems, and the conference cuts across all media/data types to showcase the intersections and the interplay of approaches and solutions developed for different domains. This year, the work presented on how to best encode and transport mixed reality content, as well as on predicting head motion to better encode and deliver the part of a spherical panorama a viewer was likely to be looking at, was particularly interesting to me. I wondered whether comparing the predicted path of user attention to the desired path of user attention would teach us how to better control a user’s attention within a panoramic scene, or whether people’s viewing patterns were simply too variable. In the Open Datasets & Software track, I was fascinated by one particular dataset: “A Dataset of Eye Movements for the Children with Autism Spectrum Disorder”. This was a timely reminder for me that diversity within the audience needs to be catered for when designing multimedia systems, to avoid consigning sections of our audience to a substandard experience.

Of the demos, there were too many interesting ones to list, but I was hugely impressed by the demo of Multi-Sensor Capture and Network Processing for Virtual Reality Conferencing. This used cameras and Kinects to turn me into a point cloud and put a live 3D representation of my own physical body in a virtual space. A brilliantly simple and incredibly effective idea, and I found myself sitting next to the people responsible for it at a talk later that day, discussing ways to optimise their data compression.

Despite wearing a headset that allowed me to see the other participants, I was still able to see and therefore use my own hands in the real world – even extending to picking up and using my phone.

Fig. 7: Trying out some cool demos during a bustling demo session

Fig. 8: An example of the social media interaction from my “tweeting”

Amusingly, I found that I was (virtually) sat next to a point cloud of TNO researcher Omar Niamut, which led to my favourite Twitter exchange of the whole conference. I knew Omar from online, but we had never actually managed to meet in real life. Still, this was the most life-like digital incarnation yet!

I really should mention the Women’s and Diversity lunch event which (pleasingly) was attended by both men and women and offered some absolutely fascinating insights.

These included: the value of mentors over the course of a successful academic life, how a gender pay gap is inextricably related to work–family policies, and steps that have successfully been taken by some countries and organisations to improve work–life balance for all genders.

It was incredibly refreshing to see these topics being discussed both scientifically and openly. The conversations I had with people afterwards, as they opened up about their own experiences of work and parenthood, were among the most interesting I have ever had on the topic.

Another nice surprise – MMSys offers childcare grants available for conference attendees who are bringing small children to the conference and require on-site childcare or who incur extra expenses in leaving their children at home. It was very cheering to see that the Inclusion Policy did not stop at simply providing interesting talks, but also translated into specific inclusive action.

Fig. 9: Women’s and Diversity lunch! What a wonderful initiative – well done MMSys and SIGMM

I am delighted that I made the decision to attend MMSys. I had not realised that I was feeling somewhat detached from my peers and the academic research community in general, until I was put in an environment which contained a concentrated amount of interesting research, interesting researchers and an air of collaboration and sheer good will. It is easy to get tunnel vision when you are focused on your own little area of work, but every conversation I had at the conference reminded me that research does not happen in a vacuum.

Fig. 10: A fascinating talk at the Women’s and Diversity lunch – it initiated great post event discussions!

Fig. 11: The food truck experience – one of many wonderful social aspects to MMSys 2019

I could write a thousand more words about every interesting thing I saw or person I met at MMSys, but that would only give you my own specific experience of the conference. (I did live tweet* a lot of the talks and demos just for my own records and that can all be found here: https://twitter.com/Alteralias/status/1148546945859952640?s=20)

Fig. 12: Receiving the SIGMM Social Media Reporter Award for MMSys 2019!

Whether you were someone I was sitting next to at a paper session, a person I spoke to standing next to in line at the food truck (one of the many sociable meal events) or someone who demoed their PhD work to me, thank you so much for sharing this event with me.

Maybe I will see you at MMSys 2020.

* P.S. It turns out that if you live-tweet an entire conference, Niall gives you a Social Media Reporter award.

An Interview with Professor Susanne Boll

Describe your journey into research from your youth up to the present. What foundational lessons did you learn from this journey? 

My journey into research started with my interest in computers and computer science at school, while I was still quite young. I liked all the STEM subjects and was very good at them in school. I first got in touch with programming and the first Mac in high school, when my physics teacher started the first basic programming course. After high school, I continued on this journey and became a Mathematical-Technical Assistant1, continued studying CS, and went on to do a PhD, always driven by the desire to learn more and to explore and understand more of this field.

Why were you initially attracted to Multimedia? 

Susanne Boll at the beginning of her research career in 2001

I was initially attracted to multimedia when information systems started to look at novel methods of integrating large amounts of unstructured multimedia and different media types into structured database systems. I joined the GMD Institute for Integrated Publication and Information Systems, which was working on multimedia database systems. My PhD was on multimedia document models for representing and replaying multimedia presentations in the context of multimedia information systems. One of the most inspiring early events was a small but very nice IFIP working conference on Database Semantics – Semantic Issues in Multimedia Systems – in New Zealand in 1999, where I met many researchers from the multimedia community, some of whom I still consider my research friends today. I stayed in the field of multimedia, but as my work always related to the applications of multimedia and the interaction with the user, it was not surprising that I moved into the field of Human Computer Interaction and SIGCHI, in which I am still an active member today. Over the last three decades I have worked in the field of interactive multimedia and human computer interaction – in different application domains from personal media to health, from mobility to Industry 4.0. To cite a much valued friend of mine whom I just met again – “I enjoy when my research makes me smile” – when I can see how research can be translated into applications for better use.

Why did I volunteer for the role of the director for diversity and outreach? 

Professor Susanne Boll in 2019

For more than three decades now I have supported gender equality as a mentor, in different roles, in committees and institutions, by speaking up and by driving actions. Within the multimedia community I observed that there are many individuals supporting and acting for better gender equality; however, these remained individual efforts, and we as a community were not able to turn them into a collective understanding.

There were actually a few recent events related to SIGMM that made me truly sad and made me consider whether I should leave this community, which I at the same time consider my home community. Some years ago I observed a panel in which only men were discussing the future and challenges of multimedia. Observing this was painful for me. I knew and had met each of them individually over the years, and they were interesting researchers and great mentors. But that panel again made it obvious that we as a community had failed to be inclusive with regard to women. Why would an excellent woman not have her say on that panel? Why would the people organizing the panel not consider being inclusive with regard to gender? Why would the panelists, when invited, not ask who else would be on the panel and encourage this?

When I talk about gender equality these days, I almost immediately get the reaction that gender is not diversity. People say that looking at gender equality would be too short-sighted and that I should care more about diversity and not gender alone. So let me clearly say that I am well aware that diversity is not only gender; it is much more than that. But don’t let the perfect be the enemy of the good. My personal story starts with gender equality in STEM fields. Looking at women’s participation in SIGMM, I decided that the actions described in the “25 in 25” strategy would be a good starting point for my new role – it is just the beginning.

What are my plans serving in this position?

Within SIGMM, we need to understand and fully embrace the different dimensions of diversity. We should not use the term as an easy cover-all for a multitude of aspects in which individual needs get blurred. I sometimes have the feeling that one aspect of diversity could be traded for another, and that the term is used as if there were a measure of “sufficient” diversity in a given setting.

As the director for diversity and outreach I will care about the richness of diversity. I want to bring the different dimensions of diversity into the multimedia community and make us understand, embrace, listen, and take action for better diversity and outreach in SIGMM.


1Mathematical-Technical Assistant (MaTA, MA or MTA for short; also: mathematical-technical software developer) is the occupational title of a recognised training occupation according to the Vocational Training Act in Germany, which has existed since the mid-1960s. It is the first non-academic training occupation in data processing.


Bios

Prof. Susanne Boll: 

Susanne Boll is a full professor for Media Informatics and Multimedia Systems at the University of Oldenburg and a member of the board of the OFFIS Institute for Information Technology. OFFIS belongs to the top 5% of non-university computer science research institutes in Germany. Over the last two decades, she has consistently achieved highly competitive research results in the field of multimedia and human–computer interaction. She has actively been driving these fields of research through many scientific research projects and the organization of highly visible events in the field. Her scientific results have been published in competitive peer-reviewed international conferences such as ACM Multimedia, CHI, MobileHCI, AutomotiveUI, DIS, and IDC, as well as in internationally recognized journals. Her research makes competitive contributions to the fields of human–computer interaction and ubiquitous computing. Her research projects also have strong connections to industry and application partners and address highly relevant challenges in the application fields of automation in transportation systems as well as health care technologies. She is an active member of the scientific community and has co-chaired and organized many international events in her field. Her teaching combines theoretical foundations with team-oriented and research-oriented practical assignments. She currently leads a highly visible international team of researchers (PhD students, research associates, post docs, and senior principal scientists).


Opinion Column: Fairness, Accountability and Transparency (in Multimedia)

The inclusiveness and transparency of automatic information processing methods is a research topic that has attracted growing interest in recent years. In the era of digitized decision-making software, where the push for artificial intelligence happens worldwide and at different strata of the socio-economic fabric, the consequences of biased, unexplainable and opaque methods for content analysis can be dramatic.

Several initiatives have arisen to address these issues in different communities. From 2014 to 2018, the FAT/ML workshop was co-located with the International Conference on Machine Learning. This year, the FATE/CV workshop (the E standing for Ethics) was co-located with the Conference on Computer Vision and Pattern Recognition. Similarly, the FAT/MM workshop is co-located with ACM Multimedia 2019. These initiatives, and specifically the FAT/ML workshop series, converged into the ACM FAT* conference, which had its first edition in New York in 2018, its second this year in Atlanta, and will have its third edition next year in Barcelona.

ACM FAT* is a very recent interdisciplinary conference dedicated to bringing together a multidisciplinary community of researchers from computer science, law, social sciences, and humanities to investigate and tackle issues in this emerging area. The focus of the conference is not limited to technological solutions regarding potential bias, but also addresses the question of whether decisions should be outsourced to data- and code-driven computing systems. This question is very timely given the impressive number of algorithmic systems (adopted in a growing number of contexts) fueled by big data. These systems aim to filter, sort, score, recommend, personalize, and shape human experience, and they increasingly make or inform decisions with major impact on credit, insurance, healthcare, and immigration, to cite a few key fields with inherent critical risks.

In this context, we believe that the multimedia community should join efforts in the same direction, investigating how to transform current technical tools and methodologies so as to derive computational models that are transparent and inclusive. Information processing is one of the fundamental pillars of multimedia: whether data is processed for content delivery, experience, or systems applications, the automatic analysis of content is used in every corner of our community. Typical risks of large-scale computational models include model bias and algorithmic discrimination. These risks become particularly prominent in the multimedia field, which has historically focused on user-centered technologies. This is why it is crucial to start bringing the notions of fairness, accountability and transparency into ACM Multimedia.

ACM Multimedia 2019 in Nice will benefit from two main initiatives to start engaging with the theme of Fairness, Accountability and Transparency. First, one of the workshops co-located with ACM Multimedia 2019 (as mentioned above) deals with Fairness, Accountability and Transparency in Multimedia (FAT/MM, held on October 27th). The FAT/MM workshop is the first attempt to foster research efforts that address fairness, accountability and transparency issues in the multimedia field. To ensure a healthy and constructive development of the best multimedia technologies, the workshop offers a space to discuss how to develop fair, unbiased, representative, and transparent multimedia models, bringing together researchers from different areas to present computational solutions to these issues.

Second, one of the two selected SIGMM Conference Ambassadors for 2019 attended the FATE/CV workshop at CVPR earlier this year, identified a speaker who could be of great interest to the multimedia field, and invited them to FAT/MM to meet and discuss with the multimedia community. The selected paper covers topics such as age bias in datasets and the impact this could have on real-world applications, such as autonomous driving or recommendation systems.

We hope that, by organising and getting strongly involved in these two initiatives, we can raise awareness within our community and ultimately create a group of researchers interested in analysing and solving potential issues associated with fairness, accountability and transparency in multimedia.

The V3C1 Dataset: Advancing the State of the Art in Video Retrieval

Download

In order to download the video dataset as well as its provided analysis data, please follow the instructions described here:

https://github.com/klschoef/V3C1Analysis/blob/master/README.md

Introduction

Standardized datasets are of vital importance in multimedia research, as they form the basis for reproducible experiments and evaluations. In the area of video retrieval, widely used datasets such as the IACC [5], which has formed the basis for the TRECVID Ad-Hoc Video Search Task and other retrieval-related challenges, have started to show their age. For example, IACC is no longer representative of video content as it is found in the wild [7]. This is illustrated by the figures below, showing the distribution of video age and duration across various datasets in comparison with a sample drawn from Vimeo and YouTube.

[Figures: distribution of video age and of video duration across various datasets, compared with a sample drawn from Vimeo and YouTube]

Its recently released spiritual successor, the Vimeo Creative Commons Collection (V3C) [3], aims to remedy this discrepancy by offering a collection of freely reusable content sourced from the video hosting platform Vimeo (https://vimeo.com). The figures below show the age and duration distributions of the Vimeo sample from [7] in comparison with the properties of the V3C.

[Figures: age and duration distributions of the Vimeo sample compared with the V3C]

The V3C comprises three shards, consisting of 1,000, 1,300 and 1,500 hours of video content, respectively. It contains not only the original videos themselves, but also shot-boundary annotations as well as representative keyframes and thumbnail images for every such video shot. In addition, all the technical and semantic video metadata that was available on Vimeo is provided. The V3C has already been used in the 2019 edition of the Video Browser Showdown [2] and will also be used for the TRECVID AVS tasks (https://www-nlpir.nist.gov/projects/tv2019/) from 2019 onwards, with usage planned for several years to come. An accompanying video provides an overview of the type of content found within the dataset.

Dataset & Collections

The three shards of V3C (V3C1, V3C2, and V3C3) contain Creative Commons videos sourced from video hosting platform Vimeo. For this reason, the elements of the dataset may be freely used and publicly shared. The following table presents the composition of the dataset and the characteristics of its shards, as well as the information on the dataset as a whole.

Partition | V3C1 | V3C2 | V3C3 | Total
File Size (videos) | 1.3 TB | 1.6 TB | 1.8 TB | 4.8 TB
File Size (total) | 2.4 TB | 3.0 TB | 3.3 TB | 8.7 TB
Number of Videos | 7'475 | 9'760 | 11'215 | 28'450
Combined Video Duration | 1'000 hours, 23 minutes, 50 seconds | 1'300 hours, 52 minutes, 48 seconds | 1'500 hours, 8 minutes, 57 seconds | 3'801 hours, 25 minutes, 35 seconds
Mean Video Duration | 8 minutes, 2 seconds | 7 minutes, 59 seconds | 8 minutes, 1 second | 8 minutes, 1 second
Number of Segments | 1'082'659 | 1'425'454 | 1'635'580 | 4'143'693

Similar to IACC, V3C contains a master shot reference, which segments every video into non-overlapping shots based on the visual content of the videos. For every single shot, a representative keyframe is included, as well as a thumbnail version of that keyframe. Furthermore, for each video, identified by a unique ID, a metadata file is available that contains both technical and semantic information, such as the categories. Vimeo categorizes every video into categories and subcategories. Some of the categories were determined to be non-relevant for visually based multimedia retrieval and analysis tasks, and were dropped during the sourcing process of V3C. For simplicity, subcategories were generalized into their parent categories and are therefore not included. The remaining Vimeo categories are:

  • Arts & Design
  • Cameras & Techniques
  • Comedy
  • Fashion
  • Food
  • Instructionals
  • Music
  • Narrative
  • Reporting & Journals

Ground Truth and Analysis Data

As described above, the ground truth of the dataset consists of (deliberately over-segmented) shot boundaries as well as keyframes. Additionally, for the first shard of the V3C, the V3C1, we have already performed several analyses of the video content and metadata in order to provide an overview of the dataset [1].

In particular, we have analyzed specific content characteristics of the dataset, such as:

  • Bitrate distribution of the videos
  • Resolution distribution of the videos
  • Duration of shots
  • Dominant color of the keyframes
  • Similarity of the keyframes in terms of color layout, edge histogram, and deep features (weights extracted from the last fully-connected layer of GoogLeNet).
  • Confidence range distribution of the best class for shots detected by NasNet (using the best result out of the 1000 ImageNet classes) 
  • Number of different classes for a video detected by NasNet (using the best result out of the 1000 ImageNet classes)
  • Number of shots/keyframes for a specific content class
  • Number of shots/keyframes for a specific number of detected faces

This additional analysis data is available via GitHub, so that other researchers can take advantage of it. For example, one could use a specific subset of the dataset (only shots with blue keyframes, only videos with a specific bitrate or resolution, etc.) for performing further evaluations (e.g., for multimedia streaming or video coding, but of course also for image and video retrieval); a minimal sketch of such a filtering step is shown below. Additionally, thanks to the public dataset and the analysis data, one could easily create an image and video retrieval system and use it either for participating in competitions like the Video Browser Showdown [2] or for submitting other evaluation runs (e.g., for the TRECVID Ad-hoc Video Search Task).
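
To make this concrete, below is a minimal Python sketch of such a filtering step. It assumes, purely for illustration, that the analysis data has been exported to a CSV file named shot_analysis.csv with columns video_id, shot_id, dominant_color, bitrate_kbps and height; the actual files and field names in the GitHub repository may differ, so treat this as a sketch rather than a ready-made loader.

```python
import csv

def load_shots(path="shot_analysis.csv"):
    """Load per-shot analysis records from a (hypothetical) CSV export."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def select_subset(shots, color="blue", min_height=720, min_bitrate_kbps=2000):
    """Keep only shots whose keyframe is predominantly `color` and whose
    parent video meets minimum resolution and bitrate requirements."""
    return [
        s for s in shots
        if s["dominant_color"] == color
        and int(s["height"]) >= min_height
        and int(s["bitrate_kbps"]) >= min_bitrate_kbps
    ]

if __name__ == "__main__":
    shots = load_shots()
    subset = select_subset(shots)
    print(f"Selected {len(subset)} of {len(shots)} shots")
```

Because the analysis data is plain tabular metadata, the same pattern extends to filtering by shot duration, detected classes, or number of faces.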

Conclusion

In the broad field of multimedia retrieval and analytics, one of the key components of research is having useful and appropriate datasets in place to evaluate multimedia systems’ performance and benchmark their quality. The usage of standard and open datasets enables researchers to reproduce analytical experiments based on these datasets and thus validate their results. In this context, the V3C dataset proves to be very diverse in several useful aspects (upload time, visual concepts, resolutions, colors, etc.). It also has no dominating characteristics and exhibits low self-similarity (i.e., few near-duplicates) [3].

Further, the richness of V3C in terms of content diversity and content attributes enables benchmarking multimedia systems in close-to-reality test environments. In contrast to other video datasets (cf. YouTube-8M [4] and IACC [5]), V3C also provides a vast number of different video encodings and bitrates, enabling research on video retrieval and analysis tasks that depend on those attributes. The large number of different video resolutions (and, to a lesser extent, frame rates) makes this dataset interesting for video transport and storage applications such as the development of novel encoding schemes, streaming mechanisms or error-correction techniques. Finally, in contrast to many current datasets, V3C also provides support for creating queries for evaluation competitions, such as VBS and TRECVID [6].

References

[1] Fabian Berns, Luca Rossetto, Klaus Schoeffmann, Christian Beecks, and George Awad. 2019. V3C1 Dataset: An Evaluation of Content Characteristics. In Proceedings of the 2019 on International Conference on Multimedia Retrieval (ICMR ’19). ACM, New York, NY, USA, 334-338.

[2] Jakub Lokoč, Gregor Kovalčík, Bernd Münzer, Klaus Schöffmann, Werner Bailer, Ralph Gasser, Stefanos Vrochidis, Phuong Anh Nguyen, Sitapa Rujikietgumjorn, and Kai Uwe Barthel. 2019. Interactive Search or Sequential Browsing? A Detailed Analysis of the Video Browser Showdown 2018. ACM Trans. Multimedia Comput. Commun. Appl. 15, 1, Article 29 (February 2019), 18 pages.

[3] Luca Rossetto, Heiko Schuldt, George Awad, and Asad A. Butt. 2019. V3C – A Research Video Collection. In International Conference on Multimedia Modeling (MMM ’19). Springer, Cham, 349-360.

[4] Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan. 2016. YouTube-8M: A Large-Scale Video Classification Benchmark. arXiv preprint arXiv:1609.08675.

[5] Paul Over, George Awad, Alan F. Smeaton, Colum Foley, and James Lanagan. 2009. Creating a web-scale video collection for research. In Proceedings of the 1st workshop on Web-scale multimedia corpus (WSMC ’09). ACM, New York, NY, USA, 25-32. 

[6] Smeaton, A. F., Over, P., and Kraaij, W. 2006. Evaluation campaigns and TRECVid. In Proceedings of the 8th ACM International Workshop on Multimedia Information Retrieval (Santa Barbara, California, USA, October 26 – 27, 2006). MIR ’06. ACM Press, New York, NY, 321-330.

[7] Luca Rossetto and Heiko Schuldt. 2017. Web Video in Numbers – An Analysis of Web-Video Metadata. arXiv preprint arXiv:1707.01340.

JPEG Column: 83rd JPEG Meeting in Geneva, Switzerland

The 83rd JPEG meeting was held in Geneva, Switzerland.

The meeting was very dense due to the multiple activities taking place. Beyond the various standardization activities, such as the new JPEG XL, JPEG Pleno, JPEG XS, HTJ2K and JPEG Systems, the 83rd JPEG meeting included the report and discussion of a new exploration study on the use of learning-based methods for image coding, as well as two successful workshops, one on digital holography applications and systems and the third in a series on media blockchain technology.

The new exploration study on the use of learning-based methods for image coding was initiated at the previous (82nd) JPEG meeting in Lisbon, Portugal. The initial approach provided very promising results and might establish a new alternative for future image representations.

The workshop on digital holography applications and systems revealed the state of the art in industry applications and current technical solutions. It covered applications such as holographic microscopy, tomography, printing and display. Moreover, insights were provided on state-of-the-art holographic coding technologies and quality assessment procedures. The workshop allowed a very fruitful exchange of ideas between the different invited parties and JPEG experts.

The third workshop in a series organized around media blockchain technology featured several talks in which academia and industry shared their views on this emerging technology. The workshop ended with a panel where multiple questions were further elaborated by the panelists, laying the ground for a better understanding of the possible role of blockchain in media technology in the near future.

Two new logos for JPEG Pleno and JPEG XL were approved and released during the Geneva meeting.


The two new logos for JPEG Pleno and JPEG XL.

The 83rd JPEG meeting had the following highlights:

  • New exploration studies on JPEG AI
  • The new Image Coding System JPEG XL
  • JPEG Pleno
  • JPEG XS
  • HTJ2K
  • JPEG Media Blockchain Technology
  • JPEG Systems – Privacy, Security & IPR, JPSearch and JPEG in HEIF

In the following, a short summary of the most relevant achievements of the 83rd meeting in Geneva, Switzerland, is presented.

 

JPEG AI

The JPEG Committee is pleased to announce that it has started exploration studies on the use of learning-based solutions for its standards.

In the last few years, several efficient learning-based image coding solutions have been proposed, mainly with improved neural network models. These advances exploit the availability of large image datasets and special hardware, such as the highly parallelizable graphic processing units (GPUs). Recognizing that this area has received many contributions recently and it is considered critical for the future of a rich multimedia ecosystem, JPEG has created the JPEG AI AhG group to study promising learning-based image codecs with a precise and well-defined quality evaluation methodology.

In this meeting, a taxonomy was proposed and available solutions from the literature were organized along different dimensions. In addition, a list of promising learning-based image compression implementations and potential datasets for future use was gathered.

JPEG XL

The JPEG Committee continues to develop the JPEG XL Image Coding System, a standard for image coding that offers substantially better compression efficiency than relevant alternative image formats, along with features desirable for web distribution and efficient compression of high quality images.

Software for the JPEG XL verification model has been implemented. A series of experiments showed promising results for lossy, lossless and progressive coding. In particular, photos can be stored with significant savings in size compared to equivalent-quality JPEG files. Additionally, existing JPEG files can also be considerably reduced in size (for faster download) while retaining the ability to later reproduce the exact JPEG file. Moreover, lossless storage of images is possible with major savings in size compared to PNG. Further refinements to the software and experiments (including enhancement of existing JPEG files, and animations) will follow.

JPEG Pleno

The JPEG Committee has three activities in JPEG Pleno: Light Field, Point Cloud, and Holographic image coding. A generic box-based syntax has been defined that allows these modalities to be signaled either independently or in combination, composing a plenoptic scene represented by different modalities. The JPEG Pleno system also includes a reference grid system that supports the positioning of the respective modalities. The generic file format and reference grid system are defined in Part 1 of the standard, which is currently under development. Part 2 of the standard covers light field coding and supports two encoding mechanisms. The launch of specifications for point cloud and holographic content is under study by the JPEG Committee.

JPEG XS

The JPEG Committee is pleased to announce the creation of an Amendment to the JPEG XS Core Coding System defining the use of the codec for raw image sensor data. The JPEG XS project aims at the standardization of a visually lossless, low-latency and lightweight compression scheme that can be used as a mezzanine codec in various markets. Among the targeted use cases for raw image sensor compression, one can cite video transport over professional video links (SDI, IP, Ethernet), real-time video storage in and outside of cameras, memory buffers, machine vision systems, and data compression on board autonomous cars. One of the most important benefits of the JPEG XS codec is an end-to-end latency ranging from less than one line to a few lines of the image.

HTJ2K

The JPEG committee is pleased to announce a significant milestone, with ISO/IEC 15444-15 High-Throughput JPEG 2000 (HTJ2K) submitted to ISO for immediate publication as International Standard. HTJ2K opens the door to higher encoding and decoding throughput for applications where JPEG 2000 is used today.

The HTJ2K algorithm has demonstrated an average tenfold increase in encoding and decoding throughput compared to the algorithm currently defined by JPEG 2000 Part 1. This increase in throughput results in an average coding efficiency loss of 10% or less in comparison to the most efficient modes of the block coding algorithm in JPEG 2000 Part 1 and enables mathematically lossless transcoding to and from JPEG 2000 Part 1 codestreams.

JPEG Media Blockchain Technology

In order to clearly identify the impact of blockchain and distributed ledger technologies on JPEG standards, the Committee has organized several workshops to interact with stakeholders in the domain. The programs and proceedings of these workshops are accessible on the JPEG website:

  1. 1st JPEG Workshop on Media Blockchain Proceedings, ISO/IEC JTC1/SC29/WG1, Vancouver, Canada, October 16th, 2018
  2. 2nd JPEG Workshop on Media Blockchain Proceedings, ISO/IEC JTC1/SC29/WG1, Lisbon, Portugal, January 22nd, 2019
  3. 3rd JPEG Workshop on Media Blockchain Proceedings, ISO/IEC JTC1/SC29/WG1, Geneva, Switzerland, March 20th, 2019

A 4th workshop is planned during the 84th JPEG meeting to be held in Brussels, Belgium, on July 16th, 2019. The JPEG Committee invites experts to participate in this upcoming workshop.

JPEG Systems – Privacy, Security & IPR, JPSearch, and JPEG-in-HEIF.

At the 83rd meeting, JPEG Systems made significant progress towards improving users’ privacy by completing the DIS text of ISO/IEC 19566-4 “Privacy, Security, and IPR Features”, which will be released for ballot. JPEG Systems also continued to progress on image search and retrieval with the FDIS text release of JPSearch ISO/IEC 24800 Part 2 (2nd edition). Finally, support for JPEG 2000, JPEG XR, and JPEG XS images encapsulated in ISO/IEC 15444-12 is progressing towards IS stage; this enables these JPEG images to be encapsulated in ISO base media file formats, such as the ISO/IEC 23008-12 High Efficiency Image File Format (HEIF).

Final Quote

“Intelligent codecs might redesign the future of media compression. JPEG can accelerate this trend by producing the first AI-based image coding standard,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JPEG, JPEG 2000, JPEG XR, JPSearch, JPEG XT and more recently, the JPEG XS, JPEG Systems, JPEG Pleno and JPEG XL families of imaging standards.

The JPEG Committee nominally meets four times a year, in different world locations. The 82nd JPEG Meeting was held on 19-25 January 2019, in Lisbon, Portugal. The next (84th) JPEG Meeting will be held on 13-19 July 2019, in Brussels, Belgium.

More information about JPEG and its work is available at jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list.

Future JPEG meetings are planned as follows:

  • No 84, Brussels, Belgium, July 13 to 19, 2019
  • No 85, San Jose, California, U.S.A., November 2 to 8, 2019
  • No 86, Sydney, Australia, January 18 to 24, 2020

MPEG Column: 126th MPEG Meeting in Geneva, Switzerland

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The 126th MPEG meeting concluded on March 29, 2019 in Geneva, Switzerland with the following topics:

  • Three Degrees of Freedom Plus (3DoF+) – MPEG evaluates responses to the Call for Proposal and starts a new project on Metadata for Immersive Video
  • Neural Network Compression for Multimedia Applications – MPEG evaluates responses to the Call for Proposal and kicks off its technical work
  • Low Complexity Enhancement Video Coding – MPEG evaluates responses to the Call for Proposal and selects a Test Model for further development
  • Point Cloud Compression – MPEG promotes its Geometry-based Point Cloud Compression (G-PCC) technology to the Committee Draft (CD) stage
  • MPEG Media Transport (MMT) – MPEG approves 3rd Edition of Final Draft International Standard
  • MPEG-G – MPEG-G standards reach Draft International Standard for Application Program Interfaces (APIs) and Metadata technologies

The corresponding press release of the 126th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/126

Three Degrees of Freedom Plus (3DoF+)

MPEG evaluates responses to the Call for Proposal and starts a new project on Metadata for Immersive Video

MPEG’s support for 360-degree video — also referred to as omnidirectional video — is achieved using the Omnidirectional Media Format (OMAF) and Supplemental Enhancement Information (SEI) messages for High Efficiency Video Coding (HEVC). It basically enables the utilization of the tiling feature of HEVC to implement 3DoF applications and services, e.g., users consuming 360-degree content using a head mounted display (HMD). However, rendering flat 360-degree video may generate visual discomfort when objects close to the viewer are rendered. The interactive parallax feature of Three Degrees of Freedom Plus (3DoF+) will provide viewers with visual content that more closely mimics natural vision, but within a limited range of viewer motion.

At its 126th meeting, MPEG received five responses to the Call for Proposals (CfP) on 3DoF+ Visual. Subjective evaluations showed that adding the interactive motion parallax to 360-degree video will be possible. Based on the subjective and objective evaluation, a new project was launched, which will be named Metadata for Immersive Video. A first version of a Working Draft (WD) and corresponding Test Model (TM) were designed to combine technical aspects from multiple responses to the call. The current schedule for the project anticipates Final Draft International Standard (FDIS) in July 2020.

Research aspects: Subjective evaluations in the context of 3DoF+ but also immersive media services in general are actively researched within the multimedia research community (e.g., ACM SIGMM/SIGCHI, QoMEX) resulting in a plethora of research papers. One apparent open issue is the gap between scientific/fundamental research and standards developing organizations (SDOs) and industry fora which often address the same problem space but sometimes adopt different methodologies, approaches, tools, etc. However, MPEG (and also other SDOs) often organize public workshops and there will be one during the next meeting, specifically on July 10, 2019 in Gothenburg, Sweden which will be about “Coding Technologies for Immersive Audio/Visual Experiences”. Further details are available here.

Neural Network Compression for Multimedia Applications

MPEG evaluates responses to the Call for Proposal and kicks off its technical work

Artificial neural networks have been adopted for a broad range of tasks in multimedia analysis and processing, such as visual and acoustic classification, extraction of multimedia descriptors or image and video coding. The trained neural networks for these applications contain a large number of parameters (i.e., weights), resulting in a considerable size. Thus, transferring them to a number of clients using them in applications (e.g., mobile phones, smart cameras) requires compressed representation of neural networks.

At its 126th meeting, MPEG analyzed nine technologies submitted by industry leaders as responses to the Call for Proposals (CfP) for Neural Network Compression. These technologies address compressing neural network parameters in order to reduce their size for transmission and the efficiency of using them, while not or only moderately reducing their performance in specific multimedia applications.

After a formal evaluation of submissions, MPEG identified three main technology components in the compression pipeline, which will be further studied in the development of the standard. A key conclusion is that with the proposed technologies, a compression to 10% or less of the original size can be achieved with no or negligible performance loss, where this performance is measured as classification accuracy in image and audio classification, matching rate in visual descriptor matching, and PSNR reduction in image coding. Some of these technologies also result in the reduction of the computational complexity of using the neural network or can benefit from specific capabilities of the target hardware (e.g., support for fixed point operations).
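
As a rough, purely illustrative indication of why reductions of this order are plausible (explicitly not one of the technologies submitted to the CfP), the Python sketch below uniformly quantizes a float32 weight tensor to 8-bit symbols and uses the empirical entropy of those symbols as a lower bound on the coded size; all names and parameters are assumptions made for this example.

```python
import numpy as np

def quantize(weights: np.ndarray, bits: int = 8):
    """Uniformly quantize float32 weights to `bits`-bit integer symbols."""
    lo, hi = float(weights.min()), float(weights.max())
    step = (hi - lo) / (2 ** bits - 1) if hi > lo else 1.0
    symbols = np.round((weights - lo) / step).astype(np.uint16)
    dequantized = symbols.astype(np.float32) * step + lo
    return symbols, dequantized

def entropy_bits_per_symbol(symbols: np.ndarray) -> float:
    """Empirical entropy of the symbol distribution, in bits per symbol."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    weights = np.random.randn(1_000_000).astype(np.float32)  # stand-in for one layer
    symbols, reconstructed = quantize(weights, bits=8)
    bits_per_weight = entropy_bits_per_symbol(symbols)
    print(f"coded size ~ {bits_per_weight / 32:.1%} of float32 storage, "
          f"max reconstruction error {np.abs(weights - reconstructed).max():.4f}")
```

Actual proposals typically combine such quantization with pruning, parameter sharing and dedicated entropy coding, and evaluate the effect on task performance rather than only on reconstruction error.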

Research aspects: This topic has been addressed already in previous articles here and here. An interesting observation after this meeting is that apparently the compression efficiency is remarkable, specifically as the performance loss is negligible for specific application domains. However, results are based on certain applications and, thus, general conclusions regarding the compression of neural networks as well as how to evaluate its performance are still subject to future work. Nevertheless, MPEG is certainly leading this activity which could become more and more important as more applications and services rely on AI-based techniques.

Low Complexity Enhancement Video Coding

MPEG evaluates responses to the Call for Proposal and selects a Test Model for further development

MPEG started a new work item referred to as Low Complexity Enhancement Video Coding (LCEVC), which will be added as part 2 of the MPEG-5 suite of codecs. The new standard is aimed at bridging the gap between two successive generations of codecs by providing a codec-agile extension to existing video codecs that improves coding efficiency and can be readily deployed via software upgrade and with sustainable power consumption.

The target is to achieve:

  • coding efficiency close to High Efficiency Video Coding (HEVC) Main 10 by leveraging Advanced Video Coding (AVC) Main Profile and
  • coding efficiency close to upcoming next generation video codecs by leveraging HEVC Main 10.

This coding efficiency should be achieved while maintaining overall encoding and decoding complexity lower than that of the leveraged codecs (i.e., AVC and HEVC, respectively) when used in isolation at full resolution. This target has been met, and one of the responses to the CfP will serve as the starting point and test model for the standard. The new standard is expected to become part of the MPEG-5 suite of codecs and its development is expected to be completed in 2020.
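
To build intuition for what such an enhancement layer does, here is a toy Python sketch (using only NumPy) of the general layering idea: a stand-in "base codec" operates on a 2x downscaled picture, while an enhancement layer carries a coarsely quantized full-resolution residual. This reflects only the general principle, not the actual LCEVC tools or bitstream syntax.

```python
import numpy as np

def downscale2x(img):
    """Average 2x2 blocks (assumes even dimensions)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2x(img):
    """Nearest-neighbour upscaling back to full resolution."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def fake_base_codec(img, step=8.0):
    """Stand-in for an existing codec: coarse uniform quantization."""
    return np.round(img / step) * step

def encode_layered(frame, residual_step=4.0):
    base = fake_base_codec(downscale2x(frame))                   # base layer
    prediction = upscale2x(base)                                 # upsampled base
    residual = np.round((frame - prediction) / residual_step)    # enhancement layer
    return base, residual

def decode_layered(base, residual, residual_step=4.0):
    return upscale2x(base) + residual * residual_step

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (144, 176)).astype(np.float32)
    base, residual = encode_layered(frame)
    recon = decode_layered(base, residual)
    print("mean abs error:", float(np.abs(frame - recon).mean()))
```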

Research aspects: In addition to VVC and EVC, LCEVC is now the third video coding project within MPEG addressing requirements and needs going beyond HEVC. As usual, research mainly focuses on compression efficiency, but a general trend in video coding is observable that favors software-based solutions over pure hardware coding tools. As such, complexity at both encoder and decoder, as well as power efficiency, are additional factors to be taken into account. Other issues are related to business aspects, which are typically discussed elsewhere, e.g., here.

Point Cloud Compression

MPEG promotes its Geometry-based Point Cloud Compression (G-PCC) technology to the Committee Draft (CD) stage

MPEG’s Geometry-based Point Cloud Compression (G-PCC) standard addresses lossless and lossy coding of time-varying 3D point clouds with associated attributes such as color and material properties. This technology is appropriate especially for sparse point clouds.

MPEG’s Video-based Point Cloud Compression (V-PCC) addresses the same problem but for dense point clouds, by projecting the (typically dense) 3D point clouds onto planes, and then processing the resulting sequences of 2D images with video compression techniques.

G-PCC provides a generalized approach, which directly codes the 3D geometry to exploit any redundancy found in the point cloud itself and is complementary to V-PCC and particularly useful for sparse point clouds representing large environments.

Point clouds are typically represented by extremely large amounts of data, which is a significant barrier for mass-market applications. However, the relative ease of capturing and rendering spatial information compared to other volumetric video representations makes point clouds increasingly popular for presenting immersive volumetric data. The current implementation of a lossless, intra-frame G-PCC encoder provides a compression ratio of up to 10:1, and lossy coding with acceptable quality at ratios of up to 35:1.
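
For readers unfamiliar with geometry-based coding, the following purely illustrative Python sketch shows the first steps such an approach builds on: quantizing point coordinates to a voxel grid and traversing an octree, emitting one 8-bit occupancy code per occupied node. An entropy coder operating on these occupancy codes (plus attribute coding) is where an actual codec such as G-PCC obtains its compression; the grid depth and helper names here are assumptions for the example.

```python
import numpy as np

def voxelize(points, depth=10):
    """Map floating-point coordinates to integer voxels on a 2^depth grid."""
    mins = points.min(axis=0)
    span = float((points.max(axis=0) - mins).max()) or 1.0
    v = np.floor((points - mins) / span * (2 ** depth - 1)).astype(np.int64)
    return np.unique(v, axis=0)  # duplicate points collapse into one voxel

def octree_occupancy(voxels, depth=10):
    """Return one 8-bit occupancy code per occupied octree node."""
    codes = []
    def recurse(vox, level):
        if level == 0 or len(vox) == 0:
            return
        bit = level - 1
        children = (((vox[:, 0] >> bit) & 1) << 2 |
                    ((vox[:, 1] >> bit) & 1) << 1 |
                    ((vox[:, 2] >> bit) & 1))
        occupancy = 0
        for c in range(8):
            if (children == c).any():
                occupancy |= 1 << c
        codes.append(occupancy)
        for c in range(8):
            mask = children == c
            if mask.any():
                recurse(vox[mask], level - 1)
    recurse(voxels, depth)
    return codes

if __name__ == "__main__":
    pts = np.random.rand(5000, 3).astype(np.float32)
    vox = voxelize(pts, depth=10)
    codes = octree_occupancy(vox, depth=10)
    print(f"{len(vox)} occupied voxels -> {len(codes)} occupancy bytes before entropy coding")
```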

Research aspects: After V-PCC MPEG has now promoted G-PCC to CD but, in principle, the same research aspects are relevant as discussed here. Thus, coding efficiency is the number one performance metric but also coding complexity and power consumption needs to be considered to enable industry adoption. Systems technologies and adaptive streaming are actively researched within the multimedia research community, specifically ACM MM and ACM MMSys.

MPEG Media Transport (MMT)

MPEG approves 3rd Edition of Final Draft International Standard

MMT 3rd edition will introduce two aspects:

  • enhancements for mobile environments and
  • support of Contents Delivery Networks (CDNs).

The support for multipath delivery will enable delivery of services over more than one network connection concurrently, which is specifically useful for mobile devices that can support more than one connection at a time.

Additionally, support for intelligent network entities involved in media services (i.e., Media Aware Network Entities (MANEs)) will make MMT-based services adapt faster and better to changes in the mobile network. Since support for load balancing is an important feature of CDN-based content delivery, messages for DNS management, media resource updates, and media requests are being added in this edition.

Ongoing developments within MMT will add support for the usage of MMT over QUIC (Quick UDP Internet Connections) and support of FCAST in the context of MMT.

Research aspects: Multimedia delivery/transport is still an important issue, specifically as multimedia data on the internet is increasing much faster than network bandwidth. In particular, the multimedia research community (i.e., ACM MM and ACM MMSys) is looking into novel approaches and tools utilizing existing and emerging protocols and techniques like HTTP/2, HTTP/3 (QUIC), WebRTC, and Information-Centric Networking (ICN). One question, however, remains: what is the next big thing in multimedia delivery/transport? Currently we are certainly in a phase where tools like adaptive HTTP streaming (HAS) have reached maturity and the multimedia research community is eager to work on new topics in this domain.

Report from ACM MM 2018 – by Ana García del Molino

Seoul, what a beautiful place to host the premier conference on multimedia! Living in never-ending summer Singapore, I fell in love with the autumn colours of this city. The 26th edition of the ACM International Conference on Multimedia was held on October 22-26 of 2018 at the Lotte Hotel in Seoul, South Korea. It packed a full program including a very diverse range of workshops and tutorials, oral and poster presentations, art exhibits, interactive demos, competitions, industrial booths, and plenty of networking opportunities.

For me, this edition was a special one. About to graduate, with my thesis half written, I was presenting two papers. So of course, I was both nervous and excited. I had to fly to Seoul a few days ahead just to prepare myself! I was so motivated, I somehow managed to get myself a Best Social Media Reporter Award (who would have said… Me! A reporter!).

So, enough with the intro. Let’s get to the juice. What happened in Seoul between the 22nd and 26th of October 2018?

The first and last day of the conference were dedicated to workshops and tutorials. These were a mix of deep-learning-themed topics and social applications of multimedia. The sessions included tutorials like “Interactive Video Search: Where is the User in the Age of Deep Learning?”, which discussed the importance of the user in the collection of datasets, evaluation, and interactive search, as opposed to using deep learning to solve challenges with big labelled datasets. In “Deep Learning Interpretation”, Jitao Sang presented the main multimedia problems that cannot be addressed using deep learning. On the other hand, new and important trends related to social media (analysis of information diffusion and contagion, user activities and networking, prediction of real-world events, etc.) were discussed in the tutorial “Social and Political Event Analysis using Rich Media”. The workshops were mainly user-centred, with special interest in affective computing and emotion analysis and its use for multimedia (EE-USAD, ASMMC – MMAC 2018, AVEC 2018).

The conference kick-started with a wonderful keynote by Marianna Obrist. With “Don’t just Look – Smell, Taste, and Feel the Interaction” she showed us how to bring art into 4D by using technology, driving us through a full sensory experience that let us see, hear, and almost touch and smell. Ernest Edmonds also delved into how to mix art and multimedia in “What has art got to do with it?” but this time the other way around: what can multimedia research learn from the artists? Three industry speakers completed the keynote program. Xian-Sheng Hua from Alibaba Group shared their efforts towards visual Intelligence in “Challenges and Practices of Large-Scale Visual Intelligence in the Real-World”. Gary Geunbae Lee shared Samsung’s AI user experience strategy in “Living with Artificial Intelligence Technology in Connected Devices around Us.” And Bowen Zhou presented JD.com’s brand-new concept of Retail as a Service in “Transforming Retailing Experiences with Artificial Intelligence”.

This year’s program included 209 full papers out of a total of 757 submissions. 64 papers were allocated 15-minute oral presentations, while the others got a 90-second spotlight slot in the fast-forward sessions. The poster sessions and the oral sessions ran at the same time. While this was an inconvenience for poster presenters, who had to either leave their poster to attend the oral sessions or miss them, the coffee breaks took place at the same location as the posters, so that was a win-win: chit-chat while having cookies and fruits? I’m in! In terms of content, half of the submissions went to only two areas: Multimedia and Vision, and Deep Learning for Multimedia. But who am I to judge, when I had two of those myself! Many members of the community noted that the conference is becoming more and more about deep learning, and less multimodal. To compensate, the workshops, tutorials and demos were mostly pure multimedia.

The challenges, competitions, art exhibits and demos happened in the afternoons, so at times it was hard to choose where to head to. So many interesting things happening all around the place! The art exhibit had some really cool interactive art installations, such as “Cellular Music”, that created music from visual motion. Among the demos, I found particularly interesting AniDance, an LSTM-based algorithm that made 3D models dance to the given music; SoniControl, an ultrasonic firewall for NFC protection; MusicMapp, a platform to augment how we experience music; and The Influence Map project, to explore who has influenced each scientist, and who did they most influence through their career.

Regarding diversity, I feel there is still a long way to go. Being in Asia, it makes sense that almost half of the attendees came from China. However, the submission numbers speak for themselves: less than 20% of submissions came from outside Asia, with just one submission from Africa (that’s 0.13%!). Diversity is not only about gender, folks! I feel more efforts are needed to facilitate the integration of more collectives into the multimedia community. One step at a time.

The next edition will take place at the NICE ACROPOLIS Convention Center in Nice, France from 21-25 October 2019. The ACM reproducibility badge system will be implemented for the first time at this 27th edition, so we may be seeing many more open-sourced projects. I am so looking forward to this!