Practical Guide to Using the YFCC100M and MMCOMMONS on a Budget

 

The Yahoo-Flickr Creative Commons 100 Million (YFCC100M), the largest freely usable multimedia dataset to have been released so far, is widely used by students, researchers and engineers on topics in multimedia that range from computer vision to machine learning. However, its sheer volume, one of the traits that make the dataset unique and valuable, can pose a barrier to those who do not have access to powerful computing resources. In this article, we introduce useful information and tools to boost the usability and accessibility of the YFCC100M, including the supplemental material provided by the Multimedia Commons (MMCOMMONS) community. In particular, we provide a practical guide on how to set up a feasible and cost-effective research and development environment, locally or in the cloud, that can access the data without having to download it first.

YFCC100M: The Largest Multimodal Public Multimedia Dataset

Datasets are unarguably one of the most important components of multimedia research. In recent years there has been a growing demand for a dataset that is sufficiently large, truly multimodal, freely usable without licensing issues, and not specifically biased or targeted towards certain topics.

The YFCC100M dataset was created to meet these needs and overcome many of the issues affecting existing multimedia datasets. It is, so far, the largest publicly and freely available multimedia collection of metadata, representing about 99.2 million photos and 0.8 million videos, all of which were uploaded to Flickr between 2004 and 2014. The metadata included in the dataset comprise, for example, the title, description, tags, geotag, uploader information, capture device information, and URL to the original item. Additional information was later released in the form of expansion packs to supplement the dataset, namely autotags (presence of visual concepts, such as people, animals, objects, events, architecture, and scenery), Exif metadata, and human-readable place labels. All items in the dataset were published under one of the Creative Commons commercial or noncommercial licenses, whereby approximately 31.8% of the dataset is marked for commercial use and 17.3% has the most liberal license, which only requires attribution to the photographer. For academic purposes, the entire dataset can be used freely, which enables fair comparisons and reproducibility of published research works.

Two articles from the people who created the dataset, YFCC100M: The New Data in Multimedia Research and Ins and Outs of the YFCC100M, give more detail about the motivation, collection process, and interesting characteristics and statistics of the dataset. Since its initial release in 2014, the YFCC100M quickly gained popularity and is widely used in the research community. As of September 2017, the dataset had been requested over 1400 times and cited over 300 times in research publications, with topics in multimedia ranging from computer vision to machine learning. Specific topics include, but are not limited to, image and video search, tag prediction, captioning, learning word embeddings, travel routing, event detection, and geolocation prediction. Demos that use the YFCC100M can be found here.

Figure 1. Overview diagram of YFCC100M and Multimedia Commons.


MMCOMMONS: Making YFCC100M More Useful and Accessible

Out of the many things that the YFCC100M offers, its sheer volume is what makes it especially valuable, but it is also what makes the dataset not so trivial to work with. The metadata alone spans 100 million lines of text and is 45GB in size, not including the expansion packs. To work with the images and/or videos of YFCC100M, they need to be downloaded first using the individual URLs contained in the metadata. Aside from the time required to download all 100 million items, which would further occupy 18TB of disk space, the main problem is that a growing number of images and videos are becoming unavailable due to the natural lifecycle of digital items, where people occasionally delete what they have shared online. In addition, the time needed just to process and analyze the images and videos is generally prohibitive for students and scientists in small research groups who do not have access to high-performance computing resources.

These issues were noted upon the creation of the dataset and the MMCOMMONS community was formed to coordinate efforts for making the YFCC100M more useful and accessible to all, and to persist the contents of the dataset over time. To that end, MMCOMMONS provides an online repository that holds supplemental material to the dataset, which can be mounted and used to directly process the dataset in the cloud. The images and videos included in the YFCC100M can be accessed and even downloaded freely from an AWS S3 bucket, which was made possible courtesy of the Amazon Public Dataset program. Note that a tiny percentage of images and videos are missing from the bucket, as they had already disappeared when the organizers started the download process right after the YFCC100M was published. This notwithstanding, the images and videos hosted in the bucket still serve as a useful snapshot that researchers can use to ensure proper reproduction of and comparison with their work. Also included in the Multimedia Commons repository are visual and aural features extracted from the image and video content. The MMCOMMONS website provides a detailed description of conventional features and deep features, which include HybridNet, VGG and VLAD. These CNN features can be a good starting point for those who would like to jump right into using the dataset for their research or application.

The Multimedia Commons has been supporting multimedia researchers by generating annotations (see the YLI Media Event Detection and MediaEval Placing tasks), developing tools, as well as organizing competitions and workshops for ideas exchange and collaboration.

Setting up a Research Environment for YFCC100M and MMCOMMONS

Even with pre-extracted features available, doing meaningful research still requires a lot of computing power to process the large amount of YFCC100M and MMCOMMONS data. We would like to lower the barrier of entry for students and scientists who don’t have access to dedicated high-performance resources. In the following we describe how one can easily set up a research environment for handling the large collection. We show how Apache MXNet, Amazon EC2 Spot Instances and AWS S3 can be used to create a research and development environment that handles the data in a cost-efficient way, as well as further ways to work with the data more efficiently.

1) Use a subset of the dataset

It is not necessary to work with the entire dataset just because you can. Depending on the use case, it may make more sense to use a well-chosen subset. For instance, the YLI-GEO and YLI-MED subsets released by the MMCOMMONS can be useful for geolocation and multimedia event detection tasks, respectively. For other needs, the data can be filtered to generate a customized subset.
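If none of the precompiled subsets fits your needs, a customized subset can be generated with a simple filtering pass over the metadata dump. Below is a minimal sketch in Python; the file names and the column index of the user-tags field are assumptions and should be adapted to the actual field layout documented with the dataset.

```python
# Minimal sketch: filter the YFCC100M metadata dump into a custom subset.
# Assumption: the metadata is a tab-separated file and TAGS_COL points to the
# user-tags field -- check the dataset documentation for the actual field order.
import csv

TAGS_COL = 8          # hypothetical index of the user-tags field
KEYWORD = "sunset"    # example keyword to filter on

with open("yfcc100m_dataset", newline="", encoding="utf-8") as src, \
     open("yfcc100m_sunset_subset.tsv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src, delimiter="\t")
    writer = csv.writer(dst, delimiter="\t")
    for row in reader:
        # keep only lines whose tag field mentions the keyword
        if len(row) > TAGS_COL and KEYWORD in row[TAGS_COL].lower():
            writer.writerow(row)
```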

The YFCC100M Dataset Browser is a web-based tool you can use to search the dataset by keyword. It provides an interactive visualization with statistics that helps to better understand the search results. You can generate a list file (.csv) of the items that match the search query, which you can then use to fetch the images and/or videos afterwards. The limitations of this browser are that it only supports keyword search on the tags and that it only accepts ASCII text as valid input, so queries in non-Roman scripts, which require Unicode, are not supported. Also, queries can take up to a few seconds to return results.
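Once you have exported such a list file, fetching the corresponding items is a matter of iterating over the URLs it contains. The snippet below is a small sketch of this; the input file name and the column name photo_url are hypothetical and need to be adapted to the actual header of the exported .csv.

```python
# Minimal sketch: download the images listed in a .csv exported from the
# YFCC100M Dataset Browser. The column name "photo_url" is a placeholder.
import csv
import os
import urllib.request

os.makedirs("images", exist_ok=True)

with open("browser_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        url = row["photo_url"]                       # hypothetical column name
        target = os.path.join("images", os.path.basename(url))
        try:
            urllib.request.urlretrieve(url, target)
        except Exception as err:                     # some items no longer exist
            print(f"skipping {url}: {err}")
```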

A more flexible way to search the collection with lower latency is to set up your own Apache Solr server and index (a subset of) the metadata. For instance, the autotags metadata can be indexed to search for images that have visual concepts of interest. A step-by-step guide to setting up a Solr server environment with the dataset can be found here. You can write Solr queries in most programming languages by using one of the Solr wrappers.
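As an illustration, the snippet below sketches how such an index could be queried from Python with the pysolr wrapper. The core name yfcc100m and the field names autotags, photoid and downloadurl are assumptions; they depend entirely on how you configured your own Solr schema.

```python
# Minimal sketch: query a local Solr index of the YFCC100M metadata.
# Core and field names below are placeholders for your own schema.
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/yfcc100m", timeout=10)

# retrieve up to 100 items whose autotags mention "beach"
results = solr.search("autotags:beach", rows=100)
for doc in results:
    print(doc.get("photoid"), doc.get("downloadurl"))   # hypothetical fields
```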

2) Work directly with data from AWS S3

Apache MXNet, a deep learning framework you can run locally on your workstation, allows training with S3 data. Most training and inference modules in MXNet accept data iterators that can read data from and write data to a local drive as well as AWS S3.
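As a rough illustration, the sketch below creates an MXNet image iterator that reads a RecordIO file straight from S3. It assumes an MXNet build with S3 support enabled and AWS credentials available in the environment; the bucket path is a placeholder rather than the actual Multimedia Commons location.

```python
# Minimal sketch: an MXNet data iterator reading a RecordIO file from S3.
# "s3://example-bucket/yfcc100m-subset.rec" is a placeholder path.
import mxnet as mx

train_iter = mx.io.ImageRecordIter(
    path_imgrec="s3://example-bucket/yfcc100m-subset.rec",
    data_shape=(3, 224, 224),   # channels, height, width expected by the network
    batch_size=64,
    shuffle=True,
)

for batch in train_iter:
    # batch.data[0] is an NDArray of shape (batch_size, 3, 224, 224)
    print(batch.data[0].shape)
    break
```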

The MMCOMMONS provides a data iterator for YFCC100M images, stored as a RecordIO file, so you can process the images in the cloud without ever having to download them to your computer. If you are working with a subset that is sufficiently large, you can further filter it to generate a custom RecordIO file that suits your needs. Since the images stored in the RecordIO file are already resized and saved compactly, generating a RecordIO from an existing RecordIO file by filtering on-the-fly is more time and space efficient than downloading all images first and creating a RecordIO file from scratch. However, if you are using a subset that is relatively small, it is recommended to download just those images you need from S3 and then create a RecordIO file locally, as that will considerably speed up processing the data.
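A hedged sketch of such on-the-fly filtering is shown below: it copies packed records from an existing RecordIO file into a new one without decoding the images. The input and output paths and the set of record ids to keep are placeholders.

```python
# Minimal sketch: build a smaller RecordIO file by copying selected records.
# Paths and the keep_ids set are placeholders.
import mxnet as mx

keep_ids = {12345, 67890}   # hypothetical ids selected from the metadata

src = mx.recordio.MXRecordIO("yfcc100m-images.rec", "r")
dst = mx.recordio.MXRecordIO("my-subset.rec", "w")

while True:
    item = src.read()
    if item is None:                        # end of file
        break
    header, img = mx.recordio.unpack(item)  # header carries the record id/label
    if header.id in keep_ids:
        dst.write(item)                     # copy the packed record unchanged

src.close()
dst.close()
```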

While one would generally set up Apache MXNet to run locally, note that the I/O latency of using S3 data can be greatly reduced by running it on an Amazon EC2 instance in the same region as where the S3 data is stored (namely, us-west-2, Oregon); see Figure 2. Instructions for setting up a deep learning environment on Amazon EC2 can be found here.

Figure 2. The diagram shows a cost-efficient setup with a Spot Instance in the same region (us-west-2) as the S3 buckets that house the YFCC100M and MMCOMMONS images/videos and RecordIO files. Data in the S3 buckets can be accessed in the same way from a researcher’s computer; the only downside is the longer latency for retrieving data from S3. Note that there are several Yahoo! Webscope buckets (I3set1-I3setN) that hold a copy of the YFCC100M, but you can only access it using the path you were assigned after requesting the dataset.


3) Save cost by using Amazon EC2 Spot Instances

Cloud computing has become considerably cheaper in recent years. However, the price for using a GPU instance to process the YFCC100M and MMCOMMONS can still be quite high. For instance, Amazon EC2’s on-demand p2.xlarge instance (with an NVIDIA Tesla K80 GPU and 12GB of GPU memory) costs 0.90 USD per hour in the us-west-2 region. This would amount to approximately $650 (€540) a month if used full-time.

One way to reduce the cost is to set up a persistent Spot Instance environment. If you request an EC2 Spot Instance, you can use the instance as long as its market price is below your maximum bidding price. If the market price goes beyond your maximum bid, the instance gets terminated after a two-minute warning. To deal with such frequent interruptions it is important to frequently save your intermediate results to persistent storage, such as AWS S3 or Amazon EFS (see the sketch below). The market price of the EC2 instance fluctuates, see Figure 3, so there is no guarantee as to how much you can save or how long you have to wait for your final results to be ready. But if you are willing to experiment with pricing: in our case we were able to reduce the costs by 75% during the period January-April 2017.
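One simple way to protect intermediate results against interruptions is to push checkpoints to S3 at regular intervals, for example with boto3 as sketched below. The bucket name, key prefix and checkpoint file name are placeholders.

```python
# Minimal sketch: copy training checkpoints to S3 so that work survives a
# Spot Instance interruption. Bucket and key prefix are placeholders.
import boto3

s3 = boto3.client("s3")

def save_checkpoint(local_path, epoch):
    # e.g. local_path = "model-0010.params" written by your training loop
    key = f"experiments/yfcc100m/checkpoint-{epoch:04d}.params"
    s3.upload_file(local_path, "my-research-bucket", key)

# call save_checkpoint(...) every few epochs inside the training loop
```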

Figure 3. You can check the current and past market price of different EC2 instance types from the Spot Instance Pricing History panel.


4) Apply for academic AWS credits

Consider applying for the AWS Cloud Credits for Research Program to receive AWS credits to run your research in the cloud. In fact, thanks to this grant we were able to release LocationNet, a pre-trained geolocation model that was trained on all geotagged YFCC100M images.

Conclusion

YFCC100M is at the moment the largest multimedia dataset released to the public, but its sheer volume poses a high barrier to actually using it. To boost the usability and accessibility of the dataset, the MMCOMMONS community provides an additional AWS S3 repository with tools, features, and annotations to facilitate creating a feasible research and development environment for those with fewer resources at their disposal. In this column, we provided a guide on how a subset of the dataset can be created for specific scenarios, how the hosted YFCC100M and MMCOMMONS data on S3 can be used directly for training a model with Apache MXNet, and finally how Spot Instances and academic AWS credits can make running experiments cheaper or even free.

Join the Multimedia Commons Community

Please let us know if you’re interested in contributing to the MMCOMMONS. This is a collaborative effort among research groups at several institutions (see below). We welcome contributions of annotations, features, and tools around the YFCC100M dataset, and may potentially be able to host them on AWS. What are you working on?

See this page for information about how to help out.

Acknowledgements:

This dataset would not have been possible without the effort of many people, especially those at Yahoo, Lawrence Livermore National Laboratory, International Computer Science Institute, Amazon, ISTI-CNR, and ITI-CERTH.

Opinion Column: Tracks, Reviews and Preliminary Works

Welcome to the first edition of the  SIGMM Community Discussion Column!

As promised in our introductory edition, this column will report highlights and lowlights of online discussion threads among the members of the Multimedia community (see our Facebook MM Community Discussion group).

After an initial poll, this quarter the community chose to discuss the reviewing process and structure of the SIGMM-sponsored conferences. We organized the discussion around three main sub-topics: importance of tracks, structure of the reviewing process, and value of preliminary works. We collected more than 50 contributions from the members of the Facebook MM Community Discussion group. Therefore, the following synthesis represents only these contributions. We encourage everyone to participate in the upcoming discussions, so that this column becomes more and more representative of the entire community.

In a nutshell, the community agreed that: we need more transparent communication and homogeneous rules across thematic areas; we need more useful rebuttals; there is no need for conflict of interest tracks; large conferences must protect preliminary and emergent research works. Solutions were suggested to improve these points.

Communication, Coordination and Transparency. All participants agreed that more vertical (from chairs to authors) and horizontal (between area chairs or technical program chairs) communication could improve the quality of both papers and reviews in SIGMM-sponsored conferences. For example, lack of transparency and communication regarding procedures might lead to uneven rules and deadlines across tracks.

Tracks. How should conference thematic areas be coordinated? The community’s view can be summarized into three main perspectives:

  1. Rule Homogeneity. The majority of participants agreed that big conferences should have thematic areas, and that tracks should be jointly coordinated by a technical program committee. Tracks are extremely important, but in order for the conference to give a single, unified message, as opposed to “multi-conferences”, the same review and selection process should apply to all tracks. Moreover, hosting a face-to-face global TPC meeting is key for a solid, homogeneous conference program.
  2. Non-uniform Selection Process to Help Emerging Areas. A substantial number of participants pointed out that one role of the track system is to help emerging subcommunities: thematic areas ensure a balanced programme with representation from less explored topics (for example, music retrieval or arts and multimedia). Under this perspective, while the reviewing process should be the same for all tracks, the selection phase could be non-uniform. “Mathematically applying a percentage rate per area” does not help select the actually high-quality papers across tracks: with a uniformly applied low acceptance rate rule, minor tracks might have only one or two papers accepted, despite the high quality of the submissions.
  3. Abolish Tracks. A minority of participants agreed that, similar to big conferences such as CVPR, tracks should be completely abolished. A rigid track-based structure makes it somewhat difficult for authors to choose the right track to submit to; moreover, reviewers and area chairs are often experts in more than one area. These issues could be addressed by a flexible structure where papers are assigned to area chairs and reviewers based on the topic.

Reviewing Process. How do we want the reviewing process to be? Here is the view of the community on four main points: rebuttal, reviewing instructions, conflict of interest, and reviewer assignment.

  1. Rebuttal: important, but we need to increase impact. The majority of participants agreed that rebuttal is helpful to increase review quality and to grant authors more room for discussion. However, it was pointed out that sometimes the rebuttal process is slightly overlooked by both reviewers and area chairs, thus decreasing the potential impact of the rebuttal phase. It was suggested that, in order to raise awareness on rebuttal’s value, SIGMM could publish statistics on the number of reviewers who changed their opinion after rebuttal. Moreover, proposed improvements on the rebuttal process included: (1) more time allocated for reviewers to have a discussion regarding the quality of the papers; (2) a post-rebuttal feedback where reviewers respond to authors’ rebuttal (to promote reviewers-authors discussion and increase awareness on both sides) and (3) a closer supervision of the area chairs.
  2. Reviewing Guidelines: complex, but they might help preliminary works. Do reviewing guidelines help reviewers write better reviews? For most participants, giving instructions to reviewers appears to be somewhat impractical, as reviewers do not necessarily read or follow the guidelines. A more feasible solution is to insert weak instructions through specific questions in the reviewing form (e.g. “could you rate the novelty of the paper?”). However, it was also pointed out that written rules could help area chairs justify the rejection of a bad review. Also, although reviewing instructions might change from track to track, general written rules regarding “what is a good paper” could help the reviewers understand what to accept. For example, clarification is needed on the depth of acceptable research works, and on how preliminary works should be evaluated, given the absence of a short paper track.
  3. Brave New Idea Track: ensuring scientific advancement. A few participants expressed their opinion regarding this track hosting novel, controversial research ideas. They remarked on the importance of such a track to ensure scientific advancement, and it was suggested that, in the future, this track could host exploratory works (former short papers), as preliminary research works are crucial to make a conference exciting.
  4. Conflict of Interest (COI) Track: perhaps we should abolish it. Participants almost unanimously agreed that a COI track is needed only when the conference management system is not able to handle conflicts on its own. It was suggested that, if that is not the case, a COI track might actually have an antithetical effect (is the COI track acceptance rate for ACM MM higher this year?).
  5. Choosing Reviewers: A Semi-Automated Process. The aim of the reviewer assignment procedure is to give the right papers to the right reviewers. How to make this procedure successful? Some participants supported the “fully manual assignment” option, where area chairs directly nominate reviewers for their own track. Others proposed a “fully automatic assignment”, based on an automated matching system such as the Toronto Paper Matching System (TPMS). A discussion followed, and eventually most participants agreed on a semi-automated process, with the TPMS first surfacing a relevant pool of reviewers (independent of tracks) and area chairs then manually intervening. Manual inspection by area chairs is crucial for inter-disciplinary papers needing reviews from experts from different areas.

Finally, during the discussion, a few observations and questions regarding the future of the community arose. For example: how to steer the direction of the conference, given the increase in the number of AI-related papers? How to support diversity of topics, and encourage papers in novel fields (e.g. arts and music) beyond the legacy (traditional multimedia topics)? Given the wide interest in such issues, we will include these discussion topics in our next pre-discussion poll. To participate in the next discussion, please visit and subscribe to the Facebook MM Community Discussion group, and raise your voice!

Xavier Alameda-Pineda and Miriam Redi.

Report from ACM MMSys 2017

–A report from Christian Timmerer, AAU/Bitmovin Austria

The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. It is a unique event targeting “multimedia systems” from various angles and views across all domains, instead of focusing on a specific aspect or data type. ACM MMSys’17 was held in Taipei, Taiwan, on June 20-23, 2017.

MMSys is a single-track conference which also hosts a series of workshops, namely NOSSDAV, MMVE, and NetGames. Since 2016, it kicks off with overview talks, and in 2017 we saw the following talks: “Geometric representations of 3D scenes” by Geraldine Morin; “Towards Understanding Truly Immersive Multimedia Experiences” by Niall Murray; “Rate Control In The Age Of Vision” by Ketan Mayer-Patel; “Humans, computers, delays and the joys of interaction” by Ragnhild Eg; “Context-aware, perception-guided workload characterization and resource scheduling on mobile phones for interactive applications” by Chung-Ta King and Chun-Han Lin.

Additionally, industry talks have been introduced: “Virtual Reality – The New Era of Future World” by WeiGing Ngang; “The innovation and challenge of Interactive streaming technology” by Wesley Kuo; “What challenges are we facing after Netflix revolutionized TV watching?” by Shuen-Huei Guan; “The overview of app streaming technology” by Sam Ding; “Semantic Awareness in 360 Streaming” by Shannon Chen; “On the frontiers of Video SaaS” by Sega Cheng.

An interesting set of keynotes presented different aspects related to multimedia systems and its co-located workshops:

  • Henry Fuchs, The AR/VR Renaissance: opportunities, pitfalls, and remaining problems
  • Julien Lai, Towards Large-scale Deployment of Intelligent Video Analytics Systems
  • Dah Ming Chiu, Smart Streaming of Panoramic Video
  • Bo Li, When Computation Meets Communication: The Case for Scheduling Resources in the Cloud
  • Polly Huang, Measuring Subjective QoE for Interactive System Design in the Mobile Era – Lessons Learned Studying Skype Calls

The program included a diverse set of topics such as immersive experiences in AR and VR, network optimization and delivery, multisensory experiences, processing, rendering, interaction, cloud-based multimedia, IoT connectivity, infrastructure, media streaming, and security. A vital aspect of MMSys is dedicated sessions for showcasing the latest developments in the area of multimedia systems and presenting datasets, which is important towards enabling reproducibility and sustainability in multimedia systems research.

The social events were a perfect venue for networking and in-depth discussions on how to advance the state of the art. A welcome reception was held at “LE BLE D’OR (Miramar)”, the conference banquet at the Taipei World Trade Center Club, and finally a tour to the Shilin Night Market was organized.

ACM MMSys 2017 issued the following awards:

  • The Best Paper Award  goes to “A Scalable and Privacy-Aware IoT Service for Live Video Analytics” by Junjue Wang (Carnegie Mellon University), Brandon Amos (Carnegie Mellon University), Anupam Das (Carnegie Mellon University), Padmanabhan Pillai (Intel Labs), Norman Sadeh (Carnegie Mellon University), and Mahadev Satyanarayanan (Carnegie Mellon University).
  • The Best Student Paper Award goes to “A Measurement Study of Oculus 360 Degree Video Streaming” by Chao Zhou (SUNY Binghamton), Zhenhua Li (Tsinghua University), and Yao Liu (SUNY Binghamton).
  • The NOSSDAV’17 Best Paper Award goes to “A Comparative Case Study of HTTP Adaptive Streaming Algorithms in Mobile Networks” by Theodoros Karagkioules (Huawei Technologies France/Telecom ParisTech), Cyril Concolato (Telecom ParisTech), Dimitrios Tsilimantos (Huawei Technologies France), Stefan Valentin (Huawei Technologies France).

Excellence in DASH award sponsored by the DASH-IF 

  • 1st place: “SAP: Stall-Aware Pacing for Improved DASH Video Experience in Cellular Networks” by Ahmed Zahran (University College Cork), Jason J. Quinlan (University College Cork), K. K. Ramakrishnan (University of California, Riverside), and Cormac J. Sreenan (University College Cork)
  • 2nd place: “Improving Video Quality in Crowded Networks Using a DANE” by Jan Willem Kleinrouweler, Britta Meixner and Pablo Cesar (Centrum Wiskunde & Informatica)
  • 3rd place: “Towards Bandwidth Efficient Adaptive Streaming of Omnidirectional Video over HTTP” by Mario Graf (Bitmovin Inc.), Christian Timmerer (Alpen-Adria-Universität Klagenfurt / Bitmovin Inc.), and Christopher Mueller (Bitmovin Inc.)

Finally, student travel grant awards were sponsored by SIGMM. All details, including nice pictures, can be found here.


ACM MMSys 2018 will be held in Amsterdam, The Netherlands, June 12 – 15, 2018 and includes the following tracks:

  • Research track: Submission deadline on November 30, 2017
  • Demo track: Submission deadline on February 25, 2018
  • Open Dataset & Software Track: Submission deadline on February 25, 2018

MMSys’18 co-locates the following workshops (with submission deadline on March 1, 2018):

  • MMVE2018: 10th International Workshop on Immersive Mixed and Virtual Environment Systems,
  • NetGames2018: 16th Annual Workshop on Network and Systems Support for Games,
  • NOSSDAV2018: 28th ACM SIGMM Workshop on Network and Operating Systems Support for Digital Audio and Video,
  • PV2018: 23rd Packet Video Workshop

MMSys’18 includes the following special sessions (submission deadline on December 15, 2017):

Impact of the New @sigmm Records

The SIGMM Records have been renewed, with the ambition of continuing to be a useful resource for the multimedia community. The intention is to provide a forum for (open) discussion and to become a primary source of information (and of inspiration!).

The new team (http://sigmm.hosting.acm.org/impressum/) has committed to lead the Records in the coming years, gathering relevant contributions in the following main clusters:

The team has also revitalized the presence of SIGMM on Social Media. SIGMM accounts on Facebook and Twitter have been created for disseminating relevant news, events and contributions for the SIGMM community. Moreover, a new award has been approved: the Best Social Media Reporters from each SIGMM conference will get a free registration to one of the SIGMM conferences within a period of one year. The award criteria are specified at http://sigmm.hosting.acm.org/2017/05/20/awarding-the-best-social-media-reporters/

The following paragraphs detail the impact of all these new activities in terms of the increased number of visitors and visits to the Records website (Figure 1), and broadened reach. All the statistics presented below have been collected since the publication of the June issue (July 29th, 2017).

Figure 1. Number of visitors and visits since the publication of the June issue


Visitors and Visits to the Records website

The daily number of visitors ranges approximately between 100 and 400. It has been noticed that this variation is strongly influenced by the publication of Social Media posts promoting contents published on the website. In the first month (since July 29th, one day after the publication of the issue), more than 13000 visitors were registered, and more than 20000 visitors have been registered to date (see Table 1 for detailed statistics). The number of visits to the different posts and pages of the website adds up to more than 100000. The top 5 countries with the highest number of visitors are also listed in Table 2. Likewise, the top 3 posts with the highest impact, in terms of number of visits and of Social Media shares (via the Social Media icons recently added in the posts and pages of the website), are listed in Table 3. As an example, the daily number of visits to the main page of the June issue is provided in Figure 2, with a total number of 224 visits since its publication.

Finally, the top 3 referring sites (i.e., external websites from which visitors have clicked an URL to access the Records website) are Facebook (>700 references), Google (>300 references) and Twitter (>100 references). So, it seems that Social Media is helping to increase the impact of the Records. More than 30 users have accessed the Records website through the SIGMM website (sigmm.org) as well.

Table 1. Number of visitors and visits to the SIGMM Records website

Period                     Visitors
Day                        ~100-400
Week                       ~2000-3000
Month                      ~8000-13000
Total (since July 29th)    20012 (102855 visits)

Table 2. Top 5 countries in terms of number of visitors

Rank Country Visitors
1 China 3339
2 United States 2634
3 India 1368
4 Germany 972
5 Brazil 731

Table 3. Top 3 posts on the Records website with highest impact

Post                               Date        Visits  Shares
Interview to Prof. Ramesh Jain     29/08/2017  619     103
Interview to Suranga Nanayakkara   13/09/2017  376     15
Standards Column: JPEG and MPEG    28/07/2017  273     44

Figure 2. Visits to the main page of the June issue since its publication (199 visits)

Impact of the Social Media channels

The use of Social Media includes a Facebook page and a Twitter account (@sigmm). The number of followers is still not high (27 followers on Facebook, 88 followers on Twitter), which is natural for recently created channels. However, the impact of the posts on these platforms, in terms of reach, likes and shares, is noteworthy. Tables 4 and 5 list the top 3 Facebook posts and tweets, respectively, with the highest impact up to now.

Table 4. Top 3 Facebook posts with highest impact

Post Date Reach (users) Likes Shares
>10K visitors in 3 weeks 21/08/2017 1347 7 4
Interview to Suranga Nanayakkara 13/09/2017 1297 89 3
Interview to Prof. Ramesh Jain 30/08/2017 645 28 4

Table 5. Top 3 tweets with highest impact

Post Date Likes Retweets
Announcing the publication of the June issue 28/07/2017 7 9
Announcing the availability of the official @sigmm account 8/09/2017 8 9
Social Media Reporter Award: Report from ICMR 2017 11/09/2017 5 8

Awarded Social Media Reporters

The Social Media co-chairs, with the approval of the SIGMM Executive Committee, have already started the process of selecting the Best Social Media Reporters from the latest SIGMM conferences. In particular, the winners have been Miriam Redi for ICMR 2017 (her post-summary of the conference is available at: http://sigmm.hosting.acm.org/2017/09/02/report-from-icmr-2017/) and Christian Timmerer for MMSYS 2017 (his post-summary of the conference is available at: http://sigmm.hosting.acm.org/2017/10/02/report-from-acm-mmsys-2017/). Congratulations!

The Editorial Team would like to take this opportunity to thank all the SIGMM members who use Social Media channels to share relevant news and information from the SIGMM community. We are convinced it is a very important service for the community.

We will keep pushing to improve the Records and extend their impact!

The Editorial Team.

JPEG Column: 76th JPEG Meeting in Turin, Italy

The 76th JPEG meeting was held at Politecnico di Torino, Turin, Italy, from 15 to 21 July. The current standardisation activities were complemented by the celebration of the 25th anniversary of the first JPEG standard. At the same time, JPEG pursues the development of different standardised solutions to meet the current challenges in imaging technology, namely emerging new applications and low-complexity image coding. The 76th JPEG meeting featured mainly the following highlights:

  • JPEG 25th anniversary of the first JPEG standard
  • High Throughput JPEG 2000
  • JPEG Pleno
  • JPEG XL
  • JPEG XS
  • JPEG Reference Software

In the following an overview of the main JPEG activities at the 76th meeting is given.

JPEG 25th anniversary of the first JPEG standard – JPEG is proud to celebrate the 25th anniversary of its first standard. This very successful standard won an Emmy award in 1995-96 and its usage is still rising, reaching in 2015 the impressive daily rate of over 3 billion images exchanged in just a few social networks. During the celebration, a number of early members of the committee were awarded for their contributions to this standard, namely Alain Léger, Birger Niss, Jorgen Vaaben and István Sebestyén. Richard Clark was also rewarded during the same ceremony for his long-lasting contribution as JPEG webmaster and his contributions to many JPEG standards. The celebration will continue at the next 77th JPEG meeting that will be held in Macau, China from 21 to 27 October 2017.


High Throughput JPEG 2000 – The JPEG committee is continuing its work towards the creation of a new Part 15 to the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K). In a significant milestone, the JPEG Committee has released a Call for Proposals that invites technical contributions to the HTJ2K activity. The deadline for an expression of interest is 1 October 2017, as detailed in the Call for Proposals, which is publicly available on the JPEG website at https://jpeg.org/jpeg2000/htj2k.html.

The objective of the HTJ2K activity is to identify and standardize an alternate block coding algorithm that can be used as a drop-in replacement for the block coding defined in JPEG 2000 Part-1. Based on existing evidence, it is believed that significant increases in encoding and decoding throughput are possible on modern software platforms, subject to small sacrifices in coding efficiency. An important focus of this activity is interoperability with existing systems and content libraries. To ensure this, the alternate block coding algorithm supports mathematically lossless transcoding between HTJ2K and JPEG 2000 Part-1 codestreams at the code-block level.

JPEG Pleno – The JPEG committee intends to provide a standard framework to facilitate capture, representation and exchange of omnidirectional, depth-enhanced, point cloud, light field, and holographic imaging modalities. JPEG Pleno aims at defining new tools for improved compression while providing advanced functionalities at the system level. Moreover, it targets to support data and metadata manipulation, editing, random access and interaction, protection of privacy and ownership rights as well as other security mechanisms. At the 76th JPEG meeting in Turin, Italy, responses to the call for proposals for JPEG Pleno light field image coding were evaluated using subjective and objective evaluation metrics, and a Generic JPEG Pleno Light Field Architecture was created. The JPEG committee defined three initial core experiments to be performed before the 77th JPEG meeting in Macau, China. Interested parties are invited to join these core experiments and JPEG Pleno standardization.

JPEG XL – The JPEG Committee is working on a new activity, known as Next generation Image Format, which aims to develop an image compression format that demonstrates higher compression efficiency at equivalent subjective quality of currently available formats and that supports features for both low-end and high-end use cases.  On the low end, the new format addresses image-rich user interfaces and web pages over bandwidth-constrained connections. On the high end, it targets efficient compression for high-quality images, including high bit depth, wide color gamut and high dynamic range imagery. A draft Call for Proposals (CfP) on JPEG XL has been issued for public comment, and is available on the JPEG website.

JPEG XS – This project aims at the standardization of a visually lossless low-latency lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry and Pro-AV markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After a Call for Proposals and the assessment of the submitted technologies, a test model for the upcoming JPEG XS standard was created. Several rounds of Core Experiments have allowed further improvement of the Core Coding System, the latest being reviewed during this 76th JPEG meeting in Torino. More core experiments are on their way, including subjective assessments. The JPEG committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process. Publication of the International Standard is expected for Q3 2018.

JPEG Reference Software – Together with the celebration of 25th anniversary of the first JPEG Standard, the committee continued with its important activities around the omnipresent JPEG image format; while all newer JPEG standards define a reference software guiding users in interpreting and helping them in implementing a given standard, no such references exist for the most popular image format of the Internet age. The JPEG committee therefore issued a call for proposals https://jpeg.org/items/20170728_cfp_jpeg_reference_software.html asking interested parties to participate in the submission and selection of valuable and stable implementations of JPEG (formally, Rec. ITU-T T.81 | ISO/IEC 10918-1).

 

Final Quote

“The experience shared by developers of the first JPEG standard during the celebration was an inspiring moment that will guide us to further the ongoing developments of standards responding to new challenges in imaging applications,” said Prof. Touradj Ebrahimi, the Convener of the JPEG committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission (ISO/IEC JTC 1/SC 29/WG 1), and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG group meets nominally three times a year, in Europe, North America and Asia. The latest, 76th meeting was held on July 15-21, 2017, in Torino, Italy. The next, 77th JPEG meeting will be held in October 2017 in Macau, China.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro and Frederik Temmermans of the JPEG Communication Subgroup at pr@jpeg.org.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG.

Future JPEG meetings are planned as follows:

  • No. 77, Macau, CN, 23 – 27 October 2017

 

Multidisciplinary Column: An Interview with Suranga Nanayakkara

 


 

Could you tell us a bit about your background, and what the road to your current position was?

I was born and raised in Sri Lanka, and with my mother being an electrical engineer by profession, it always fascinated me to watch her tinkering with the TV, the radio and other such things. At the age of 19, I moved to Singapore to pursue my Bachelor’s degree in electronics and computer engineering at the National University of Singapore (NUS). I then wanted to go into a field of research that would help me apply my skills to creating a meaningful solution. As such, for my PhD I started exploring ways of providing the most satisfying musical experience to profoundly deaf children.

That gave me the inspiration to design something that provides a full-body haptic sense. We researched various structures and materials, and did lots of user studies. The final design, which we call the Haptic Chair, was a wooden chair that has contact speakers embedded in it. Once you play music through this chair, the whole chair vibrates, and a person sitting on it gets a full-body vibration in tune with the music being played.

I was lucky to form a collaboration with one of the deaf schools in Sri Lanka, Savan Sahana Sewa, a college in Rawatawatte, Moratuwa. They gave me the opportunity to install the Haptic Chair on site, where there were about 90 hearing-impaired kids. I conducted user studies with these hearing-impaired kids over a year and a half, trying to figure out whether this was really providing a satisfying musical experience. The Haptic Chair has been in use for more than 8 years now and has provided a platform for deaf students and their hearing teachers to connect and communicate via vibrations generated from sound.

After my PhD, I met Professor Pattie Maes, who directs the Fluid Interfaces Group at the MIT Media Lab. After talking to her about my research and future plans, she offered me a postdoctoral position in her group. The 1.5 years at the MIT Media Lab were a game changer in my research career, where I was able to form my research philosophy: the emphasis is on “enabling” rather than “fixing”. The technologies that I developed there, for example the FingerReader, demonstrate this idea and have a potentially much broader range of applications.

At this time, the Singapore government was setting up a new public university, the Singapore University of Technology and Design (SUTD), in collaboration with MIT. I then moved to SUTD, where I work as an Assistant Professor and direct the Augmented Human Lab (www.ahlab.org).

Your general agenda is towards humanizing technology. Can you tell us a bit about this mission and how it impacts your research?

When I started my bachelor’s degree at the National University of Singapore in 2001, I spoke no English and had not used a computer. My own “disability” in interacting with computers made me realize that there is a lot of opportunity to create an impact with assistive human-computer interfaces.

This inspired me to establish ‘Augmented Human Lab’ with a broader vision of creating interfaces to enable people, connecting different user communities through technology and empowering them to go beyond what they think they could do. Our work has use cases for everyone regardless of where you stand in the continuum of sensorial ability and disability.   

In a short period of 6 years, our work has resulted in over 11 million SGD in research funding, more than 60 publications, 12 patents, more than 20 live demonstrations and, most importantly, real-world deployments of my work that created a social impact.

How does multidisciplinary work play a role in your research?

My research focuses on design and development of new sensory-substitution systems, user interfaces and interactions to enhance sensorial and cognitive capabilities of humans.  This really is multidisciplinary in nature, including development of new hardware technologies, software algorithms, understanding the users and practical behavioral issues, understanding real-life contexts in which technologies function.

Can you tell us about your work on interactive installations, e.g. for Singapore’s 50th birthday? What are lessons learnt from working across disciplines?

I’ve always enjoyed working with people from different domains. Together with an interdisciplinary team, we designed an interactive light installation, iSwarm (http://ahlab.org/project/iswarm), for iLight Marina Bay, a light festival in Singapore. iSwarm consisted of 1600 addressable LEDs submerged in a bay area near the Singapore City center. iSwarm reacted to the presence of visitors with a modulation of its pattern and color. This made a significant impact as more than 685,000 visitors came to see it (http://www.ura.gov.sg/uol/media-room/news/2014/apr/pr14-27.aspx). Subsequently, the curators of the Wellington LUX festival invited us to feature a version of iSwarm (nZwarm) for their 2014 festival. Also, we were invited to create an interactive installation, “SonicSG” (http://ahlab.org/project/sonicsg), for Singapore’s 50th anniversary. SonicSG aimed at fostering a holistic understanding of the ways in which technology is changing our thinking about design in high-density contexts such as Singapore and how its creative use can reflect a sense of place. The project consisted of a large-scale interactive light installation comprising 1,800 floating LED lights in the Singapore River in the shape of the island nation.

Could you name a grand research challenge in your current field of work?

The idea of ‘universal design’, which, sometimes, is about creating mainstream technology and adding a little ‘patch’ so that it can be labelled universal. Take the voiceover feature for example – it is better than nothing, but not really the ideal solution. This is why, despite efforts and the great variety of wearable assistive devices available, user acceptance is still quite low. For example, the blind community is still largely dependent on the low-tech white cane.

The grand challenge really is to develop assistive interfaces that feel like a natural extension of the body (i.e., are seamless to use), are socially acceptable, work reliably in the complex, messy world of real situations, and support independent and portable interaction.

When would you consider yourself successful in reaching your overall mission of humanizing technology?

We want to be able to create the assistive devices that set the de facto standard for people we work with – especially the blind community and deaf community.  We would like to be known as a team who “Provide a ray of light to the blind and a rhythm to the lives of the deaf”.

How and in what form do you feel we as academics can be most impactful?

For me it is very important to be able to understand where our academic work can be not just exciting or novel, but have a meaningful impact on the way people live.  The connection we have with the communities in which we live and with whom we work is a quality that will ensure our research will always have real relevance.


Bios

 

Suranga Nanayakkara:

Before joining SUTD, Suranga was a Postdoctoral Associate at the Fluid Interfaces group, MIT Media Lab. He received his PhD in 2010 and BEng in 2005 from the National University of Singapore. In 2011, he founded the “Augmented Human Lab” (www.ahlab.org) to explore ways of creating ‘enabling’ human-computer interfaces to enhance the sensory and cognitive abilities of humans. With publications in prestigious conferences, demonstrations, patents, media coverage and real-world deployments, Suranga has demonstrated the potential of advancing the state of the art in assistive human-computer interfaces. For the totality and breadth of achievements, Suranga has been recognized with many awards, including young inventor under 35 (TR35 award) in the Asia Pacific region by MIT TechReview, Ten Outstanding Young Professionals (TOYP) by JCI Sri Lanka and INK Fellow 2016.

Editor Biographies

Dr. Cynthia C. S. Liem is an Assistant Professor in the Multimedia Computing Group of Delft University of Technology, The Netherlands, and pianist of the Magma Duo. She initiated and co-coordinated the European research project PHENICX (2013-2016), focusing on technological enrichment of symphonic concert recordings with partners such as the Royal Concertgebouw Orchestra. Her research interests consider music and multimedia search and recommendation, and increasingly shift towards making people discover new interests and content which would not trivially be retrieved. Beyond her academic activities, Cynthia gained industrial experience at Bell Labs Netherlands, Philips Research and Google. She was a recipient of the Lucent Global Science and Google Anita Borg Europe Memorial scholarships, the Google European Doctoral Fellowship 2010 in Multimedia, and a finalist of the New Scientist Science Talent Award 2016 for young scientists committed to public outreach.

 

 

Dr. Jochen Huber is a Senior User Experience Researcher at Synaptics. Previously, he was an SUTD-MIT postdoctoral fellow in the Fluid Interfaces Group at MIT Media Lab and the Augmented Human Lab at Singapore University of Technology and Design. He holds a Ph.D. in Computer Science and degrees in both Mathematics (Dipl.-Math.) and Computer Science (Dipl.-Inform.), all from Technische Universität Darmstadt, Germany. Jochen’s work is situated at the intersection of Human-Computer Interaction and Human Augmentation. He designs, implements and studies novel input technology in the areas of mobile, tangible & non-visual interaction, automotive UX and assistive augmentation. He has co-authored over 60 academic publications and regularly serves as program committee member in premier HCI and multimedia conferences. He was program co-chair of ACM TVX 2016 and Augmented Human 2015 and chaired tracks of ACM Multimedia, ACM Creativity and Cognition and the ACM International Conference on Interactive Surfaces and Spaces, as well as numerous workshops at ACM CHI and IUI. Further information can be found on his personal homepage: http://jochenhuber.com

MPEG Column: 119th MPEG Meeting in Turin, Italy

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The MPEG press release comprises the following topics:

  • Evidence of New Developments in Video Compression Coding
  • Call for Evidence on Transcoding for Network Distributed Video Coding
  • 2nd Edition of Storage of Sample Variants reaches Committee Draft
  • New Technical Report on Signalling, Backward Compatibility and Display Adaptation for HDR/WCG Video Coding
  • Draft Requirements for Hybrid Natural/Synthetic Scene Data Container

Evidence of New Developments in Video Compression Coding

At the 119th MPEG meeting, responses to the previously issued call for evidence have been evaluated and they have all successfully demonstrated evidence. The call requested responses for use cases of video coding technology in three categories:

  • standard dynamic range (SDR) — two responses;
  • high dynamic range (HDR) — two responses; and
  • 360° omnidirectional video — four responses.

The evaluation of the responses included subjective testing and an assessment of the performance of the “Joint Exploration Model” (JEM). The results indicate significant gains over HEVC for a considerable number of test cases, with comparable subjective quality at 40-50% less bit rate compared to HEVC for the SDR and HDR test cases, with some positive outliers (i.e., higher bit rate savings). Thus, the MPEG-VCEG Joint Video Exploration Team (JVET) concluded that evidence exists of compression technology that may significantly outperform HEVC after further development to establish a new standard. As a next step, the plan is to issue a call for proposals at the 120th MPEG meeting (October 2017), with responses expected to be evaluated at the 122nd MPEG meeting (April 2018).

We already witness an increasing number of research articles addressing video coding technologies with capabilities beyond HEVC, and this number will grow further in the future. The main driving force is over-the-top (OTT) delivery, which calls for more efficient bandwidth utilization. However, competition is also increasing with the emergence of AV1 from AOMedia, and we may also observe an increasing number of articles in that direction, including evaluations thereof. Another interesting aspect is that the number of use cases is increasing as well (e.g., see the different categories above), which adds further challenges to the “complex video problem”.

Call for Evidence on Transcoding for Network Distributed Video Coding

The call for evidence on transcoding for network distributed video coding targets interested parties possessing technology providing transcoding of video at lower computational complexity than transcoding done using a full re-encode. The primary application is adaptive bitrate streaming where a highest bitrate stream is transcoded into lower bitrate streams. It is expected that responses may use “side streams” (or side information, some may call it metadata) accompanying the highest bitrate stream to assist in the transcoding process. MPEG expects submissions for the 120th MPEG meeting where compression efficiency and computational complexity will be assessed.

Transcoding has been discussed already for a long time and I can certainly recommend this article from 2005 published in the Proceedings of the IEEE. The question is, what is different now, 12 years later, and what metadata (or side streams/information) is required for interoperability among different vendors (if any)?

A Brief Overview of Remaining Topics…

  • The 2nd edition of storage of sample variants reaches Committee Draft and expands its usage to MPEG-2 transport stream whereas the first edition primarily focused on ISO base media file format.
  • The new technical report for high dynamic range (HDR) and wide colour gamut (WCG) video coding comprises a survey of various signaling mechanisms including backward compatibility and display adaptation.
  • MPEG issues draft requirements for a scene representation media container enabling the interchange of content for authoring and rendering rich immersive experiences which is currently referred to as hybrid natural/synthetic scene (HNSS) data container.

Other MPEG (Systems) Activities at the 119th Meeting

DASH is in full maintenance mode as only minor enhancements/corrections have been discussed, including contributions to conformance and reference software. The omnidirectional media format (OMAF) is certainly the hottest topic within MPEG Systems, which is actually between two stages (i.e., between DIS and FDIS) and, thus, a study of the DIS has been approved and national bodies are kindly requested to take this into account when casting their votes (incl. comments). The study of the DIS comprises format definitions with respect to coding and storage of omnidirectional media including audio and video (aka 360°). The common media application format (CMAF) was ratified at the last meeting and awaits publication by ISO. In the meantime, CMAF work is focusing on conformance and reference software as well as amendments regarding various media profiles. Finally, requirements for a multi-image application format (MiAF) have been available since the last meeting, and at the 119th MPEG meeting a working draft was approved. MiAF will be based on HEIF and the goal is to define additional constraints to simplify its file format options.

We have successfully demonstrated live 360° adaptive streaming as described here, but we expect various improvements from standards available and under development by MPEG. Interesting research aspects in this area include performance gains and evaluations with respect to bandwidth efficiency in open networks, as well as how these standardization efforts could be used to enable new use cases.

Publicly available documents from the 119th MPEG meeting can be found here (scroll down to the end of the page). The next MPEG meeting will be held in Macau, China, October 23-27, 2017. Feel free to contact me for any questions or comments.

Report from ICMR 2017

ACM International Conference on Multimedia Retrieval (ICMR) 2017

ACM ICMR 2017 in “Little Paris”

ACM ICMR is the premier International Conference on Multimedia Retrieval and, since 2011, it “illuminates the state of the arts in multimedia retrieval”. This year, ICMR was in a wonderful location: Bucharest, Romania, also known as “Little Paris”. Every year at ICMR I learn something new. And here is what I learnt this year.

ICMR2017

Final Conference Shot at UP Bucharest

UNDERSTANDING THE TANGIBLE: objects, scenes, semantic categories – everything we can see.

1) Objects (and YODA) can be easily tracked in videos.

Arnold Smeulders delivered a brilliant keynote on “things” retrieval: given an object in an image, can we find (and retrieve) it in other images, videos, and beyond? He presented a very interesting technique for tracking objects (e.g. Yoda) in videos, based on similarity learnt through Siamese networks; a minimal sketch of the idea follows the figure below.

Tracking Yoda with Siamese Networks
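To give a flavour of how such similarity-based tracking can work, here is a minimal, hypothetical PyTorch sketch (not the keynote’s actual model): an embedding network with shared weights scores candidate crops in the next frame against an exemplar crop of the target, and the tracker moves to the most similar candidate. At training time such embeddings would typically be learnt with a contrastive or ranking loss on pairs of crops of the same and of different objects.

```python
# Minimal sketch of similarity-based tracking with a Siamese network (PyTorch).
# Architecture, crop sizes, and random inputs are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedder(nn.Module):
    """Shared backbone that maps an image crop to an L2-normalised embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        return F.normalize(self.fc(self.features(x).flatten(1)), dim=1)

net = Embedder()
exemplar = torch.randn(1, 3, 127, 127)        # crop of the target (e.g. Yoda) in frame t
candidates = torch.randn(16, 3, 127, 127)     # candidate crops in frame t+1

with torch.no_grad():
    sim = net(candidates) @ net(exemplar).T   # cosine similarity (embeddings are normalised)
best = sim.squeeze(1).argmax().item()         # index of the most similar candidate crop
print("track moves to candidate", best)
```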

2) Wearables + computer vision help explore cultural heritage sites.

As shown in his keynote, at MICC, University of Florence, Alberto del Bimbo and his amazing team have designed smart audio guides for indoor and outdoor spaces. The system detects, recognises, and describes landmarks and artworks from wearable camera inputs (and GPS coordinates, in the case of outdoor spaces).

3) We can finally quantify how much images provide complementary semantics compared to text [BEST MULTIMODAL PAPER AWARD].

For ages, the community has asked how relevant different modalities are for multimedia analysis: this paper (http://dl.acm.org/citation.cfm?id=3078991) finally proposes a solution to quantify information gaps between different modalities.

4) Exploring news corpora is now very easy: news graphs are easy to navigate and aware of the type of relations between articles.

Remi Bois and his colleagues presented this framework (http://dl.acm.org/citation.cfm?id=3079023), made for professional journalists and the general public, for seamlessly browsing through a large-scale news corpus. They built a graph where nodes are articles in the corpus. The most relevant items for each article are chosen (and linked) based on an adaptive nearest neighbor technique, and each link is then characterised according to the type of relation between the two linked nodes; a rough sketch of the linking step is shown below.
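For intuition, a plain (non-adaptive) nearest-neighbour version of the linking step could look like the following sketch. The toy corpus, parameters, and choice of TF-IDF features are my own assumptions for illustration; the paper’s adaptive neighbour selection and relation typing are not reproduced here.

```python
# Rough sketch: link each article to its nearest neighbours in TF-IDF space,
# a plain k-NN stand-in for the paper's adaptive linking technique.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

articles = [
    "election results announced in the capital",      # toy, hypothetical corpus
    "opposition disputes election results",
    "new stadium opens ahead of football season",
    "football season kicks off with record crowds",
]

X = TfidfVectorizer(stop_words="english").fit_transform(articles)
nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)   # 1 neighbour + the article itself
_, idx = nn.kneighbors(X)

# node -> linked articles; the paper additionally labels the relation type of each link
graph = {i: [int(j) for j in idx[i] if j != i] for i in range(len(articles))}
print(graph)
```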

5) Panorama outdoor images are much easier to localise.

In his beautiful work (https://t.co/3PHCZIrA4N), Ahmet Iscen from Inria developed an algorithm for location prediction from StreetView images, outperforming the state of the art thanks to an intelligent stitching pre-processing step: predicting locations from panoramas (stitched individual views) instead of individual street images improves performance dramatically!

UNDERSTANDING THE INTANGIBLE: artistic aspects, beauty, intent – everything we can perceive.

1) Image search intent can be predicted by the way we look.

In his best paper candidate research work (http://dl.acm.org/citation.cfm?id=3078995), Mohammad Soleymani showed that image search intent (seeking information, finding content, or re-finding content) can be predicted from physiological responses (eye gaze) and implicit user interaction (mouse movements).
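To give a concrete, if toy, picture of the setup: such an intent predictor boils down to supervised classification over behavioural features. Everything below, including the feature list and the random data, is made up for illustration and is not the paper’s model.

```python
# Toy illustration: predicting one of three search intents from a few gaze/mouse
# features with a standard classifier. Features and data are entirely synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# hypothetical features: [mean fixation duration, fixation count, mouse distance, click count]
X = rng.random((90, 4))
y = rng.integers(0, 3, size=90)   # 0 = seek information, 1 = find content, 2 = re-find content

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:60], y[:60])
print("held-out accuracy on random data:", clf.score(X[60:], y[60:]))  # ~chance, by construction
```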

2) Real-time detection of fake tweets is now possible using user and textual cues.

Another best paper candidate (http://dl.acm.org/citation.cfm?id=3078979), this time from CERTH. The team collected a large dataset of fake/real sample tweets spanning 17 events and built an effective model for misleading content detection based on tweet content and user characteristics. A live demo is available here: http://reveal-mklab.iti.gr/reveal/fake/

3) Music tracks have different functions in our daily lives.

Researchers from TU Delft have developed an algorithm (http://dl.acm.org/citation.cfm?id=3078997) which classifies music tracks according to their purpose in our daily activities: relaxing, studying, and working out.

4) By transferring image style we can make images more memorable!

The team at the University of Trento built an automatic framework (https://arxiv.org/abs/1704.01745) to improve image memorability. A selector finds the style seeds (e.g. abstract paintings) that are likely to increase the memorability of a given image; after style transfer, the image becomes more memorable!

5) Neural networks can help retrieve and discover child book illustrations.

In this amazing work (https://arxiv.org/pdf/1704.03057.pdf), motivated by real children's experiences, Pinar and her team from Hacettepe University collected a large dataset of children's book illustrations and found that neural networks can predict and transfer style, making it possible to give many other illustrations a “Winnie the Witch” look.

Winnie the Witch

6) Locals perceive their neighborhood as less interesting, more dangerous, and dirtier than non-locals do.

In this wonderful work (http://www.idiap.ch/~gatica/publications/SantaniRuizGatica-icmr17.pdf), presented by Darshan Santani from IDIAP, researchers asked locals and crowd-workers to look at pictures from various neighborhoods in Guanajuato and rate them according to interestingness, cleanliness, and safety.

THE FUTURE: What’s Next?

1) We will be able to anonymize images of outdoor spaces thanks to Instagram filters, as proposed by this work (http://dl.acm.org/citation.cfm?id=3080543) in the Brave New Idea session.  When an image of an outdoor space is manipulated with appropriate Instagram filters, the location of the image can be masked from vision-based geolocation classifiers.

2) Soon we will be able to embed watermarks in our Deep Neural Network models in order to protect our intellectual property [BEST PAPER AWARD]. This is a disruptive, novel idea, and that is why this work from KDDI Research and Japan National Institute of Informatics won the best paper award. Congratulations!

3) Given an image view of an object, we will predict the other side of things (from Smeulders’ keynote). In the pic: predicting the other side of chairs. Beautiful.

Predicting the other side of things

THANKS: To the organisers, to the volunteers, and to all the authors for their beautiful work 🙂

EDITORIAL NOTE: A more extensive report from ICMR 2017 by Miriam is available on Medium

An interview with Prof. Ramesh Jain

Prof. Ramesh Jain in 2016.

Please describe your journey into computing, from your youth up to the present. What foundational lessons did you learn from this journey? Why were you initially attracted to multimedia?

I am luckier than most people in that I have been able to experience really diverse situations in my life. Computing was just being introduced at Indian Universities when I was a student, so I never had a chance to learn computing in a classroom setting.  I took a few electronics courses as part of my undergraduate education, but nothing even close to computing.  I first used computers during my doctoral studies at the Indian Institute of Technology, Kharagpur, in 1970.  I was instantly fascinated and decided to use this emerging technology in the design of sophisticated control systems.  The information I picked up along the way was driven by my interests and passion.

I grew up in a traditional Indian Ashram, with no facilities for childhood education, so this was not the first time I faced a lack of formal instruction.  My father taught me basic reading, writing, and math skills and then I took a school placement exam.  I started school at the age of nine in fifth grade.

During my doctoral days, two areas fascinated me: computing and cybernetics.  I decided to do my research in digital control systems because it gave me a chance to combine computing and control.  At the time, the use of computing was very basic—digitizing control signals and understanding the effect of digitalization.  After my PhD, I became interested in artificial intelligence and entered AI through pattern recognition.  

In my current research, I am applying cybernetics to health.  Computing has finally matured enough that it can be applied in real control systems that play a critical role in our lives.  And what is more important to our well-being than our health?

The main driver of my career has been realizing that ultimately I am responsible for my own learning. Teachers are important, but ultimately I learn what I find interesting.  The most important attribute in learning is a person’s curiosity and desire to solve problems.  

Something else significantly impacted my thinking in my early research days.  I found that it is fundamental to accept ignorance about a problem and then examine concepts and techniques from multiple perspectives.  One person’s or one research paper’s perspective is just that—an opinion.  By examining multiple perspectives and relating those to your experiences, you can better understand a problem and its solutions.

Another important lesson is that problems or concepts are often independent of the academic and other organisational walls that exist. Interesting problems always require perspectives, concepts, and technologies from different academic disciplines. Over time, it then becomes necessary to create new disciplines, or, as Thomas Kuhn called them, new paradigms [Kuhn 62].

In the late 1980s, much of my research was addressing different aspects of computer vision.  I was frustrated by the slow progress in computer vision.  In fact, I coauthored a paper on this topic that became quite controversial [Jain 91].  It was clear that computer vision could be central to computing in the real world, such as in industry, medical imaging, and robotics, but it was unable to solve any real problems.  Progress was slow.  

While working on object recognition, it became increasingly obvious to me that images alone do not contain enough information to solve the vision problem.  Projection of real-world images to a photograph results in a loss of information that can only be recovered by combining information from many other sources, including knowledge in many different forms, metadata, and other signals.  I started thinking that our goal should be to understand the real world using sensors and other sources of knowledge, not just images.  I felt that we were addressing the wrong problem—understanding the physical world using only images.  The real problem is to understand the physical world.  The physical world can only be understood by capturing correlated information.  To me, this is multimedia: understand the physical world using multiple disparate sensors and other sources of information.

This is a very good definition of multimedia. In this context, what do you think is the future of multimedia research in general?

Different aspects of the physical world must be captured using different types of sensors. In the early days, multimedia concerned itself with the two most dominant human senses: vision and hearing. As the field advances, we must deal with every type of sensor that is developed to capture information in different applications. Multimedia must become the area that processes disparate data in context to convert it to information.

Taking into account that you have been working with AI for such a long time, what do you think about the current trend of deep learning and how will it develop?

Every field has its trends. Learning is definitely a very important step in AI and has attracted attention from the early days. However, it was known that reasoning and search play equally important roles in AI. Ultimately, problem solving depends on recognizing real-world objects and patterns, and here learning plays a key role. To design successful deep systems, learning needs to be combined with search and reasoning.

Prof. Ramesh Jain at an early stage of his career (1975).

Please tell us more about your vision and objectives behind your current roles. What do you hope to accomplish, and how will you bring this about?

One thing that is of great interest to every human is their health.  Ironically, technology utilization in healthcare is not as pervasive as in many other fields.  Another intriguing fact about technology and health is that almost all progress in health is due to advances in technology, but barriers to using technology are also the most overwhelming in health.  I experienced the terrifying state of healthcare first hand while going through treatment for gastro-esophageal cancer in 2004.  It became clear to me during my fight with cancer that technology could revolutionize most aspects of treatment—from diagnosis to guidance and operationalization of patient care and engagement—but it was not being used.  During that period, it became clear to me that multimodal data leading to information and knowledge is the key to success in this and many other fields.  That experience changed my thinking and research.

Ancient civilizations observed that health is not the absence of disease; disease is a perturbation of a healthy state.  This wisdom was based on empirical observations and resulted in guidelines for healthy living that includes diet, sleep, and whole-body exercise, such as yoga or tai chi.  Now is the time to develop scientific guidelines based on the latest evolving knowledge and technology to maximize periods of overall health and minimize suffering during diseases in human lives.  It seems possible to raise life expectancy to 100+ years for most people.  I want to cross the 100-year threshold myself and live an active life until my last day.  I am working toward making that happen.

Technology for healthcare is an increasingly popular topic. Data is at the center of healthcare, and new areas like precision health and wellness are becoming increasingly popular. At the University of California, Irvine (UCI), we’ve created a major effort to bring together researchers from Information and Computer Sciences, Health Sciences, Engineering, Public Health, Nursing, Biology, and other fields who are adopting a novel perspective in an effort to build technology that empowers people. From this perspective, we adopt a cybernetics approach to health. This work is being done at UCI’s Institute for Future Health, of which I am the founding director.

At the Institute for Future Health, currently we are building a community that will do academic research as well as work closely with industry, local communities, hospitals, and start-up companies. We will also collaborate with global researchers and practitioners interested in this approach.  There is significant interest from several institutions in several countries to collaborate and pursue this approach.

This is very interesting and relevant! Do you think that the multimedia community will be open to such a direction, or, since it is so important and societally relevant, would it be better to build a new research community around this idea?

As you said, this is the most important research direction I have been involved in, and the most challenging. It is also an important direction in itself; it needs to happen using all available technology and other resources.

Since I cannot wait for any community to be ready to address this, I started building a community to address Future Health. But I believe that this could be the most relevant application for multimedia technology, and the techniques from multimedia are very relevant to this area.

It is an exciting problem because the time is right to address this area.

Do you think that the multimedia community has the right skills to address medical multimedia problems and how could the community be encouraged into that direction?

The multimedia community is better equipped than any other community to deal with diverse types of data. New tools will be required for new challenges, but we already have enough tools and techniques to address many current challenges. To do this, however, the community has to become an open, forward-looking community, going beyond visual information to consider all the other modalities that are currently ignored as ‘metadata’. All data is data and contributes to information.

Can you profile your current research and its challenges, opportunities, and implications?

I am involved in a research area that is one of the most challenging and that has implications for every human.

The most exciting aspect of health is that it is truly a multimodal data-intensive operation.  As discussed by Norbert Wiener in his book Cybernetics [Wiener 48] about 75 years ago, control and communication processes in machines and animals are similar and are based on information.  Until recently, these principles formed the basis for understanding health, but they can now be used to control health as well.  This is exciting for everybody, and it motivates me to work hard and make something happen, for others but also for me.

We can discuss some fundamental components of this area from a cybernetics/information perspective:

Creating an individual health model:  Each person is unique.  Our bodies and lives are determined by two major factors: genetics and lifestyle.  Until recently, personal genome information was difficult to obtain, and personal lifestyle information was only anecdotally collected.  This century is different. Personal genomic data, in fact all omics data, is becoming easier to get and more precise and informative. And mobile phones, wearables, the Internet of Things (IoT) around us, and social media are all coming together to quantitatively determine different aspects of our lifestyles as well as many bio-markers.

This requires combining multimodal data from different sources, which is a challenge. By collecting all such lifestyle data, we can start assembling a log of information—a kind of multimodal lifelog on turbo charge—that could be used to build a model of a person using event mining tools.  By combining genomic and lifestyle data, we can form a complete model of a person that contains all detailed health-related information.

Aggregating individual health models into population disease models:  Current disease models rely on limited data from real people.  Until recently, it was not possible to gather all such data. As discussed earlier, the situation is rapidly changing.  Once data is available for individual health models, it can be sliced and diced to formulate disease models for different populations and demographics.  This will be revolutionary.

Correlating health and related knowledge to actions for each individual and for society: Cybernetics underlies most complex real-time engineering systems.  The concept of feedback, used to generate a corrective signal that takes a system from its current state to a desired state, is essential in all real-time control systems.  Even in the human body, homeostasis uses similar principles.  Can we use this to guide people in their lifestyle choices and medical compliance?

Navigation systems are a good example of how an old, tedious problem can become extremely easy to use.  Only 15 years ago, we needed maps and a lot of planning to visit new places.  Now, mobile navigation systems can anticipate upcoming actions and even help you correct your mistakes gracefully, in real time.  They can also identify traffic conditions and suggest the best routes.

If technology can do this for navigation in the physical world, can we develop technology to help us select appropriate lifestyle decisions, and do so perpetually?  The answer is obviously yes.  By compiling all health and related knowledge, determining your current personal health situation and surrounding environmental situation, and using your past chronicle to log your preferences, such a system can provide you with suggestions that will make your life not only healthier but also more enjoyable.
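As a toy illustration of this perpetual feedback idea, and nothing more than that, the loop of observing a state, comparing it to a desired state, and issuing a corrective nudge might be sketched as follows. The target, gain, and advice strings are entirely made up; a real health system would of course be far richer.

```python
# A deliberately tiny, hypothetical feedback loop in the cybernetic spirit described above:
# compare the current state to a desired state and issue a proportional corrective nudge.
DESIRED_DAILY_STEPS = 9000          # made-up target state
GAIN = 0.5                          # how aggressively the system nudges toward the target

def guidance(observed_steps: int) -> str:
    error = DESIRED_DAILY_STEPS - observed_steps      # desired state minus current state
    correction = int(GAIN * error)                    # proportional corrective "signal"
    if correction <= 0:
        return "Goal met; keep your current routine."
    return f"Try to add roughly {correction} more steps today (e.g. a short walk)."

for steps in (4200, 8800, 11000):   # three observed days
    print(steps, "->", guidance(steps))
```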

This is our dream at the Institute for Future Health.

Future Health: Perpetual enhancement of health by managing lifestyle and environment.

How would you describe your top innovative achievements in terms of the problems you were trying to solve, your solutions, and the impact they have today and into the future?

I am lucky to have been active for more than four decades and to have had the opportunity to participate in research and entrepreneurial activities in multiple countries at the best organizations. This gave me a chance to interact with the brightest young people as well as seasoned creative visionaries and researchers.  Thus, it is difficult for me to decide what to list.  I will adopt a chronological approach to answer your question.

Working in H.H. Nagel’s research group in Hamburg, Germany, I got involved in developing an approach to motion detection and analysis in 1976.  We wrote the first papers on video analysis that worked with traffic video sequences and detected and analyzed the motion of cars, pedestrians, and other objects.  Our paper at IJCAI 1977 [Jain 77] was remarkable in showing these results at a time when digitizing a picture was a chore lasting minutes and the most powerful computer could not store a full video frame in its memory.  Even today, the first step in many video analysis systems is differencing, as proposed in that work.

Many bright people contributed powerful ideas in computer vision from my groups.  E. North Coleman was possibly the first person to propose Photometric Stereo in 1981 [Coleman].  Paul Besl’s work on segmentation using surface characteristics and 3D object recognition made a significant impact [Besl]. Tom Knoll did some exciting research on feature-indexed hypotheses for object recognition.  But Tom’s major contribution to current computer technology was his development of Photoshop when he was doing his PhD in my research group.  As we all know, Photoshop revolutionized how we view photos. Working with Kurt Skifstad at my first company Imageware, we demonstrated the first version of capturing a 3D shape of a person’s face and reproducing it using a machine in the next room at the Autofact Conference in 1994. I guess that was a primitive version of 3D printing.  At the time, we called it 3D fax.

The idea of designing a content-based organization to build a large database of images was considered crazy in 1990, but it bugged me so much that I started first a project and later a company, Virage, working with several people.  In fact, Bradley Horowitz left his research at MIT to join me in building Virage, and later he managed the project that brought Google Photos to its current form.  That process of building image and video databases made me realize that photos and videos are a lot more than just intensity values.  And that realization led me to champion the idea that information about the physical world can be recovered more effectively and efficiently by combining correlated, but incomplete, information from several sources, including metadata.  This was the thinking that encouraged me to start building the multimedia community.

Since computing and camera technology had advanced enough by 1994, my research group at the University of California, San Diego (UCSD), particularly Koji Wakimoto [Jain 95] and then Arun Katkere and Saeed Moezzi [Moezzi 96], helped in developing initially Multiple Perspective Interactive Video and later Immersive Video to realize compelling telepresence.  That research area, in various forms, attracted people from the movie industry as well as people interested in different art forms and collaborative spaces.  By licensing our patents from UCSD, we started a company, Praja, to bring immersive video technology to sports.  I left academia to be the CEO of Praja.

While developing technology for indexing sporting events, it became obvious that events are as important as objects, if not more so, when indexing multimedia data.  Information about events comes from separate sources, and events combine different dimensions that play a key role in our understanding of the world.  This realization resulted in Westermann and me working on a general computational model for events.  Later we realized that by aggregating events over space and time, we could detect situations.  Vivek Singh and Mingyan Gao helped prototype the EventShop platform [Singh 2010], which was later converted to an open source platform under the leadership of Siripen Pongpaichet.

One of the most fundamental problems in society is connecting people’s needs to appropriate resources effectively, efficiently, and promptly in a given situation.  To understand people’s needs, it is essential to build objective models that could be used to recommend correct resources in given situations.  Laleh Jalali started building an event-mining framework that could be used to build an objective self model using the different types of data streams related to people that have now become easily available [Jalali 2015].  

All this work is leading to a framework that is behind my current thinking related to health intelligence. In health intelligence, our goal is to perpetually measure a person’s activities, lifestyle, environment, and bio-markers to understand his/her current state as well as continuously build his/her model. Using that model, current state, and medical knowledge, it is possible to provide perpetual guidance to help people take the right action in a given situation.

Over your distinguished career, what are the top lessons you want to share with the audience?

I have been lucky to get a chance to work on several fun projects.  More importantly, I have worked closely on an equal number of successful and not so successful projects. I consider a project successful if it accomplishes its goal and the people working on the project enjoy it.  Although each project is unique, I’ve noticed that some common themes make a project successful.

Passion for the Project:  Time and again, I’ve seen that passion for the project makes a huge difference. When people are passionate, they don’t consider it work and will literally do whatever is required to make the project successful.  In my own case, I find that the ideas I find compelling, both in terms of their goals and implications, are the ones that motivate me to do my best.  I am focused, driven, and willing to work hard.  I learned long ago to work only on problems that I find important and compelling.  Some ideas are just not for me; in those cases, it is better for the project and for me if I dissociate from it at the first opportunity.

Open Mind:  Departmental or similar boundaries in both academia and industry severely restrict how a problem is addressed.  Solving the problem should be the goal, not using the resources or technology of a specific department.  In academia, I often hear things like “this is not a multimedia problem” or “this is a database problem.”  Usually, the goal of a project is to solve a problem, so we should use the best technique or resource available to solve it.

Most of the boundaries for academic disciplines are artificial, and because they keep changing, the departments based on any specific factor will likely also change over time.  By addressing challenging problems using appropriate technology and resources, we push boundaries and either expand older boundaries or create new disciplines.

Another manifestation of an open mind is the ability to see the same problem from multiple perspectives.  This is not easy—we all have our biases.  The best thing to do is to form a group of researchers from diverse cultural and disciplinary backgrounds.  Diversity naturally results in diverse perspectives.

Persistence:  Good research is usually the result of sustained efforts to understand and solve a challenge.  Many intrinsic and extrinsic issues must be handled during a successful research journey. By definition, an important research challenge requires navigating uncharted territory.  Many people get frustrated in an unmapped area, where there is no easy way to evaluate progress.  In my experience, even some of my brightest students are comfortable only when they can say “I am better than approach X by N%.”  In most novel problems, there is no X and no metric to judge performance. Only a few people are comfortable in such situations, where incremental progress may not be computable.  We require both kinds of people: those who can improve given approaches and those who can pioneer new areas.  The second group requires people who can be confident about their research directions without having concrete external evaluation measures.  The ability to work confidently without external affirmation is essential in important deep challenges.

In the current culture, a researcher’s persistence is also tested by “publish or perish” oriented colleagues who judge the quality of research by acceptance rates at the so-called top conferences. When your papers are rejected, you are dejected and sometimes feel that you are doing the wrong research.  That is not always true.  The best thing about these conferences is that they test your self-confidence.

We have all read the stories about the research that ultimately resulted in the WWW and the paper on PageRank that later became the foundation of Google search.  Both were initially rejected. Yet, the authors were confident in their work so they persevered.  When one of my papers gets rejected (which is more often the case than with my much inferior papers), much of the time the reviewers are looking for incremental work—the trendy topics—and don’t have time, openness, and energy to think beyond what they and their friends have been doing. I read and analyze reviewers’ comments to see whether they understood my work and then decide whether to take them seriously or ignore them.  In other words, you have to be confident of your own ideas and review the reviews to decide your next steps.

I noticed that one of your favourite quotes is “Imagination is more important than knowledge.” In this regard, do you think there is enough “imagination” in today’s research, or are researchers mainly driven/constrained by grants, metrics, and trends? 

The complete quote by Albert Einstein is “Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.”  So knowledge begins with imagination. Imagination is the beginning of a hypothesis. When the hypothesis is validated, that results in knowledge.

People often seek short-term rewards.  It is easier to follow trends and established paradigms than to go against them or create new paradigms.  This is nothing new; it has always happened. At one time scientists, like Galileo Galilei, were persecuted for opposing the established beliefs. Today, I only have to worry about my papers and grant proposals getting rejected.  The most engaged researchers are driven by their passion and the long-term rewards that may (or may not) come with it.

Albert Einstein (Source: Planet Science)

References:

  1. Kuhn, T. S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962. ISBN 0-226-45808-3.
  2. R. Jain and T. O. Binford, “Ignorance, Myopia, and Naiveté in Computer Vision Systems,” CVGIP: Image Understanding, 53(1), 112-117, 1991.
  3. Norbert Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine. Paris (Hermann & Cie) & Cambridge, Mass. (MIT Press), 1948; 2nd revised ed. 1961. ISBN 978-0-262-73009-9.
  4. R. Jain, D. Militzer and H. Nagel, “Separating a Stationary Form from Nonstationary Scene Components in a Sequence of Real World TV Frames,” Proceedings of IJCAI 77, Cambridge, Massachusetts, 612-618, 1977.
  5. E. N. Coleman and R. Jain, “Shape from Shading for Surfaces with Texture and Specularity,” Proceedings of IJCAI, 1981.
  6. P. Besl and R. Jain, “Invariant Surface Characteristics for 3-D Object Recognition in Depth Maps,” Computer Vision, Graphics and Image Processing, 33, 33-80, 1986.
  7. R. Jain and K. Wakimoto, “Multiple Perspective Interactive Video,” Proceedings of the IEEE Conference on Multimedia Systems, May 1995.
  8. S. Moezzi, A. Katkere, D. Kuramura, and R. Jain, “Reality Modeling and Visualization from Multiple Video Sequences,” IEEE Computer Graphics and Applications, 58-63, November 1996.
  9. V. Singh, M. Gao, and R. Jain, “Social Pixels: Genesis and Evaluation,” Proceedings of ACM Multimedia, 2010.
  10. L. Jalali and R. Jain, “Bringing Deep Causality to Multimedia Data Streams,” ACM Multimedia 2015, 221-230.

Bios

 

About Prof. Ramesh Jain: 

Ramesh Jain is an entrepreneur, researcher, and educator. He is a Donald Bren Professor in Information & Computer Sciences at the University of California, Irvine.  Earlier he was at Georgia Tech, the University of California, San Diego, the University of Michigan, and other universities in several countries.  He was educated at Nagpur University (B.E.) and the Indian Institute of Technology, Kharagpur (Ph.D.) in India.  His current research is in Social Life Networks, including EventShop and Objective Self, and Health Intelligence.  He has been an active member of the professional community, serving in various positions and contributing more than 400 research papers and coauthoring several books, including textbooks in Machine Vision and Multimedia Computing.  He is a Fellow of AAAI, AAAS, ACM, IEEE, IAPR, and SPIE.

Ramesh co-founded several companies, managed them in their initial stages, and then turned them over to professional management.  He has also advised major companies in multimedia and search technology.  He still enjoys the thrill of the start-up environment.

His research and entrepreneurial interests have been in computer vision, AI, multimedia, and social computing. He is the founding director of the Institute for Future Health at UCI.

Michael Alexander Riegler: 

Michael is a scientific researcher at Simula Research Laboratory. He received his Master’s degree from Klagenfurt University with distinction and finished his PhD at the University of Oslo in two and a half years. His PhD thesis topic was efficient processing of medical multimedia workloads.

His research interests are medical multimedia data analysis and understanding, image processing, image retrieval, parallel processing, gamification and serious games, crowdsourcing, social computing, and user intentions. Furthermore, he is involved in several initiatives such as the MediaEval Benchmarking initiative for Multimedia Evaluation, which this year runs the Medico task (automatic analysis of colonoscopy videos, http://www.multimediaeval.org/mediaeval2017/medico/).

Alan Smeaton:

Since 1997, Alan Smeaton has been a Professor of Computing at Dublin City University. He joined DCU (then NIHED) in 1987, having completed his PhD at UCD under the supervision of Prof. Keith van Rijsbergen. He also completed an M.Sc. and B.Sc. at UCD.

In 1994 Alan was chair of the ACM SIGIR Conference which he hosted in Dublin, program co-chair of  SIGIR in Toronto in 2003 and general chair of the Conference on Image and Video Retrieval (CIVR) which he hosted in Dublin in 2004.  In 2005 he was program co-chair of the International Conference on Multimedia and Expo in Amsterdam, in 2009 he was program co-chair of ACM MultiMedia Modeling conference in Sophia Antipolis, France and in 2010 co-chair of the program for CLEF-2010 in Padova, Italy.

Alan has published over 600 book chapters, journal and refereed conference papers as well as dozens of other presentations, seminars and posters and he has a Google Scholar h-index of 58. He was an Associate Editor of the ACM Transactions on Information Systems for 8 years, and has been a member of the editorial board of four other journals. He is presently a member of the Editorial Board of Information Processing and Management.

Alan has graduated 50 research students since 1991, the vast majority at PhD level. He has acted as examiner for PhD theses in other Universities on more than 30 occasions, and has assisted the European Commission since 1990 in dozens of advisory and consultative roles, both as an evaluator or reviewer of project proposals and as a reviewer of ongoing projects. He has also carried out project proposal reviews for more than 20 different research councils and funding agencies in the last 10 years.

More recently, Alan is a Founding Director of the Insight Centre for Data Analytics at Dublin City University (2013-2019), which represents the largest single non-capital research award given by a research funding agency in Ireland. He is Chair of ACM SIGMM (Special Interest Group on Multimedia) (2017-) and a member of the Scientific Committee of COST (European Cooperation in Science and Technology), an EU funding program with a budget of €300m in Horizon 2020.

In 2001 he was joint (and founding) coordinator of TRECVid, the largest worldwide benchmarking evaluation on content-based analysis of multimedia (digital video), which has run annually since then. Way back in 1991 he was a member of the founding steering group of TREC, the annual Text REtrieval Conference carried out at the US National Institute of Standards and Technology, 1991-1996.

Alan was awarded the Royal Irish Academy Gold Medal for Engineering Sciences in 2015. Awarded once every 3 years, the RIA Gold Medals were established in 2005 “to acclaim Ireland’s foremost thinkers in the humanities, social sciences, physical & mathematical sciences, life sciences, engineering sciences and the environment & geosciences”.

He was jointly awarded the Niwa-Takayanagi Prize by the Institute of Image Information and Television Engineers, Japan, for outstanding achievements in the field of video information media and in promoting basic research in this field.  He is a member of the Irish Research Council (2012-2015, 2015-2018), an appointment by the Irish Government, and winner of the Tony Kent Strix award (2011) from the UK e-Information Society for “sustained contributions to the field of … indexing and retrieval of image, audio and video data”.

Alan is a member of the ACM, a Fellow of the IEEE and is a Fellow of the Irish Computer Society.
