Editorial

Dear Member of the SIGMM Community, welcome to the first issue of the SIGMM Records in 2013.

This issue is full of the opportunities that SIGMM offers you in 2013. Inside you will find the calls for nominations for SIGMM’s three main awards: the SIGMM Technical Achievement Award, for lasting contributions to our field; the SIGMM Award for Outstanding PhD Thesis, for the best thesis in our field defended in 2012; and the Nicolas D. Georganas Best Paper Award, for the best paper published in an issue of TOMCCAP in 2012.

A major change in SIG life is upcoming: SIGMM is electing new officers, and we want to remind you to cast your vote. Our current chair, Klara Nahrstedt, gave the Records an interview on the issue of ACM Fellowships.

This issue also includes three PhD thesis summaries. In our regular columns, you can read news from the 103rd MPEG meeting, learn about a practice book on visual information retrieval in the education column, and find a toolset for DASH presented in the open source column.

Of course, we also include a variety of calls for contribution. Please pay particular attention to two of them: TOMCCAP is calling for special issue proposals, a major opportunity because TOMCCAP publishes only one special issue per year, and the ACM Multimedia Grand Challenges of 2013 are described in some detail. Many of the other calls refer to tracks and workshops of ACM Multimedia 2013, but calls for several other events and open positions are included as well.

Last but most certainly not least, you find pointers to the latest issues of TOMCCAP and MMSJ, and several job announcements.

We hope that you enjoy this issue of the Records.

The Editors
Stephan Kopf, Viktor Wendel, Lei Zhang, Pradeep Atrey, Christian Timmerer, Pablo Cesar, Mathias Lux, Carsten Griwodz

Open Source Column: Dynamic Adaptive Streaming over HTTP Toolset

Introduction

Multimedia content is nowadays omnipresent thanks to technological advancements in the last decades. Major drivers of today’s networks are content providers like Netflix and YouTube, which do not deploy their own streaming architecture but provide their services over-the-top (OTT). Interestingly, this streaming approach performs well and adopts the Hypertext Transfer Protocol (HTTP), which was initially designed for best-effort file transfer and not for real-time multimedia streaming. The assumption of earlier video streaming research that streaming on top of HTTP/TCP will not work smoothly due to its retransmission delay and throughput variations has apparently been overcome, as supported by [1]. Streaming on top of HTTP, which is currently mainly deployed in the form of progressive download, has several other advantages. The infrastructure deployed for traditional HTTP-based services (e.g., Web sites) can also be exploited for real-time multimedia streaming, and typical problems of real-time multimedia streaming such as NAT or firewall traversal do not apply to HTTP streaming. Nevertheless, there are certain disadvantages, such as fluctuating bandwidth conditions, that cannot be handled with the progressive download approach; this is a major drawback especially for mobile networks, where the bandwidth variations are tremendous.

One of the first solutions to overcome the problem of varying bandwidth conditions has been specified within 3GPP as Adaptive HTTP Streaming (AHS) [2]. The basic idea is to encode the media file/stream into different versions (e.g., bitrate, resolution) and chop each version into segments of the same length (e.g., two seconds). The segments are provided on an ordinary Web server and can be downloaded through HTTP GET requests. The adaptation to the bitrate or resolution is done on the client side for each segment; e.g., the client can switch to a higher bitrate – if bandwidth permits – on a per-segment basis.
This has several advantages because the client knows best its own capabilities, received throughput, and the context of the user. In order to describe the temporal and structural relationships between segments, AHS introduced the so-called Media Presentation Description (MPD). The MPD is an XML document that associates uniform resource locators (URLs) with the different qualities of the media content and the individual segments of each quality. This structure provides the binding of the segments to the bitrate (resolution, etc.), among other information (e.g., start time and duration of segments). As a consequence, each client first requests the MPD, which contains the temporal and structural information for the media content, and based on that information requests the individual segments that best fit its requirements. Additionally, the industry has deployed several proprietary solutions, e.g., Microsoft Smooth Streaming [3], Apple HTTP Live Streaming [4], and Adobe Dynamic HTTP Streaming [5], which more or less adopt the same approach.
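To make this concrete, a strongly simplified MPD for a two-quality stream might look as follows. This sketch is illustrative only: the element names follow the MPEG-DASH schema, but the attribute values, durations, and segment URLs are hypothetical, and a real MPD typically also carries initialization segments and further metadata:

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT0H10M0S">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <!-- low-bitrate version, chopped into 2-second segments -->
      <Representation id="low" bandwidth="500000" width="640" height="360">
        <SegmentList duration="2">
          <SegmentURL media="video_low_seg1.m4s"/>
          <SegmentURL media="video_low_seg2.m4s"/>
        </SegmentList>
      </Representation>
      <!-- same content at a higher bitrate; the client may switch per segment -->
      <Representation id="high" bandwidth="2000000" width="1280" height="720">
        <SegmentList duration="2">
          <SegmentURL media="video_high_seg1.m4s"/>
          <SegmentURL media="video_high_seg2.m4s"/>
        </SegmentList>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
```

The client requests such a document first and then issues plain HTTP GET requests for the listed segment URLs, choosing a Representation anew for each segment.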

Figure 1: Concept of Dynamic Adaptive Streaming over HTTP.

Recently, ISO/IEC MPEG has ratified Dynamic Adaptive Streaming over HTTP (DASH) [6], an international standard that should enable interoperability among proprietary solutions. The concept of DASH is depicted in Figure 1. The Institute of Information Technology (ITEC) and, in particular, the Multimedia Communication Research Group of the Alpen-Adria-Universität Klagenfurt has participated in and contributed to this standard from the beginning. During the standardization process, many research tools have been developed for evaluation purposes and scientific contributions, including several publications. These tools are provided as open source for the community and are available at [7].

Open Source Tools Suite

Our open source tool suite consists of several components. On the client side we provide libdash [8] and the DASH plugin for the VLC media player (also available on Android). Additionally, our suite includes a JavaScript-based client that utilizes the HTML5 Media Source Extensions of the Google Chrome browser to enable DASH playback. Furthermore, we provide several server-side tools, such as our DASH dataset, consisting of different movie sequences available in different segment lengths as well as bitrates and resolutions. Additionally, we provide a distributed dataset mirrored at different locations across Europe. Our datasets have been encoded using our DASHEncoder, which is a wrapper tool for x264 and MP4Box. Finally, a DASH online MPD validation service and a DASH implementation over CCN complete our open source tool suite.

libdash

Figure 2: Client-Server DASH Architecture with libdash.

The general architecture of DASH is depicted in Figure 2, where orange represents the standardized parts. libdash comprises the MPD parsing and HTTP part. The library provides interfaces for the DASH Streaming Control and the Media Player to access MPDs and downloadable media segments. The download order of such media segments is not handled by the library; this is left to the DASH Streaming Control, which is a separate component in this architecture but could also be included in the Media Player. In a typical deployment, a DASH server provides segments in several bitrates and resolutions. The client initially receives the MPD through libdash, which provides a convenient object-oriented interface to that MPD. Based on that information, the client can download individual media segments through libdash at any point in time. Varying bandwidth conditions can be handled by switching to the corresponding quality level at segment boundaries in order to provide a smooth streaming experience. This adaptation is not part of libdash or the DASH standard and is left to the application using libdash.
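The adaptation loop that an application built on libdash might implement can be sketched in a few lines of Python. This is not libdash code and is not mandated by the standard; the safety margin, throughput estimator, and download function are hypothetical placeholders:

```python
def select_representation(bitrates, throughput_bps, safety=0.8):
    """Pick the highest bitrate that fits within a safety margin of the
    measured throughput; fall back to the lowest available bitrate."""
    candidates = [b for b in sorted(bitrates) if b <= throughput_bps * safety]
    return candidates[-1] if candidates else min(bitrates)

def streaming_loop(bitrates, segment_indices, download, estimate_throughput):
    """Re-decide the quality level at every segment boundary (hypothetical
    helpers: download(index, bitrate) fetches one segment via HTTP GET,
    estimate_throughput() returns the current estimate in bit/s)."""
    for index in segment_indices:
        bitrate = select_representation(bitrates, estimate_throughput())
        yield download(index, bitrate)
```

More elaborate clients smooth the throughput estimate over several segments to avoid oscillating between quality levels.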

DASH-JS

Figure 3: Screenshot of DASH-JS.

DASH-JS seamlessly integrates DASH into the Web using the HTML5 video element. A screenshot is shown in Figure 3. It is based on JavaScript and uses the Media Source API of Google’s Chrome browser to provide a flexible and potentially browser-independent DASH player. DASH-JS currently supports WebM-based media segments as well as segments based on the ISO Base Media File Format.

DASHEncoder

DASHEncoder is a content generation tool for DASH video-on-demand content, built on top of the open source encoding tool x264 and GPAC’s MP4Box. Using DASHEncoder, the user does not need to encode and multiplex each quality level of the final DASH content separately. Figure 4 depicts the workflow of DASHEncoder. It generates the desired representations (quality/bitrate levels), fragmented MP4 files, and the MPD file based on a given configuration file or on command line parameters.

Figure 4: High-level structure of DASHEncoder.

The set of configuration parameters comprises a wide range of possibilities. For example, DASHEncoder supports different segment sizes, bitrates, resolutions, encoding settings, URLs, etc. The modular implementation of DASHEncoder enables the batch processing of multiple encodings, which are finally reassembled within a predefined directory structure represented by a single MPD. DASHEncoder is available as open source on our Web site as well as on Github, with the aim that other developers will join this project. The content generated with DASHEncoder is compatible with our playback tools.
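To give an idea of what such a wrapper automates, the following Python sketch builds one encoder invocation per target bitrate plus a final segmentation step. It is not DASHEncoder itself: the command-line options shown for x264 and MP4Box are simplified assumptions and should be checked against the tools’ documentation:

```python
def build_dash_commands(source, bitrates_kbps, segment_ms=2000):
    """Return the command lines a DASHEncoder-style wrapper would run:
    one x264 call per quality level, then one MP4Box call that segments
    the results and writes the MPD (options simplified/illustrative)."""
    commands, mp4_files = [], []
    for kbps in bitrates_kbps:
        out = "rep_%dk.mp4" % kbps
        commands.append(["x264", "--bitrate", str(kbps), "-o", out, source])
        mp4_files.append(out)
    commands.append(["MP4Box", "-dash", str(segment_ms),
                     "-out", "manifest.mpd"] + mp4_files)
    return commands
```

The returned command lists could then be executed with subprocess; batch processing multiple encodings is simply a matter of looping over configurations.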

Datasets

Figure 5: DASH Dataset.

Our DASH dataset comprises multiple full-length movie sequences from different genres – animation, sport and movie (cf. Figure 5) – and is located at our Web site. The DASH dataset is encoded and multiplexed using different segment sizes inspired by commercial products, ranging from 2 seconds (e.g., Microsoft Smooth Streaming) to 10 seconds per fragment (e.g., Apple HTTP Live Streaming) and beyond. In particular, each sequence of the dataset is provided with segment lengths of 1, 2, 4, 6, 10, and 15 seconds. Additionally, we also offer a non-segmented version of the videos and the corresponding MPDs for the movies of the animation genre, which allows for byte-range requests. The provided MPDs of the dataset are compatible with the current implementations of the DASH VLC Plugin, libdash, and DASH-JS. Furthermore, we provide a distributed DASH (D-DASH) dataset which is, at the time of writing, replicated on five sites within Europe, i.e., Klagenfurt, Paris, Prague, Torino, and Crete. This allows for real-world evaluation of DASH clients that perform bitstream switching between multiple sites, e.g., as a simulation of switching between multiple Content Distribution Networks (CDNs).

DASH Online MPD Validation Service

The DASH online MPD validation service implements the conformance software of MPEG-DASH and enables Web-based validation of MPDs provided as a file, a URI, or text. As the MPD is based on an XML schema, it is also possible to use an external XML schema file for the validation.
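The idea behind such a validation can be illustrated with a deliberately minimal Python sketch. It only checks well-formedness and two structural properties; the actual service relies on the MPEG-DASH conformance software and full XML schema validation:

```python
import xml.etree.ElementTree as ET

DASH_NS = "urn:mpeg:dash:schema:mpd:2011"

def check_mpd(mpd_text):
    """Return a list of problems found by a few basic checks.
    An empty list means these checks passed (NOT full conformance)."""
    try:
        root = ET.fromstring(mpd_text)
    except ET.ParseError as err:
        return ["not well-formed XML: %s" % err]
    problems = []
    if root.tag != "{%s}MPD" % DASH_NS:
        problems.append("root element is not an MPD in the DASH namespace")
    if not root.findall("{%s}Period" % DASH_NS):
        problems.append("MPD contains no Period element")
    return problems
```

Full schema validation additionally checks attribute types, cardinalities, and the ordering constraints defined in ISO/IEC 23009-1 [6].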

DASH over CCN

Finally, Dynamic Adaptive Streaming over Content Centric Networks (DASC, a.k.a. DASH over CCN) implements DASH utilizing a CCN naming scheme to identify content segments in a CCN network. To this end, the CCN concept of Jacobson et al. and the CCNx implementation (www.ccnx.org) of PARC are used. In particular, video segments formatted according to MPEG-DASH are available in different quality levels, but instead of HTTP, CCN is used for referencing and delivery.
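As an illustration of how DASH segments can map onto hierarchical content names, consider the following Python sketch. The prefix and name layout are hypothetical examples, not the exact scheme used by DASC:

```python
def ccn_segment_name(content_id, representation_id, segment_number,
                     prefix="ccnx:/itec/dash"):
    """Build a hierarchical CCN content name for one DASH segment
    (hypothetical layout: /prefix/content/representation/segment)."""
    return "%s/%s/%s/seg%d" % (prefix, content_id, representation_id,
                               segment_number)
```

A client would then express CCN Interests for such names instead of issuing HTTP GET requests, while the segment payloads remain MPEG-DASH formatted.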

Conclusion

Our open source tool suite is available to the community with the aim of providing a common ground for research efforts in the area of adaptive media streaming and of making results comparable with each other. Everyone is invited to join this activity – get involved in and excited about DASH.

Acknowledgments

This work was supported in part by the EC in the context of the ALICANTE (FP7-ICT-248652) and SocialSensor (FP7-ICT-287975) projects and partly performed in the Lakeside Labs research cluster at AAU.

References

[1] Sandvine, “Global Internet Phenomena Report 2H 2012”, Sandvine Intelligent Broadband Networks, 2012.
[2] 3GPP TS 26.234, “Transparent end-to-end packet switched streaming service (PSS); Protocols and codecs”, 2010.
[3] A. Zambelli, “IIS Smooth Streaming Technical Overview”, Technical Report, Microsoft Corporation, March 2009.
[4] R. Pantos, W. May, “HTTP Live Streaming”, IETF draft, http://tools.ietf.org/html/draft-pantos-http-live-streaming-07 (last access: Feb 2013).
[5] Adobe HTTP Dynamic Streaming, http://www.adobe.com/products/httpdynamicstreaming/ (last access: Feb 2013).
[6] ISO/IEC 23009-1:2012, “Information technology – Dynamic adaptive streaming over HTTP (DASH) – Part 1: Media presentation description and segment formats”, 2012.
[7] ITEC DASH, http://dash.itec.aau.at
[8] libdash open git repository, https://github.com/bitmovin/libdash

Call for Multimedia Grand Challenge Solutions

Overview

The Multimedia Grand Challenge presents a set of problems and issues from industry leaders, geared to engage the Multimedia research community in solving relevant, interesting and challenging questions about the industry’s 3-5 year vision for multimedia.
The Multimedia Grand Challenge was first presented as part of ACM Multimedia 2009 and has established itself as a prestigious competition in the multimedia community. This year’s conference will continue the tradition by repeating previous challenges and by introducing brand new challenges.

Challenges

NHK Where is beauty? Grand Challenge

Scene Evaluation based on Aesthetic Quality

Automatic understanding of viewers’ impressions of images or video sequences is a very difficult task, but an interesting theme for study, and more and more researchers have investigated it recently. To achieve automatic understanding, various elemental features and techniques need to be used in a comprehensive manner, such as the balance of color or contrast, composition, audio, object recognition, and object motion. In addition, we might have to consider not only image features but also semantic features.

The task NHK sets is “Where is Beauty?”, which aims at automatically recognizing beautiful scenes in a set of video sequences. The important point of this task is how to evaluate beauty using an engineering approach, which is a challenging problem involving human feelings. We will provide participants with approximately 1,000 clips of raw broadcast video footage, containing various categories such as creatures, landscapes, and CGI. These video clips last about one minute each. Participants will have to evaluate the beauty of these videos automatically and rank them in terms of beauty.

The proposed method will be evaluated on the basis of its originality and accuracy. We expect that participants will consider a diverse range of beauty, not only the balance of color but also composition, motion, audio, and other brand new features! The reliability and the diversity of the extracted beauty will be scored by using manually annotated data. In addition, if a short video composed of the highly ranked videos is submitted, it will be included in the evaluation.

More details

Technicolor – Rich Multimedia Retrieval from Input Videos Grand Challenge

Visual search that aims at retrieving copies of an image as well as information on a specific object, person or place in this image has progressed dramatically in the past few years. Thanks to modern techniques for large scale image description, indexing and matching, such an image-based information retrieval can be conducted either in a structured image database for a given topic (e.g., photos in a collection, paintings, book covers, monuments) or in an unstructured image database which is weakly labeled (e.g., via user-input tags or surrounding texts, including captions).

This Grand Challenge aims at exploring tools to push this search paradigm forward by addressing the following question: how can we search unstructured multimedia databases based on video queries? This problem is already encountered in professional environments where large semi-structured multimedia assets, such as TV/radio archives or cultural archives, are operationally managed. In these cases, resorting to trained professionals such as archivists remains the rule, both to annotate part of the database beforehand and to conduct searches. Unfortunately, this workflow does not apply to large-scale search into wildly unstructured repositories accessible on-line.

The challenge is to retrieve and organize automatically relevant multimedia documents based on an input video. In a scenario where the input video features a news story for instance, can we retrieve other videos, articles and photos about the same news story? And, when the retrieved information is voluminous, how can these multimedia documents be linked, organized and summarized for easy reference, navigation and exploitation?

More details

Yahoo! – Large-scale Flickr-tag Image Classification Grand Challenge

Image classification is one of the fundamental problems of computer vision and multimedia research. With the proliferation of the Internet, the availability of cheap digital cameras, and the ubiquity of cell-phone cameras, the amount of accessible visual content has increased astronomically. Websites such as Flickr alone boast over 5 billion images, not counting the many similar websites and the countless other images that are not published online. This explosion poses unique challenges for the classification of images.

Classification of images with a large number of classes and images has attracted several research efforts in recent years. The availability of datasets such as ImageNet, which boasts over 14 million images and over 21 thousand classes, has motivated researchers to develop classification algorithms that can deal with large quantities of data. However, most of the effort has been dedicated to building systems that can scale up when the number of classes is large. In this challenge we are interested in learning classifiers when the number of images is large. There has been some recent work that deals with thousands of images for training; however, in this challenge we are looking at upwards of 250,000 images per class. What makes the challenge difficult is that the annotations are provided by users of Flickr (www.flickr.com) and might not always be accurate. Furthermore, each class can be considered a collection of sub-classes with varied visual properties.

More details

Huawei/3DLife – 3D human reconstruction and action recognition Grand Challenge

3D human reconstruction and action recognition from multiple active and passive sensors

This challenge calls for demonstrations of methods and technologies that support real-time or near real-time 3D reconstruction of moving humans from multiple calibrated and remotely located RGB cameras and/or consumer depth cameras. Additionally, this challenge also calls for methods for human gesture/movement recognition from multimodal data. The challenge targets mainly real-time applications, such as collaborative immersive environments and inter-personal communications over the Internet or other dedicated networking environments.

To this end, we provide two data sets to support investigation of various techniques in the fields of 3D signal processing, computer graphics and pattern recognition, and enable demonstrations of various relevant technical achievements.

Consider multiple distant users, each captured in real-time by their own visual capturing equipment, ranging from a single Kinect (simple user) to multiple Kinects and/or high-definition cameras (advanced users), as well as by non-visual sensors, such as Wearable Inertial Measurement Units (WIMUs) and multiple microphones. The captured data is either processed at the capture site to produce 3D reconstructions of users or directly coded and transmitted, enabling the rendering of multiple users in a shared environment, where they can “meet” and “interact” with each other or with the virtual environment via a set of gestures/movements.

More details

MediaMixer/VideoLectures.NET – Temporal Segmentation and Annotation Grand Challenge

Semantic VideoLectures.NET segmentation service

VideoLectures.NET mostly hosts lectures 1 to 1.5 hours long, linked with slides and enriched with metadata and additional textual content. With automatic temporal segmentation and annotation of the videos, we would improve the efficiency of our video search engine and be able to offer users the ability to search for sections within a video, as well as to recommend similar content. The challenge is therefore for participants to develop tools for automatic segmentation of videos that could then be implemented in VideoLectures.NET.

More details

Microsoft: MSR – Bing Image Retrieval Grand Challenge

The second Microsoft Research (MSR)-Bing Challenge (the “Challenge”) is organized in a dual-track format, one scientific and the other industrial. The two tracks share exactly the same task and timelines but have independent submission and ranking processes.

The scientific track will follow exactly what the MM13 Grand Challenge outlines. Papers will be submitted to MM13 and go through the review process, and the accepted ones will be presented at the conference. There, the authors of accepted papers will be requested to introduce their solutions, give a quick demo, and take questions from the judges and the audience. Winners will be selected for the Multimedia Grand Challenge Award based on their presentations.

The industrial track of the Challenge will be conducted over the Internet through a website maintained by Microsoft. Contestants participating in the industrial track are encouraged to take advantage of recent advancements in cloud computing infrastructure and public datasets, and must submit their entries in the form of publicly accessible REST-based web services (further specified below). Each entry will be evaluated against a test set created by Bing from queries received at Bing Image Search in the EN-US market. Due to the global nature of the Web, the queries are not necessarily limited to the English language as used in the United States.

More details

Submissions

Submissions should:

  • Significantly address one of the challenges posted on the web site.
  • Depict working, presentable systems or demos, using the grand challenge dataset where provided.
  • Describe why the system presents a novel and interesting solution.

Submission Guidelines

The submissions (max 4 pages) should be formatted according to ACM Multimedia formatting guidelines. Multimedia Grand Challenge reviewing is double-blind, so authors should not reveal their identity in the paper. The finalists will be selected by a committee consisting of academia and industry representatives, based on novelty, presentation, scientific interest of the approach and, for the evaluation-based challenges, on the performance against the task.

Finalist submissions will be published in the conference proceedings, and will be presented in a special event during the ACM Multimedia 2013 conference in Barcelona, Spain. At the conference, finalists will be requested to introduce their solutions, give a quick demo, and take questions from the judges and the audience.
Winners will be selected for Multimedia Grand Challenge awards based on their presentation.

Important Dates

Challenges Announced: February 25, 2013
Paper Submission Deadline: July 1, 2013
Notification of Acceptance: July 29, 2013
Camera-Ready Submission Deadline: August 12, 2013

Contact

For any questions regarding the Grand Challenges please email the Multimedia Grand Challenge Solutions Chairs:

Neil O’Hare (Yahoo!, Spain)
Yiannis Kompatsiaris (CERTH, Greece)

SIGMM Elections

Dear SIGMM members:
This year we have ACM SIGMM elections. All SIGMM members are invited to cast their vote for the three SIGMM officers:
– SIGMM Chair
– SIGMM Vice Chair
– SIGMM Director of Conferences.

Our candidates are
for Chair:
Dick C.A. Bulterman
Shih-Fu Chang

for Vice Chair:
Rainer Lienhart
Yong Rui

for Director of Conferences:
Susanne Boll
Nicu Sebe

You find all the information on the candidates as well as on ACM’s SIG election policies and procedures on this website:
http://www.acm.org/sigs/elections

Call for Nominations: SIGMM Award for Outstanding PhD Thesis

in Multimedia Computing, Communications and Applications

Award Description

This award will be presented at most once per year to a researcher whose PhD thesis has the potential of very high impact on multimedia computing, communication and applications, or gives direct evidence of such impact. A selection committee will evaluate contributions towards advances in multimedia, including multimedia processing, multimedia systems, multimedia network protocols and services, and multimedia applications and interfaces. The award will recognize members of the SIGMM community and the research contributions in their PhD theses, as well as the potential impact of these theses on the multimedia area. The selection committee will focus on candidates’ contributions as judged by the innovative ideas and potential impact resulting from their PhD work.

The award includes a US$500 honorarium, an award certificate of recognition, and an invitation for the recipient to receive the award at a current year’s SIGMM-sponsored conference, the ACM International Conference on Multimedia (ACM Multimedia). A public citation for the award will be placed on the SIGMM website, in the SIGMM Records e-newsletter as well as in the ACM e-newsletter.

Funding

The award honorarium, the award plaque of recognition and travel expenses to the ACM International Conference on Multimedia will be fully sponsored by the SIGMM budget.

Nomination Applications

Nominations are solicited by 1 May 2013, with an award decision to be made by August 30. This timing will allow the recipient to prepare for an award presentation at ACM Multimedia in the fall (October/November).

The initial nomination for a PhD thesis must relate to a dissertation deposited at the nominee’s Academic Institution between January and December of the year previous to the nomination. As discussed below, some dissertations may be held for up to three years by the selection committee for reconsideration. If the original thesis is not in English, a full English translation must be provided with the submission. Nominations for the award must include:

  1. PhD thesis (upload at:  https://cmt.research.microsoft.com/SIGMM2012/ )
  2. A statement summarizing the candidate’s PhD thesis contributions and potential impact, and justification of the nomination (two pages maximum);
  3. Curriculum Vitae of the nominee
  4. Three endorsement letters supporting the nomination, including the significant PhD thesis contributions of the candidate. Each endorsement should be no longer than 500 words, with a clear specification of the nominee’s PhD thesis contributions and their potential impact on the multimedia field.
  5. A concise statement (one sentence) of the PhD thesis contribution for which the award is being given. This statement will appear on the award certificate and on the website.

The nomination rules are:

  1. The nominee can be any member of the scientific community.
  2. The nominator must be a SIGMM member.
  3. No self-nomination is allowed.

If a particular thesis is considered to be of exceptional merit but not selected for the award in a given year, the selection committee (at its sole discretion) may elect to retain the submission for consideration in at most two following years. The candidate will be invited to resubmit his/her work in these years.

A thesis is considered to be outstanding if:

  1. Theoretical contributions are significant and application to multimedia is demonstrated.
  2. Applications to multimedia are outstanding, and the techniques are backed by solid theory, with clear demonstration that the algorithms can be applied in new domains – e.g., algorithms must be demonstrably scalable in application in terms of robustness, convergence and complexity.

The submission of nominations will be preceded by the call for nominations. The call for nominations will be widely publicized by the SIGMM awards committee and the SIGMM Executive Board at the different SIGMM venues – for example during SIGMM’s premier ACM Multimedia conference (at the SIGMM Business Meeting) – on the SIGMM web site, via the SIGMM mailing list, and via the SIGMM e-newsletter between September and December of the previous year.

Submission Process

  • Register an account at https://cmt.research.microsoft.com/SIGMM2012/  and upload one copy of the nominated PhD thesis. The nominee will receive a Paper ID after the submission.
  • The nominator must then collate other materials detailed in the previous section and upload them as supplementary materials, except the endorsement letters, which must be emailed separately as detailed below.
  • Contact your referees and ask them to send all endorsement letters to sigmmaward@gmail.com with the title: “PhD Thesis Award Endorsement Letter for [YourName]”. The web administrator will acknowledge the receipt and the submission CMT website will reflect the status of uploaded documents and endorsement letters.

It is the responsibility of the nominator to follow the process and make sure the documentation is complete. Theses with incomplete documentation will be considered invalid.

Selection Committee

For the period 2013-2014, the award selection committee consists of:

Call for Nominations: SIGMM Technical Achievement Award

for Outstanding Technical Contributions to Multimedia Computing, Communications and Applications

Award Description

This award is presented every year to a researcher who has made significant and lasting contributions to multimedia computing, communication and applications. Outstanding technical contributions through research and practice are recognized. Towards this goal, contributions are considered from academia and industry that focus on major advances in multimedia including multimedia processing, multimedia content analysis, multimedia systems, multimedia network protocols and services, and multimedia applications and interfaces. The award recognizes members of the community for long-term technical accomplishments or those who have made a notable impact through a significant technical innovation. The selection committee focuses on candidates’ contributions as judged by innovative ideas, influence in the community, and/or the technical/social impact resulting from their work. The award includes a $1000 honorarium, an award certificate of recognition, and an invitation for the recipient to present a keynote talk at a current year’s SIGMM-sponsored conference, the ACM International Conference on Multimedia (ACM Multimedia). A public citation for the award will be placed on the SIGMM website.

Funding

The award honorarium, the award certificate of recognition and travel expenses to the ACM International Conference on Multimedia are fully sponsored by the SIGMM budget.

Nomination Process

Nominations are solicited by May 31, 2013, with a decision made by July 30, 2013, in time to allow the above recognition and award presentation at ACM Multimedia 2013.

Nominations for the award must include:

  1. A statement summarizing the candidate’s accomplishments, description of the significance of the work, and justification of the nomination (two pages maximum);
  2. Curriculum Vitae of the nominee;
  3. Three endorsement letters supporting the nomination, including the significant contributions of the candidate. Each endorsement should be no longer than 500 words, with a clear specification of the nominee’s contributions and impact on the multimedia field;
  4. A concise statement (one sentence) of the achievement(s) for which the award is being given. This statement will appear on the award certificate and on the website.

The nomination rules are:

  1. The nominee can be any member of the scientific community.
  2. The nominator must be a SIGMM member.
  3. No self-nominations are allowed.
  4. Nominations that do not result in an award remain valid for two further years. After three years, a revised nomination can be resubmitted.
  5. The SIGMM elected officers as well as members of the Awards Selection Committee are not eligible.

Please submit your nomination to the award committee by email.

Committee

Previous Recipients

  • 2012: Hong-Jiang Zhang (pioneering contributions to and leadership in media computing including content-based media analysis and retrieval, and their applications).
  • 2011: Shih-Fu Chang (for pioneering research and inspiring contributions in multimedia analysis and retrieval).
  • 2010: Ramesh Jain (for pioneering research and inspiring leadership that transformed multimedia information processing to enhance the quality of life and visionary leadership of the multimedia community).
  • 2009: Lawrence A. Rowe (for pioneering research in continuous media software systems and visionary leadership of the multimedia research community).
  • 2008: Ralf Steinmetz (for pioneering work in multimedia communications and the fundamentals of multimedia synchronization).

Call for Nominations: ACM TOMCCAP Nicolas D. Georganas Best Paper Award

The Editor-in-Chief of ACM TOMCCAP invites you to nominate candidates for the “ACM Transactions on Multimedia Computing, Communications and Applications Nicolas D. Georganas Best Paper Award”.

The award is given annually to the author(s) of an outstanding paper published in ACM TOMCCAP in the previous calendar year, from January 1 until December 31. The award carries a plaque as well as travel funds to the ACM MM conference, where the awardee(s) will be honored.

Procedure

Nominations for the award must include the following:
– A statement describing the technical contributions of the nominated paper and the significance of the work. The statement should not exceed 500 words. No self-nominations are accepted.
– Two additional supporting statements by recognized experts in the field regarding the technical contribution of the paper and its significance to the respective field.

Only papers published in regular issues (no Special Issues) can be nominated.

Nominations will be reviewed by the Selection Committee, and the winning paper will then be chosen by a vote of the TOMCCAP Editorial Board.

Deadline

The deadline for nominations of papers published in 2012 (Volume 8) is June 15, 2013.

Contact

Please send your nominations to the Editor-in-Chief at steinmetz.eic@kom.tu-darmstadt.de
If you have questions, please contact the TOMCCAP information director at TOMCCAP@kom.tu-darmstadt.de

Further details can be found at http://tomccap.acm.org/

Call for TOMCCAP Special Issue Proposals

ACM Transactions on Multimedia Computing, Communications and Applications (ACM – TOMCCAP)

Deadline for Proposal Submission: May 1, 2013

Notification: June 1, 2013

http://tomccap.acm.org/

ACM – TOMCCAP is one of the world’s leading journals on multimedia. As in previous years, we are planning to publish a special issue in 2014. Proposals are accepted until May 1, 2013. Each special issue is the responsibility of its guest editors. If you wish to guest edit a special issue, please prepare a proposal as outlined below and send it via e-mail to the Editor-in-Chief, Ralf Steinmetz (steinmetz.eic@kom.tu-darmstadt.de).

Proposals should:

  • Cover a current or emerging topic in the area of multimedia
    computing, communications and applications;
  • Set out the importance of the special issue’s topic in that area;
  • Give a strategy for the recruitment of high quality papers;
  • Indicate a draft time-scale in which the special issue could be
    produced (paper writing, reviewing, and submission of final copies
    to TOMCCAP), assuming the proposal is accepted.

As in previous years, the special issue will be published as an online-only issue in the ACM Digital Library. This gives the guest editors greater flexibility in the review process and in the number of papers to be accepted, while still ensuring timely publication. Notification of acceptance for proposals will be given by June 1, 2013. Once a proposal is accepted, we will contact you to discuss the further process.

For questions please contact:

Ralf Steinmetz – Editor in Chief (steinmetz.eic@kom.tu-darmstadt.de)
Sebastian Schmidt – Information Director (TOMCCAP@kom.tu-darmstadt.de)

SIGMM Education Column

The SIGMM Education Column of this issue highlights a new book, titled “Visual Information Retrieval using Java and LIRE,” which gives an introduction to the fields of information retrieval and visual information retrieval and presents selected methods, as well as their use and implementation in Java and, more specifically, in LIRE, a Java CBIR library. The book is authored by Dr. Mathias Lux of Klagenfurt University, Austria, and Prof. Oge Marques of Florida Atlantic University, and is published in the Synthesis Lectures on Information Concepts, Retrieval, and Services by Morgan & Claypool.

The basic motivation for writing this book was the need for a fundamental course book containing just the knowledge necessary to get students started with content-based image retrieval. The book is based on lectures given by the authors over recent years and has been designed to fulfill that need. It will also give developers of content-based image retrieval solutions a head start by explaining the most relevant concepts and practical requirements.

The book begins with a short introduction, followed by explanations of information retrieval and retrieval evaluation. Visual features are then explained, and practical problems and common solutions are outlined. Indexing strategies for visual features, including linear search, nearest-neighbor search, hashing, and bags of visual words, are discussed next, and the use of these strategies with LIRE is shown. Finally, LIRE is described in detail, to allow the library to be employed in various contexts and its functionality to be extended.

There is also a companion website for the book (http://www.lire-project.net), which gives pointers to additional resources and will be updated with slides, figures, teaching materials and code samples.

Interview with ACM Fellow and SIGMM Chair Prof Klara Nahrstedt

Prof. Dr. Klara Nahrstedt, SIGMM Chair

SIGMM Editor: “Why do societies such as ACM offer Fellows status to some of its members?”

Prof Klara Nahrstedt: The ACM society celebrates, through its ACM Fellows program, the exceptional contributions of the leading members of the computing field. These individuals have helped to enlighten researchers, developers, practitioners and end-users of computing and information technology throughout the world. The new ACM Fellows join a distinguished list of colleagues to whom ACM and its members look for guidance and leadership in computing and information technology.

SIGMM Editor: “What is the significance for you as an individual researcher in becoming an ACM Fellow?”

Prof Klara Nahrstedt: Receiving the ACM Fellow status represents a great honor for me due to the high distinction of this award in the computing community. The ACM Fellow award recognizes my own research in the area of “Quality of Service (QoS) management for distributed multimedia systems”, as well as the joint work in this area with my students and colleagues at my home institution, the University of Illinois at Urbana-Champaign, and at other institutions, research labs, and companies with whom I have collaborated over the years. Furthermore, becoming an ACM Fellow allows me to continue to push new ideas of QoS in distributed multimedia systems in three societal domains: trustworthy cyber-physical infrastructure for smart-grid environments, collaborative immersive spaces in tele-health care, and robust mobile multimedia systems in the airline-airplane maintenance ecosystem.

SIGMM Editor: “How is this recognition perceived by your research students, department, and University?”

Prof Klara Nahrstedt: My research students, department, and university are delighted that I have received the ACM Fellow status, since this type of award very much reflects the high quality of the students who are admitted to our department and with whom I work, of the colleagues I interact with, and of the resources provided to me by the department and university.

SIGMM Editor: “You have been one of the important torch bearers of the SIGMM community. What does this recognition imply for the SIGMM Community?”

Prof Klara Nahrstedt: The SIGMM community is a relatively young community, having only recently celebrated 20 years of existence. However, as the multimedia community matures, it is important for our community to promote its outstanding researchers and assist them towards the ACM Fellow status. Furthermore, multimedia technology is becoming ubiquitous in all facets of our lives; hence, it is of great importance that SIGMM leaders, especially its ACM Fellows, are at the table with other computing researchers to guide and drive future directions in computing and information technologies.

SIGMM Editor: “How will this recognition influence the SIGMM community?”

Prof Klara Nahrstedt: I hope that my ACM Fellow status will influence the SIGMM community in at least three directions: (1) it will motivate young researchers in academia and industry to work towards high-impact research accomplishments in the multimedia area that will lead to ACM Fellow status at a later stage of their careers, (2) it will encourage female researchers to strive towards recognition of their work through the ACM Fellow status, and (3) it will enlarge the distinguished group of ACM Fellows within SIGMM, who in turn will be able to promote the next generation of multimedia researchers to join the ACM Fellows’ ranks.