Multidisciplinary Community Spotlight: Assistive Augmentation

Author: Jochen Huber
Affiliation: Synaptics
Editors: Cynthia C. S. Liem and Jochen Huber


Emphasizing the importance of neighboring communities for our work in the field of multimedia was one of the primary objectives we set out with when we started this column about a year ago. In past issues, we gave related communities a voice through interviews and personal accounts. For instance, in the third issue of 2017, Cynthia shared personal insights from the International Society of Music Information Retrieval [4]. This issue continues the spotlight series.

I have been involved with the Assistive Augmentation community since its inception. The field is multidisciplinary, sitting at the intersection of accessibility, assistive technologies, and human augmentation. In this issue, I briefly reflect on my personal experiences and research work within the community.

First, let me provide a high-level view of Assistive Augmentation. Its general idea is that of cross-domain assistive technology: instead of confining a technology to silos defined by sensorial capability, the approach places it on a continuum of usability. As an example, a reading aid for people with visual impairments enables access to printed text. At the same time, the reading aid can also be used by those with unimpaired vision for other applications such as language learning. In essence, the field is concerned with the design, development, and study of technology that substitutes, recovers, empowers, or augments physical, sensorial, or cognitive capabilities, depending on specific user needs (see Figure 1).

Figure 1.  Assistive Augmentation Continuum

Now let us take a step back. I joined the MIT Media Lab as a postdoctoral fellow in 2013, pursuing research on multi-sensory cueing for mobile interaction. With my background in user research and human-computer interaction, I was immediately attracted to an ongoing project at the lab, led by Roy Shilkrot, Suranga Nanayakkara, and Pattie Maes, that studied how members of the MIT visually impaired and blind user group (VIBUG) use assistive technology. People in that group are particularly tech-savvy. Through them, I came to know products like the OrCam MyEye, a device priced at about 2,500–4,500 USD that recognizes text, objects, and so forth. Back in 2013, it had a large footprint and made its users really stand out. To briefly summarize our general observations: many of the tools we encountered during regular VIBUG meetings were highly specialized for this very target group. That is, of course, a good thing, since it focuses directly on the actual end user. However, we also concluded that it locks the products into silos of usability defined by their end users' sensorial capabilities.

These anecdotal observations bring me back to the general idea of Assistive Augmentation. To explore the idea further, we proposed to hold a workshop at a conference, jointly with colleagues in neighboring communities. With ACM CHI attracting researchers from many different fields, we felt it would be a good fit to test the waters and see whether we could draw enough interest across communities. Our proposal was successful: the workshop was held in 2014 and set the stage for thinking about, discussing, and sketching out facets of Assistive Augmentation. As intended, the workshop attracted a very diverse crowd from different fields. Being able to discuss the opportunities and potential of Assistive Augmentation with such a group was immensely helpful and contributed significantly to our ongoing efforts to define the field. This is a practice I would encourage everyone at a similar stage to follow.

As a tangible outcome of this workshop, our community decided to pursue a jointly edited volume, which Springer published earlier this year [3]. The book illustrates two main areas of Assistive Augmentation by example: (i) sensory enhancement and substitution and (ii) design for Assistive Augmentation. Peers contributed comprehensive reports on case studies that serve as lighthouse projects exemplifying Assistive Augmentation research practice. In addition, the book features field-defining articles that introduce each of the two main areas (e.g., [2]).

Many relevant areas have yet to be explored, for instance, ethical issues as well as the quality of augmentations and their appropriations. Augmenting human perception, another important research thrust, has recently been discussed in both the SIGCHI and SIGMM communities. Last year, a workshop on "Amplification and Augmentation of Human Perception" was held by Albrecht Schmidt, Stefan Schneegass, Kai Kunze, Jun Rekimoto, and Woontack Woo at ACM CHI [5]. Likewise, one of last year's keynotes at ACM Multimedia, by Achin Bhowmik, focused on "Enhancing and Augmenting Human Perception with Artificial Intelligence" [1]. These ongoing discussions in academic communities underline the importance of investigating, shaping, and defining the intersection of assistive technologies and human augmentation. Academic research is one avenue that must be pursued, with work being disseminated at dedicated conference series such as Augmented Human [6]. Other avenues that highlight and demonstrate the potential of Assistive Augmentation technology include sports, as discussed within the Superhuman Sports Society [7]. For example, the Cybathlon was held for the very first time in 2016, where athletes with "disabilities or physical weakness use advanced assistive devices […] to compete against each other" [8].

Looking back at how the community came about, I conclude that organizing a workshop at a large academic venue like CHI was an excellent first step for establishing the community. In fact, the workshop created fantastic momentum within the community. However, focusing entirely on a jointly edited volume as the main tangible outcome of the workshop had several drawbacks. In retrospect, the publication timeline was far too long, rendering it impossible to capture the dynamics of an emerging field. But indeed, this cannot be the objective of a book publication; it should have been the objective of follow-up workshops in neighboring communities (e.g., at ACM Multimedia) or special issues in a journal with a much shorter turnaround. With our book project now concluded, we aim to build on past momentum with a forthcoming special issue on Assistive Augmentation in MDPI's Multimodal Technologies and Interaction journal. I am eagerly looking forward to what is next and to our communities' joint work across disciplines towards pushing our physical, sensorial, and cognitive abilities.


[1] Achin Bhowmik. 2017. Enhancing and Augmenting Human Perception with Artificial Intelligence Technologies. In Proceedings of the 2017 ACM on Multimedia Conference (MM ’17), 136–136.

[2] Ellen Yi-Luen Do. 2018. Design for Assistive Augmentation—Mind, Might and Magic. In Assistive Augmentation. Springer, 99–116.

[3] Jochen Huber, Roy Shilkrot, Pattie Maes, and Suranga Nanayakkara (Eds.). 2018. Assistive Augmentation. Springer Singapore.

[4] Cynthia Liem. 2018. Multidisciplinary column: inclusion at conferences, my ISMIR experiences. ACM SIGMultimedia Records 9, 3 (2018), 6.

[5] Albrecht Schmidt, Stefan Schneegass, Kai Kunze, Jun Rekimoto, and Woontack Woo. 2017. Workshop on Amplification and Augmentation of Human Perception. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 668–673.

[6] Augmented Human Conference Series. Retrieved June 1, 2018 from

[7] Superhuman Sports Society. Retrieved June 1, 2018 from

[8] Cybathlon. Cybathlon – moving people and technology. Retrieved June 1, 2018 from


About the Column

The Multidisciplinary Column is edited by Cynthia C. S. Liem and Jochen Huber. Every other edition, we will feature an interview with a researcher performing multidisciplinary work, or a column of our own hand. For this edition, we feature a column by Jochen Huber.

Editor Biographies

Dr. Cynthia C. S. Liem is an Assistant Professor in the Multimedia Computing Group of Delft University of Technology, The Netherlands, and pianist of the Magma Duo. She initiated and co-coordinated the European research project PHENICX (2013–2016), focusing on technological enrichment of symphonic concert recordings with partners such as the Royal Concertgebouw Orchestra. Her research interests consider music and multimedia search and recommendation, and increasingly shift towards making people discover new interests and content which would not trivially be retrieved. Beyond her academic activities, Cynthia gained industrial experience at Bell Labs Netherlands, Philips Research and Google. She was a recipient of the Lucent Global Science and Google Anita Borg Europe Memorial scholarships, the Google European Doctoral Fellowship 2010 in Multimedia, and a finalist of the New Scientist Science Talent Award 2016 for young scientists committed to public outreach.


Dr. Jochen Huber is a Senior User Experience Researcher at Synaptics. Previously, he was an SUTD-MIT postdoctoral fellow in the Fluid Interfaces Group at MIT Media Lab and the Augmented Human Lab at Singapore University of Technology and Design. He holds a Ph.D. in Computer Science and degrees in both Mathematics (Dipl.-Math.) and Computer Science (Dipl.-Inform.), all from Technische Universität Darmstadt, Germany. Jochen’s work is situated at the intersection of Human-Computer Interaction and Human Augmentation. He designs, implements and studies novel input technology in the areas of mobile, tangible & non-visual interaction, automotive UX and assistive augmentation. He has co-authored over 60 academic publications and regularly serves as program committee member in premier HCI and multimedia conferences. He was program co-chair of ACM TVX 2016 and Augmented Human 2015 and chaired tracks of ACM Multimedia, ACM Creativity and Cognition and ACM International Conference on Interactive Surfaces and Spaces, as well as numerous workshops at ACM CHI and IUI. Further information can be found on his personal homepage:

