Could you tell us a bit about your background, and what the road to your current position was?
Well, this road is marked by wonderful people who inspired me and sparked my interest in the research fields I pursued. In addition, it is marked by two of my major deficiencies: I cannot stop investigating the role of my research in the larger context of systems and disciplines, and I have the strong desire to see “inventions” by researchers make their way into practice, i.e., turn into “innovations”. The first of these deficiencies led to the unusually broad research interests of my lab and myself, and the second one made me spend a substantial part of my career conceptualizing and leading technology transfer organizations, for the most part industry-funded ones.
More precisely, I started to cooperate with Digital Equipment Corp. (DEC) as early as my Diploma thesis. DEC was then the second-largest computer manufacturer and spearheaded the effort to build affordable “computers for every engineering group”. My boss, the late Professor Krüger, gave me a lot of freedom, so I was able to turn the research cooperation into DEC’s first funded European research project and later into their first research center in Europe, conceived as a campus-based organization that worked very closely with academia. I am proud to say that I was allowed to conceptualize this academia-industry cooperation and that it was later copied – often with my help and consultancy – many times across the globe, by several companies and governments. I acted as the founding director of the first such center, but at that time I was already determined to follow the academic career path. At the age of 32, I was appointed professor at the University of Kaiserslautern. Over the years, I was offered positions at prestigious universities in Canada, France, and the Netherlands, and I accepted positions in Austria and Germany (Karlsruhe, Darmstadt). My sabbaticals led me to Australia, France, and Canada, and for the most part to California (San Diego and four times Palo Alto). In retrospect, it was exciting to start at a new academic position every couple of years in the beginning, but it was also exciting to “finally settle” in Darmstadt and to build the strengths and connections there that were necessary to drive even larger cooperative projects than before.
The Telecooperation Lab embraces many different disciplines and will celebrate its 20th birthday next year. How did these disciplines evolve over the years?
It started with my excitement for distributed systems, based on solid knowledge about computer networks. At the time (the early 1980s), little more than point-to-point communication between file transfer or e-mail agents existed, and neither client-server nor multi-party systems were common. My early interest in this field concerned software engineering for distributed systems, ranging from design and specification support via programming and simulation to debugging and testing. Soon, multimedia became feasible due to advancements in computer hardware – and in peripherals: think of the late LaserDisc, a clumsy predecessor of today’s DVDs and BDs. Multimedia grabbed my immediate attention since numerous problems arose from the desire to enable it in a distributed manner. Almost at the same time, e-learning became my favorite application field since I saw the great potential of distributed multimedia for this domain, given the challenges of global education and of the knowledge society. I believe that technology has come a long way with respect to e-learning, but we are still far from mastering the challenges of technology-supported education and knowledge work.
Soon came the time when computers left the desk and became ubiquitous. From my experience in multimedia and e-learning, it was obvious to me that human-computer interaction would be a key to the success of ubiquitous computing. Simply extrapolating the keyboard-mouse-monitor interaction paradigm to a future where tens, hundreds, or thousands of computers would surround an individual – what a nightmare! This threat of a dystopia made us work on implicit and tangible interaction, hybrid cyber-physical knowledge work, novel mobile and workspace interaction, augmented and virtual reality, and custom 3D-printed interaction – HCI became our “new multimedia”.
Regarding application domains, our research in supporting the knowledge society evolved towards supporting ‘smart environments and spaces’, a natural consequence of the evolution of our core research towards networked ubiquitous computers. My continued interest in turning inventions into innovations made us work on urgent problems of industry – mainly revolving around business processes – and on computers that expect the unexpected: emergencies and disasters. Both domains were a nice fit since they could benefit from appropriate smart spaces. Looking at smart spaces of ever larger scale, we naturally hit the challenge of supporting smart cities and critical infrastructures.
Then, a bit more than ten years ago, our ubiquitous computing research made us encounter and realize the “ubiquity” of related cybersecurity threats at large, in particular threats to privacy, the need for appropriate trustworthiness estimation, and the detection of networked attacks. These cybersecurity research activities were, like those in HCI, natural consequences of my aforementioned deficiency: my desire to take a holistic look at systems – in my case, ubiquitous computing systems.
Finally, the fact that we adapt, apply, and sometimes advance machine learning concepts in our research is nothing but a natural consequence of the utility of those concepts for our purposes.
How would you describe the interrelationship between those disciplines? Do these benefit from cross-fertilization effects and if so, how?
In my answer to your last question, I unwittingly used the word “natural” several times. This already shows that research on ubiquitous computing and smart spaces with a holistic slant almost inevitably leads you to look at the different aspects we investigate. These aspects just happen to concern different research disciplines in computer science. The starting point is the fact that ubiquitous computing devices are not so much general-purpose computers as dedicated components. Networking and distributed systems support are therefore a prerequisite for orchestrating these dedicated skills, forming what can be called a truly smart space. Such spaces are usually meant to assist humans, so that multimedia – conveying “humane” information representations – and HCI – for interacting with many cooperating dedicated components – are indispensable. Next, how can a smart space assist a human if it is subject to cyber-vulnerabilities? Instead, it has to enforce its users’ concerns with respect to privacy, trust, and intended behavior. Finally, true smartness is by nature bound to adopting and adapting best-of-breed AI techniques.
You also asked about cross-fertilizing effects. Let me share just three of the many examples in this respect. (i) Our AI-related work cross-fertilized our cyberattack defense. (ii) Conversely, the AI work introduced new challenges in distributed and networked systems, driving our research on edge computing forward. (iii) New requirements are added to this edge computing research by HCI since we want to support collaborative AR applications at large, i.e., city-wide, scale.
Moreover, cross-fertilization goes beyond the research fields of computer science that we integrate in my own lab. As you know, I headed and still head highly interdisciplinary doctoral schools, formerly on e-learning, and now on privacy and trust for mobile users. When you work with researchers from sociology, law, economics, and psychology on topics like privacy-protecting smartphones, you first consider these topics as pertaining to computer science. Soon, you realize that the other disciplines dealt with issues like privacy and trust long before computers existed. Not only can you learn a lot from the deep and concise findings brought forth by these disciplines over decades or centuries, you can also quickly establish a very fruitful cooperation with researchers from these disciplines who address the new challenges of mobile and ubiquitous computing from their perspective. I am convinced that the unique role of Xerox PARC in the history of computer science, with so many of the most fundamental innovations originating there, is mainly a consequence of their highly interdisciplinary approaches, combining the “science of computers” with the “sciences concerned with humans”.
Please tell us about the main challenges you faced when uniting such diverse topics under the Telecooperation Lab’s multi-disciplinary umbrella.
The major challenge lies in a balancing act for each PhD thesis and researcher. On the one hand, the work must be strictly anchored in a narrow academic field; as a young researcher, you are lucky if you can make a bit of a name for yourself in a single narrow community – which is a prerequisite for any further academic career steps, for many reasons. Trying to get rooted in more than one community during a PhD would be what I call academic suicide. The other side of the balancing act, for us, is the challenge of keeping that narrow and focused PhD well connected to the multi-area context of my lab – and for the members of the doctoral schools, even connected to the respective multi-disciplinary context. While this second side is not a prerequisite for a PhD, it is an inexhaustible source of both new challenges for, and new approaches to, the respective narrow PhD fields. In fact, reaching out to other fields while mastering your own field costs some additional time; in my experience, however, this additional time is easily recouped in the search for original scientific contributions that will earn you a PhD. The reason is that the cross-fertilization from a multi-area or even multi-disciplinary setting will lead you to original contributions much faster, due to a fresh look at both challenges and approaches.
When it comes to postdoctoral researchers, things are a bit different since they are already rooted in a field, which means that they can reach out a bit further to other areas and disciplines, thereby creating a unique little research domain in which they can make a name for themselves for their further career. My aim for my postdocs is to help them attain a status where, when I mention their name in a pertinent academic circle, my colleagues would say “oh, I know, that’s the guy who is working on XYZ”, with XYZ being a well-defined subdomain of research which that postdoc was instrumental in shaping.
The Telecooperation Lab is part of CRISP, the National Research Center for Applied Cybersecurity in Germany, which embraces many disciplines as well. Can you give us some insights into multidisciplinarity in such an environment?
Let me start by explaining that we established the first large cybersecurity research center in Darmstadt more than ten years ago; CRISP in its current form as a national center has existed only for a short time. By the way, CRISP will have to be renamed again for legal reasons (sigh!). Therefore, let me address our cybersecurity research in general. This research involves a very broad spectrum of disciplines, from physicists who address quantum-related aspects to psychologists who investigate usable security and mental models. The most fruitful cooperations always concern areas that establish a “mutual benefits and challenges” relationship with the computer science side of cybersecurity. Two examples that come to mind are law and economics.
Computer science solutions to security and privacy always have limits. For instance, cryptographic solutions are always linked to trust at their boundaries (cf. trusted certificate authorities, trusted implementations of theoretically “proven-secure” protocols, trust in the absence of insider threats, etc.). At such boundaries, law must punish what technology cannot guarantee, otherwise the systems remain insecure. In the reverse direction, new technical possibilities and solutions must be reflected in law. A prominent example is the power of AI: privacy law, such as the European Union’s GDPR, holds data-processing organizations liable if they process personally identifiable information, PII for short. If data is not considered to be PII, it can be released. Now what if, three years later, a novel AI algorithm can link that data to some background data and infer PII from it? Privacy law needs a considerable update due to these new technical possibilities.
I could go on and on about these mutual benefits and challenges, but let me just quickly mention one more example from economics: if technology comes up with new privacy-preserving schemes, then these schemes may open up new opportunities for privacy-respecting services. In order for such services to succeed in the market, we need to learn about possible corresponding business models. This kind of economics research may in turn lead to new challenges for technical approaches, and so on. Such “cycles of innovation” across different disciplines are among the most exciting facets of interdisciplinary research.
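To make this linkage risk concrete, here is a minimal, purely illustrative Python sketch (all records, names, and field names are hypothetical) of how a dataset released without names can later be re-identified by joining its quasi-identifiers against background data:

```python
# Hypothetical illustration: re-identification via quasi-identifiers.
# The "released" dataset contains no names, so at release time it may
# not be treated as PII; background data collected later allows linking
# records back to individuals. All data below is made up.

released = [  # anonymized records: zip code, birth year, diagnosis
    {"zip": "64289", "birth_year": 1978, "diagnosis": "diabetes"},
    {"zip": "64289", "birth_year": 1991, "diagnosis": "asthma"},
]

background = [  # e.g., scraped public profiles that do carry names
    {"name": "Alice Example", "zip": "64289", "birth_year": 1978},
    {"name": "Bob Example",   "zip": "60311", "birth_year": 1991},
]

def link(released_rows, background_rows):
    """Join both datasets on the quasi-identifiers (zip, birth_year)."""
    index = {(p["zip"], p["birth_year"]): p["name"] for p in background_rows}
    for row in released_rows:
        name = index.get((row["zip"], row["birth_year"]))
        if name is not None:  # a match re-identifies the "anonymous" record
            yield name, row["diagnosis"]

for name, diagnosis in link(released, background):
    print(f"{name} re-identified; sensitive attribute: {diagnosis}")
```

The better the attacker’s background data – and AI vastly improves the ability to collect and link such data – the more records become unique on seemingly harmless attribute combinations, which is why the mere absence of names does not keep data non-PII forever.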
Could you name a grand challenge of multidisciplinary research in the Multimedia community?
Oh, I think I have a quite decided opinion on this one! We clearly live in the era of the fusion of bits and atoms – and this metaphor is of course just one way to characterize what is going on. Firstly, in the cyber-physical society that we are currently creating, the digital components are becoming the “brains” of complex real-world systems such as the transport system, energy grids, industrial production, etc. This development already creates significant challenges for our future society, but beyond this trend and directly related to multimedia, there is an even more striking development: we increasingly feed the human senses by means of digitally created or processed signals – and hence, basically, by means of multimedia. TV and telephone, social media and Web-based information, Skype conversations and meetings, you name it: our perception of objects, spaces, and of our conversation partners – in other words, of the physical world – is conveyed, augmented, altered, and filtered by means of computers and computer networks.
Now, you will ask what I consider the challenge in this development, which has been going on for decades. Consider that this field is currently “jumping forward” due to AI and other advancements: it is the challenge for interdisciplinary multimedia research to properly preserve the distinction between “real” and “imaginary” in all cases where we would or should preserve it. To cite a field that is only marginally concerned here, let me mention games: in games, it is – mostly – desired to blur the distinction between the real and the virtual. However, if you think of fake news or of highly persuasive social media election campaigns, you get an idea of what I mean. The challenge here is highly multidisciplinary: for instance, many computer science areas already have to come together just to check where in the media processing chain we can intervene in order to keep a handle on the real-versus-virtual distinction. Way beyond that, we need many disciplines to work hand in hand in order to figure out what we want and how we can achieve it. We have to recognize that many long-existing trends are on the verge of jumping forward to an unprecedented level of perfection. We must figure out what society needs and wants. It is reckless to leave this development to economic or even malicious forces, or to tech nerds who invent their own ethics. The examples are endless; let me cite a few in addition to the fake news and manipulative election campaigns mentioned above.
Machine learning experts may call me paranoid, pointing out that detecting manipulated photos or deepfake videos is still a much simpler machine learning task than creating them. While this is true, I fear that it may change in the future. Moreover, alluding to the multidisciplinary challenges mentioned, let me remind you that we currently do not have processes in place that would systematically and sufficiently check content for authenticity.
As another example, humans are told they are “valued customers”, but they have long been considered consumers at best. More recently, they have been downgraded to mass objects in which purchase desires are first created and then directed – by sophisticated algorithms and with ever more convincing multimedia content. Meanwhile, in the background, price discrimination is rising to new levels of sophistication. In a different arena, questionable political powers are increasingly capable of destabilizing democracies from a safe seat across the Internet, using curated and increasingly machine-created manipulative media.
As a next big wave, we are witnessing a giants’ race among global IT players for the crown in the augmented and virtual reality markets. What is still a niche area may become widespread technology tomorrow – consider that the first successful smartphone was introduced only a little more than a decade ago and that, meanwhile, the majority of the world’s population uses smartphones to access the Internet. A similar success story may lie ahead for AR/VR: at the latest when a generation grows up wearing AR contact lenses, noise-cancelling earplugs, and haptics-augmented clothes, reality will no longer be threatened by fake information; rather, digitally created, imaginary content will be reality, rendering the question “what is real?” obsolete. Of course, the list of technologies and application domains mentioned here is by far non-exhaustive.
The problem is that all these trends appear to be evolutionary, when they are in fact disruptive. Marketing influenced customers centuries ago, fake news existed even longer, and the movie industry has always had a leading role in imaginary technology, from chroma keying to the most advanced animation techniques. Therefore, the new and upcoming AI-powered multimedia technology is not (yet) recognized as disruptive and hence as a considerable threat to the fundamental rules of our society. This is a key reason why I consider this field a grand interdisciplinary research challenge. We definitely need far more than technology solutions.
At the outset, we need to come to grips with appropriate ethical and socio-political norms. To what extent do we want to keep and protect the governing rules of society and humankind? Which changes do we want, and which ones not? What does all that mean in terms of governing rules for AI-powered multimedia, for the merging of the real and the virtual? Apart from basic research, we need a participatory approach that involves society in general and the rising generations in particular. Since we cannot expect these fundamental societal processes to lead to a final decision, we have to advance the other research challenges in parallel. For instance, we need a better understanding of the social implications and psychological factors related to the merging of the real and the virtual. Technology-related research must be intertwined with these efforts; as to the technology fields concerned, multimedia research must go hand in hand with others like AI, cybersecurity, privacy, etc. – the selection depends on the particular questions addressed. This research must be further intertwined with human-related fields such as law: laws must again regulate what technology cannot solve, and reflect what technology can achieve for good or evil.
In all this, I have not yet mentioned further related issues such as biometric access control: as we try to make access control more user-friendly, we rely on biometric data, most of which are variants of multimedia, namely speech, face or iris photos, gait, and others. The difference between real and virtual remains important here, and we can expect enormous malicious efforts to blur it. You see, there really is a multidisciplinary grand challenge for multimedia.
How and in what form do you feel we as academics can be most impactful?
During the first half of my career, computer science was still in that wonderful gold diggers’ era: if you had a good idea and just decent skills to convey it to your academic peers, you could count on that idea being heard, valued, and – if it was socially and economically viable – realized. Since then, we have moved to a state in which good research results are not even half the story. Many seemingly marginal factors drive innovation today. No wonder we have reached a point at which many industry players think that innovation should be driven by the company’s product groups in a closed loop with customers, or by startups that can be acquired if successful, or – for the small part that requires long-term research – by a few top research institutions. I am confident that this opinion will be replaced by a new craze among CEOs in a few years. Meanwhile, academics should do their homework in three ways. (a) They should look for the kernel of truth in the current anti-academic trend and improve academic research accordingly. (b) They should orient their research towards the unique strengths of academia, like the possibility to carry out true interdisciplinary research at universities. (c) They should tune their role, their words, and their deeds to the much-increased societal responsibilities highlighted above.
Academics from computer science trigger upheaval and reshaping of our society to an ever greater extent; it is time for them to live up to their responsibility.
Bios
Prof. Dr. Max Mühlhäuser is head of the Telecooperation Lab at Technische Universität Darmstadt, Informatics Dept. His lab conducts research on smart ubiquitous computing environments for the ‘pervasive Future Internet’ in three research fields: middleware and large network infrastructures, novel multimodal interaction concepts, and human protection in ubiquitous computing (privacy, trust, & civil security). He heads or co-supervises various multilateral projects, e.g., on the Internet-of-Services, smart products, ad-hoc and sensor networks, and civil security; these projects are funded by the national funding agency DFG, the EU, German ministries, and industry. Max heads the doctoral school Privacy and Trust for Mobile Users and serves as deputy speaker of the collaborative research center MAKI on the Future Internet. Max has also led several university-wide programs that fostered E-Learning research and application. In his career, Max has put a particular emphasis on technology transfer, e.g., as the founder and mentor of several campus-based industrial research centers.
Max has over 30 years of experience in research and teaching in areas related to Ubiquitous Computing (UC), Networks, Distributed Multimedia Systems, E-Learning, and Privacy & Trust. He has held permanent or visiting professorships at the Universities of Kaiserslautern, Karlsruhe, Linz, Darmstadt, Montréal, Sophia Antipolis (Eurecom), and San Diego (UCSD). In 1993, he founded the TeCO institute (www.teco.edu) in Karlsruhe, Germany, which became one of the pacemakers for Ubiquitous Computing research in Europe. Max regularly publishes in Ubiquitous and Distributed Computing, HCI, Multimedia, E-Learning, and Privacy & Trust conferences and journals and has authored or co-authored more than 400 publications. He was and is active in numerous conference program committees, as organizer of several annual conferences, and as a member of editorial boards or guest editor for journals like Pervasive Computing, ACM Multimedia, Pervasive and Mobile Computing, Web Engineering, and Distance Learning Technology.
Editor Biographies
Dr. Cynthia C. S. Liem is an Assistant Professor in the Multimedia Computing Group of Delft University of Technology, The Netherlands, and pianist of the Magma Duo. She initiated and co-coordinated the European research project PHENICX (2013–2016), focusing on technological enrichment of symphonic concert recordings with partners such as the Royal Concertgebouw Orchestra. Her research interests include music and multimedia search and recommendation, and are increasingly shifting towards making people discover new interests and content that would not trivially be retrieved. Beyond her academic activities, Cynthia gained industrial experience at Bell Labs Netherlands, Philips Research and Google. She was a recipient of the Lucent Global Science and Google Anita Borg Europe Memorial scholarships, the Google European Doctoral Fellowship 2010 in Multimedia, and a finalist of the New Scientist Science Talent Award 2016 for young scientists committed to public outreach.
Dr. Jochen Huber is a Senior User Experience Researcher at Synaptics. Previously, he was an SUTD-MIT postdoctoral fellow in the Fluid Interfaces Group at MIT Media Lab and the Augmented Human Lab at Singapore University of Technology and Design. He holds a Ph.D. in Computer Science and degrees in both Mathematics (Dipl.-Math.) and Computer Science (Dipl.-Inform.), all from Technische Universität Darmstadt, Germany. Jochen’s work is situated at the intersection of Human-Computer Interaction and Human Augmentation. He designs, implements and studies novel input technology in the areas of mobile, tangible & non-visual interaction, automotive UX and assistive augmentation. He has co-authored over 60 academic publications and regularly serves as a program committee member for premier HCI and multimedia conferences. He was program co-chair of ACM TVX 2016 and Augmented Human 2015 and chaired tracks of ACM Multimedia, ACM Creativity and Cognition and the ACM International Conference on Interactive Surfaces and Spaces, as well as numerous workshops at ACM CHI and IUI. Further information can be found on his personal homepage: http://jochenhuber.com