Miriam Redi (Wikimedia Foundation / King’s College London, UK)
Xavier Alameda-Pineda (INRIA, France)
The inclusiveness and transparency of automatic information processing methods is a research topic that has attracted growing interest in recent years. In the era of digitized decision-making software, where the push for artificial intelligence is happening worldwide and at different strata of the socio-economic fabric, the consequences of biased, unexplainable and opaque methods for content analysis can be dramatic.
Several initiatives have arisen to address these issues in different communities. From 2014 to 2018, the FAT/ML workshop was co-located with the International Conference on Machine Learning. This year, the FATE/CV workshop (E standing for Ethics) was co-located with the Conference on Computer Vision and Pattern Recognition. Similarly, the FAT/MM workshop is co-located with ACM Multimedia 2019. These initiatives, and specifically the FAT/ML workshop series, converged in the birth of the ACM FAT* conference, which held its first edition in New York in 2018 and its second this year in Atlanta, with the third edition taking place next year in Barcelona.
ACM FAT* is a very recent conference dedicated to bringing together a multidisciplinary community of researchers from computer science, law, social sciences, and humanities to investigate and tackle issues in this emerging area. The focus of the conference is not limited to technological solutions regarding potential bias; it also addresses the question of whether decisions should be outsourced to data- and code-driven computing systems at all. This question is very timely given the impressive number of algorithmic systems, adopted in a growing number of contexts, fueled by big data. These systems filter, sort, score, recommend, personalize, and shape human experience. They increasingly make or inform decisions with major impact on credit, insurance, healthcare, and immigration, to cite a few key fields with inherently critical risks.
In this context, we believe that the multimedia community should put the necessary efforts in the same direction, investigating how to transform current technical tools and methodologies to derive computational models that are transparent and inclusive. Information processing is one of the fundamental pillars of multimedia: whether data is processed for content delivery, experience or systems applications, the automatic analysis of content is used in every corner of our community. Typical risks of large-scale computational models include model bias and algorithmic discrimination. These risks become particularly prominent in the multimedia field, which has historically focused on user-centered technologies. This is why it is crucial to bring the notions of fairness, accountability and transparency into ACM Multimedia.
ACM Multimedia 2019 in Nice will benefit from two main initiatives to start engaging with the movement around Fairness, Accountability and Transparency. First, one of the workshops co-located with ACM Multimedia 2019 (as mentioned above) will deal with Fairness, Accountability and Transparency in Multimedia (FAT/MM, held on October 27th). The FAT/MM workshop is the first attempt to foster research efforts addressing fairness, accountability and transparency issues in the multimedia field. To ensure a healthy and constructive development of the best multimedia technologies, this workshop offers a space to discuss how to develop fair, unbiased, representative, and transparent multimedia models, bringing together researchers from different areas to present computational solutions to these issues.
Second, one of the two selected Conference Ambassadors of SIGMM for 2019 attended the FATE/CV workshop at CVPR earlier this year, identified a speaker whose work could be of great interest to the multimedia field, and invited them to FAT/MM to meet and discuss with the multimedia community. The selected paper covers topics such as age bias in datasets and the impact this could have on real-world applications, such as autonomous driving or recommendation systems.
We hope that, by organising and getting strongly involved in these two initiatives, we can raise awareness within our community, and ultimately create a group of researchers interested in analysing and solving potential issues associated with fairness, accountability and transparency in multimedia.