David Berry - DH Speaker Series 2020-2021
- David Berry
- Thursday, November 19, 2020
- 4:30 PM - 5:30 PM
Register via Zoom: https://mit.zoom.us/meeting/register/tJYqd-yrqDMjH9LT6d2VZeoDd4xBMq1byHWl
Presented by: David M. Berry, Professor of Digital Humanities, School of Media, Film and Music at the University of Sussex
The ubiquity of digital technologies in citizens' lives marks a major qualitative shift in which automated decisions taken by algorithms deeply affect the lived experience of ordinary people. This has created many new social experiences and improved contemporary life. However, a lack of public understanding of how algorithms work also makes them a source of distrust, especially concerning the way in which they can be used to create frames or channels for social and individual behaviour. This public concern has been magnified by election hacking, social media disinformation, data extractivism, and a sense that Silicon Valley companies are out of control. The wide adoption of algorithms into so many aspects of people's lives, often without public debate, has meant that algorithms are increasingly seen as mysterious and opaque, when they are not seen as inequitable or biased (Berry 2014). Until recently it has been difficult to challenge algorithms or to question their functioning, especially given the wide acceptance that software's inner workings were incomprehensible, proprietary or secret (cf. Berry 2008 regarding open source). Asking why an algorithm did what it did was often not thought particularly interesting outside of a strictly programming context. As a result, a widening explanatory gap has opened in relation to understanding algorithms and their effects on people's lived experiences.
This paper argues for a new research programme in the Digital Humanities that examines this gap and develops a set of theoretical responses, tools and archives to address the explanatory deficit that modern societies face through their reliance on information technologies. The challenge posed by new forms of social obscurity created by the implementation of technical systems is heightened by the machine learning systems that have emerged in the past decade. As a result, a new explanatory demand has crystallized in an important critique of computational opaqueness and in new forms of technical transparency called "explainability." We see this, for example, in challenges to facial recognition technologies and in public unease with algorithmic judicial systems and other automated decision systems. Explainability is a key new area of research within the fields of artificial intelligence and machine learning, and it requires a computational system to be able to provide an explanation for a decision it has made. The notion of explainability has emerged as a term that captures precisely this ethical and interpretative lacuna, and it often seeks to close the gap by means of explanatory responses generated by the technology itself. However, I argue that the idea of a technological response to an interpretability problem is doomed to failure as long as explainability is understood through such narrow technical criteria. In this paper, therefore, I seek to widen its applicability by calling on the Digital Humanities to contribute to this research programme in explainability and, in doing so, to create connections to ideas of explanatory publics, public humanities and digital literacies.