Beyond human: deep learning, explainability and representation
This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle the possibility of ‘re-presenting’ the algorithmic procedures of feature extraction and feature learning to the human mind. The article thus mobilises the notion of incommensurability (originally developed in the philosophy of science) to address explainability as a communicational and representational issue, which challenges phenomenological and existential modes of comparison between human and algorithmic ‘thinking’ operations.
History
Publication status
- Published
File Version
- Published version
Journal
- Theory, Culture and Society
ISSN
- 0263-2764
Publisher
- SAGE Publications
External DOI
Issue
- 7-8
Volume
- 38
Page range
- 55-77
Department affiliated with
- Media and Film Publications
Research groups affiliated with
- Sussex Humanities Lab Publications
Full text available
- Yes
Peer reviewed?
- Yes
Legacy Posted Date
- 2020-11-03
First Open Access (FOA) Date
- 2020-11-03
First Compliant Deposit (FCD) Date
- 2020-11-03
Usage metrics
Categories
- No categories selected
Keywords
Licence