* Kate Crawford, Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021
* Erik J. Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, Harvard University Press, 2021
For those only remotely interested in artificial intelligence (AI), this type of intelligence may seem cryptic or even magical. All too often, its heavily slanted treatment in the mainstream media and by some companies does little to help us better understand a phenomenon whose impact on everyday life is constantly growing. Put briefly, the ongoing debates on AI focus more on its potential than on anything concrete, on the possible rather than the probable. The New York Times recently acknowledged this shortcoming. While those eager to embrace fiction or foresight might appreciate this approach, those seeking to keep abreast of actual developments will quickly go astray. The usual technological jargon and the most improbable predictions, besides being contradictory, make developing an informed opinion on AI impossible today.
Two recent and perfectly complementary publications avoid this pitfall and provide a comprehensive and realistic overview of where AI stands, where it is heading, and which ethical, social, environmental, and economic issues it involves. The first, Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence, is by Kate Crawford, a senior principal researcher at Microsoft Research. Her comprehensive account of AI relies on atlases, collections of maps and texts that permit “rereading the world, linking disparate pieces differently.” The second publication, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, is by Erik J. Larson, a computer scientist and tech entrepreneur. Seeking to sharpen our view of AI from a technical and technological perspective, Larson above all seeks to tackle its founding myth, which he considers an intellectual impasse.
Behind the code lies the reality
Kate Crawford’s Atlas of AI begins underground, in the lithium and rare-earth mines where the quintessential prerequisites of computing, and hence also of AI, are sourced. From the outset, Crawford thus materializes the ecological and social impact of AI, and more broadly of technology. She then addresses another, often invisible reality of AI: “We can – and should – speak instead of the hard physical labor of mine workers, the repetitive factory labor of the assembly line, the cybernetic labor in the cognitive sweatshops of outsourced programmers, the poorly paid crowdsourced labor of Mechanical Turks workers, and the unpaid immaterial work of everyday users. These are the places where we can see how planetary computation depends on the exploitation of human labor, all along the supply chain of extraction” (p. 69).
Crawford also analyzes the importance of data, another indispensable AI resource, at length: “By looking at the layers of training data that shape and inform AI models and algorithms, we can see that gathering and labelling data about the world is a social and political intervention, even as it masquerades as a purely technical one. The way data is understood, captured, classified, and named is fundamentally an act of world-making and containment. It has enormous ramifications for the way artificial intelligence works in the world and which communities are most affected” (p. 121).
Reading Crawford’s Atlas makes us appreciate her remarkable ability to render visible the usually invisible, to concretize what otherwise might seem ethereal or distant. By interweaving these diverse subjects, even if they have been discussed, albeit disparately, by the scientific community (on the notion of work, see, for example, Antonio A. Casilli’s Waiting for the Robots; on data, see Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; etc.), Crawford offers a truly comprehensive and innovative account of AI.
For Crawford, artificial intelligence is “neither artificial nor intelligent” (p. 8). Nevertheless, she argues that it must be considered in terms of power. She concludes her stimulating account by encouraging researchers and specialists to pursue this power-oriented line of inquiry: “AI systems are expressions of power that emerge from wider economic and political forces, created to increase profits and centralize control for those who wield them. But this is not how the story of artificial intelligence is typically told” (p. 211).
Rethinking how we talk about AI
Erik Larson’s The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do recounts the story of artificial intelligence (or rather how that story is told). As his title suggests, Larson argues that the story is a myth, more precisely a Promethean myth: “The myth is not that true AI is possible. As to that, the future of AI is a scientific unknown. The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time – that we have already embarked on the path that will lead to human-level AI, and then superintelligence. We have not. The path exists only in our imaginations” (p. 1). He considers the prevailing discourse, that we are inevitably heading toward a “true AI” or “superintelligence,” problematic: lacking a concrete basis, this discourse highlights our inability to completely understand how human intelligence works. Let alone how it might be applied to machines…
Larson begins his reflections by reviewing at length how intelligence has been “artificialized,” from Alan Turing and I. J. Good to the present. He also considers the current limitations of such systems: how to implement social, emotional, or situational intelligence in machines? How to make it all work? This pedagogical approach enables readers to rediscover the notions of inference, induction, deduction, and relevance… at the risk of sometimes getting lost.
For Larson, today’s race toward more data and more computational power will not make the emergence of any “augmented” AI possible: “Data are just observed facts, stored in computers for accessibility. And observed facts, no matter how much we analyze them, don’t get us to general understanding or intelligence” (p. 141). Larson goes even further: “Perhaps we could start with a frank acknowledgment that deep learning is a dead end, as is data-centric AI in general, no matter how many advertising dollars it might help bring into big tech’s coffers. We might also give further voice to a reality that increasing numbers of AI scientists themselves are now recognizing, if reluctantly: that, as with prior periods of great AI excitement, no one has the slightest clue how to build an artificial general intelligence” (p. 275). This idea, little heard in the tech ecosystem, pervades Larson’s thinking and at several junctures echoes Gary Marcus’s Rebooting AI: Building Artificial Intelligence We Can Trust.
More than a critique of the AI industry (even if some might read it as little else), The Myth of Artificial Intelligence refutes the false, yet frequently proclaimed promises about AI. For Larson, there is “nothing to be gained by indulging in the myth here: it can offer no solutions to our human condition except in the manifestly negative sense of discounting human potential and limiting future human possibility” (p. 280). How, then, to move forward? Achieving comprehensive or “true” AI “will require a major scientific breakthrough, and no one currently has the slightest idea what such a breakthrough would even look like, let alone the details of getting to it” (p. 2).
The advance Larson calls for is neither technical nor technological, but conceptual or theoretical: “It’s possible that […] we’re out of ideas. If so, the myth represents our final, unrecoverable turn away from human possibility – a darkly comforting fairy tale, a pretense that out of our ashes something else, something great and alive, must surely and inevitably arise” (p. 281).
Reading Crawford and Larson may well leave us perplexed. On the one hand, we will surely believe that the much-hailed apocalyptic future, where machines will govern humanity, is clearly not for tomorrow and is indeed science fiction. On the other, the vision of a future “augmented” by AI, benefiting everyone, is equally remote. And yet, as the cryptic becomes almost clear, and as the magical aura surrounding AI gradually evaporates, those reading Crawford and Larson will undoubtedly have a clearer vision and understanding of where artificial intelligence stands today. They will also have gained a realistic sense of matters, one denying neither the complexity nor the potential, perfectly real in fact, of AI in some fields (see, for example, here or here). A perspective, moreover, stripped of the mythology and vagueness that, as a rule, cling to AI. Well, at least until now.
This article has been translated into English by Mark Kyburz.