The Worker and the Marionette: What Marxism Has to Say about Artificial Intelligence
It is questionable if all the mechanical inventions yet made have lightened the day’s toil of any human being.
John Stuart Mill, Principles of Political Economy
The instrument of labor, when it takes the form of a machine, immediately becomes a competitor of the workman himself. The self-expansion of capital by means of machinery is thenceforward directly proportional to the number of the workpeople, whose means of livelihood have been destroyed by that machinery.
Karl Marx, Capital
In 1826, the German showman and inventor Johann Nepomuk Mälzel brought his astonishing mechanized chess player on tour to the United States. The automaton, originally built in 1770 by the Hungarian inventor Baron Wolfgang von Kempelen, entertained and amazed audiences all over the East Coast, but one spectator who was not fooled was Edgar Allan Poe. Poe’s 1836 essay “Maelzel’s Chess Player” brilliantly debunked the machine, showing that the Turk, as it was called, could not possibly be a purely mechanical chess grandmaster. If the machine worked algorithmically, Poe reasoned, like Charles Babbage’s recent difference engine, it would have perfectly formalized the game of chess and would have been unbeatable, which it was not. At the same time, the Turk’s clunky and repetitive motions made it a poor imitation of a fallible, organic human. What von Kempelen’s machine really did, Poe concluded (correctly, as it turns out), was simulate what its audience imagined a chess-playing machine would act like. The mechanical chess player was indeed a sophisticated machine, but it was sophisticated as a piece of stagecraft: its genius lay in concealing the real chess player inside, in a way that obscured, rather than simulated, the real human labor running the machine.
Edgar Allan Poe was no Marxist, but his debunking of von Kempelen’s chess player has something in common with the Marxist conception of fetishism. Under fetishism, the great masses of working people, when confronted by the commodities that they themselves have produced, misattribute the source of human productivity to the genius of the ruling class, to an abstraction called the “economy,” or to the commodity-form itself. When workers see the signs of their own Promethean power, they misperceive them as signs of their helplessness. Because everyday social life is fragmented and organized for the production of surplus-value—not the benefit of the direct producers—people under capitalism see themselves as passive objects, not active subjects of history. We see something like this in the uncritical language about so-called artificial intelligence (AI). Capital presents us with AI as a kind of machine magic, a super-brain. But it is nothing of the kind. Artificial intelligence does not think, does not innovate, does not create, does not labor. Artificial intelligence does nothing other than sort and reorganize the real intellectual work of living, breathing people. True, the results of this electronic legerdemain can be astonishing; but our astonishment comes from the fact that the algorithm resides out of sight—either on a hard drive or, increasingly, in the so-called cloud—seemingly outside of human concerns and toil. In fact, under capitalism, AI is merely a form of stagecraft for disguising the theft of real thought, creativity, and insight from real human beings.
Putting things more bluntly, physicist Dan McQuillan argues that AI is a bullshit generator. AI produces nothing. It simply reshuffles the deck of everything that has already been produced and then, using data generated by users, picks a hand that’s statistically the most likely to win. The algorithm knows nothing about poker, card games, or winning. It just feeds back to users what they already (collectively) know, or more often than not, what they think they know. McQuillan writes:
If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it’s talking about because it has no idea about anything at all. It’s more of a bullshitter than the most egregious egoist you’ll ever meet, producing baseless assertions with unfailing confidence because that’s what it’s designed to do. It’s a bonus for the parent corporation when journalists and academics respond by generating acres of breathless coverage, which works as PR even when expressing concerns about the end of human creativity.
McQuillan’s point is that, like von Kempelen’s chess player, AI is a carnival sideshow, albeit a sophisticated one. Of course, all of this filtering and sorting requires an enormous input of resources, but it is not thinking in any sense. Thinking is happening, but the AI is not doing it; rather, the AI is an interface that makes it seem like capital is thinking, when in fact only workers think.
Worse still, as stagecraft, AI is not only deceptive but deeply anti-human. It falsely (and clumsily) emulates human thinking, and in doing so it directly enacts the ideology that real human beings are themselves merely machines. If algorithms think like us, then our thinking must in some sense be algorithmic. But if humans are algorithmic like software, then there is no warrant for the fundamental ethical insight that human beings should be ends in themselves, and never mere means for other human beings. Calling this a fascist worldview would be unfair to fascism. Scholar David Golumbia writes that we should really see this as nihilism, the complete abandonment of human values and principles:
The idea that “intelligence” (however it is measured) is the only or the fundamental feature of consciousness is one with a profoundly conservative pedigree, associated with eugenics and other forms of racism. The people who insist that these are identical despite the huge amount of evidence that they aren’t seem committed to a nihilist philosophy: humans are just machines.
To be sure, this way of thinking is nothing new. Matteo Pasquinelli traces it back at least to Charles Babbage himself and, more recently, to Friedrich Hayek, the godfather of neoliberalism. Pasquinelli points out that for Hayek, human thought was merely a matter of pattern recognition, as it is for today’s deep-learning models. Anticipating today’s massively parallel computing techniques, Hayek believed that while individual human brains may recognize patterns faultily, markets could, through the sheer volume of individual decisions, correct for the errors of individuals. Markets were, for Hayek, a kind of infallible super-individual whose distributed cognition could approximate perfect rationality. From a Marxist point of view, Hayek was partially right; we ourselves argue that economics is indeed a matter of social, collective, and supra-individual decision making. However, Hayek’s glorification of markets was deeply flawed: it assumed that market rationality was itself suprahistorical and inherent in human nature. Like today’s algorithmic fetishists, Hayek misrecognized market activity as the locus of human rationality and cooperation. According to Pasquinelli,
What escapes Hayek’s assessment is that this decentralized and unconscious rationality is not only of markets, but can be found in other forms of human organization and cooperation. Karl Marx, for example, recognized the division of labor in workshops and manufactories as a form of spontaneous and unconscious rationality.
While Hayek limited the social function of cognition to the acts of buying and selling, Marx saw collective reasoning in the active process of giving shape to the raw materials of the earth on a large scale: in other words, in labor. Hayek assumed as a premise the very thing he set out to explain, namely that human beings are not fundamentally builders, caregivers, architects, storytellers, poets, carpenters, metallurgists, or scientists, but simple rational maximizers. If we are individually algorithmic machines, then it makes sense that society itself should be an even bigger, more powerful algorithmic machine.
Again, Marxists do not deny that cognition is a supra-individual process. In fact, the work of the great Soviet philosopher Evald Ilyenkov in many ways anticipated the currently in-vogue hypothesis of distributed cognition, following directly on the insights of Friedrich Engels. But markets do not think like super-individuals, because markets are characterized by the class structure of capitalism, in which some individuals participate as capitalists and others (the vast majority) as owners of nothing but their own labor-power. This is an inherent contradiction of capital: capital depends on the creativity, originality, and sociability of the proletariat for the extraction of profit, while the forces of competition require the technical standardization, deskilling, and mindless regimentation of more and more branches of industry. Artificial intelligence, in turn, only appears to mimic human thinking because it does, in fact, directly replicate the kind of thinking increasingly rewarded by capital—dull, vapid, reductive, repetitious, mechanical thinking. Artificial intelligence can never think like a human being, but Marxists argue that in a socialist society, human beings will be free to think in a fully human way for the first time. Every hitherto existing society has had a special class of thinkers, a social stratum distinct from the average run of humanity. In every other society, the many have had to diaper the babies, make the soup, weld the steel, brew the espresso, deliver the mail, and tune up the cars, while the few have had the privilege of actively participating in art, philosophy, religion, and all the other uniquely human undertakings. Under capitalism, the worker is both a marionette, like von Kempelen’s chess player, and the living heart of an inhuman system. Under socialism, workers will no longer labor to build technofetishist dreams for capital. Humanity will be democratically in control, machines will serve, and no one will mistake an algorithm for a thinking human being.