Human Replacement: How Artificial Intelligence Became a Political Battleground
As artificial intelligence (AI) continues to develop, not only does the list of tasks that robots perform better than humans grow, but so does the list of troubling questions. Will people lose their jobs, and if so, who will be affected first? Will neural networks embody the worst aspects of humanity? Is a multicultural AI possible? Viktor Vakhshtayn, PhD in Sociology and Dean of the Faculty of Philosophy and Sociology at the Institute of Social Sciences, RANEPA, discusses the main narratives of human replacement—forced idleness, the relationship between robots and the state, and cultural techno-opportunism.
Old Debates and New Agendas
Some topics in the public sphere have been debated for so long that the arguments are well-worn: same-sex marriage, climate change, institutional discrimination, and so on. The lines are drawn, and opposing camps—right and left, progressives and conservatives, religious and atheists, social justice advocates and free market supporters—trade familiar arguments.
But things get more interesting with so-called new agendas—emerging issues where positions are not yet established. Here, it’s a Hobbesian war of all against all, and traditional rhetorical strategies often fail. One such agenda is “artificial intelligence and the developing world.”
The Rise of the Human Replacement Narrative
In 2016, the World Economic Forum published a report, “The Future of Jobs: Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution.” After surveying HR specialists from 371 international corporations, researchers concluded that by 2020, new technologies would eliminate about 7 million jobs. Three years earlier, a McKinsey report predicted even more dramatic numbers: between 110 and 140 million jobs would be replaced by well-trained algorithms in the coming decade. In 2017, Carl Frey and Michael Osborne’s influential study, “The Future of Employment,” claimed that 47% of U.S. jobs were at high risk—AI could do these jobs better and cheaper.
This gave rise to one of the main narratives of the 2010s: “Robots will take your job. And they’re already here.” In short: “The main purpose of automation is to replace human labor with machines. Modern history is a series of industrial revolutions. Just as factories killed off cottage industries, AI will push ‘mental labor proletarians’ out of the market. Only the highest- and lowest-paid workers will keep their jobs—the former because their work can’t yet be automated, the latter because it’s not yet profitable to do so.”
The State vs. Robots
These arguments (known as “framing moves” in political analysis) have two main consequences. First, a crusade against the coming “forced idleness.” Last year, the European Parliament held heated debates on “Civil Law Rules on Robotics.” Julie Ward, a member of the Progressive Alliance, summed up her faction’s position: “Fighting automation is pointless. Progress is inevitable. We need to think about what to do with the millions who will find themselves in ‘forced leisure.’” This brought the idea of universal basic income back into the spotlight.
Second, where human replacement is actually happening (albeit more slowly than the World Economic Forum predicted), paternalistic attitudes and technophobia are on the rise. According to a 2017 Pew Research Center survey, 72% of American adults worry that “in the future, robots and computers will do most of the work currently done by humans,” and a further 67% fear that “an algorithm will be developed to hire and evaluate workers.” People expect the state to pursue “reasonable protectionism”: evil corporations invent new technologies to take away our jobs, and only a strong government can stop them. In this context, Bill Gates’s proposal to tax robot labor is especially interesting: the additional revenue from new technologies, he argues, should be invested in retraining those who lose their jobs to technological progress.
In Russia, however, the situation is the opposite. Here, distrust of the state and trust in technology go hand in hand. According to our “Eurobarometer in Russia” study, from 2016 to 2018, trust in the courts fell by 8%—while support for the idea of a robot judge rose by about the same amount. In the U.S., citizens fear being replaced by robots and hope the state will protect them. In Russia, citizens hope robots will replace the state.
Who Will Be Replaced First?
Let’s return to the human replacement narrative. The linear story of industrial revolutions misses one key point: replacement has already happened. Long before AI’s global rise, the 1970s saw a wave of layoffs in developed countries as production moved to developing nations. This created a new global division of labor, which authors like Alan Blinder and John Urry called the “unnoticed industrial revolution.” There’s a direct link between the recent “offshoring” of jobs (replacing some people with others who work for less) and the coming replacement of people with non-humans.
The human replacement narrative frames automation, first, as a problem for developed countries—hence the war on idleness and the European Parliament’s question, “What will we do with people who lose their jobs?” Second, as a problem for the lower middle class: the highest- and lowest-paid workers, the argument goes, won’t be affected. But the premise that “it’s not profitable to replace low-paid workers” is wrong. The first to be hit will be those on whose labor employers have already economized once.
Take the telephone survey industry. Most call centers specializing in sociological data collection are located outside Moscow: cheaper phone service and computerization (CATI systems and the like) made it possible to outsource this “production” to the regions and neighboring countries. If the developers of a new CATI system—a robot interviewer—manage to build a working prototype, major research centers will line up to replace their regional contractors with robots, which not only never falsify data but also speak without a regional accent. Of course, if the new robot can also process data and even (hopefully soon!) write standardized analytical reports or—better yet!—columns for business newspapers, a few Moscow analysts will lose their jobs too. But the first to go will be the many call center employees. First, clients saved money by outsourcing data collection; now, they’ll save on taxes.
Cultural Techno-Opportunism
Here, a split emerges between moderate progressives and anti-globalists. Both see human replacement as a threat and a risk of mass unemployment. But progressives believe in the Frey-Osborne curve, ignore the “global context,” and focus on issues of forced leisure and fair redistribution of income from new “inhuman” economies. Anti-globalists see human replacement in the context of global inequality and the offshoring of the 1970s–80s, viewing AI as a new pillar of the “neoliberal world order.” The key difference is in who they see as the main victim of the new technological revolution.
This is where a third, more philosophically nuanced narrative appears: cultural techno-opportunism. The techno-opportunist addresses technophobes in both camps—progressives and anti-globalists—with a hint of superiority: “You focus on consequences and miss the causes; you see threats and ignore opportunities. For you, new technology is something external to society, changing it from the outside. In reality, AI is a reflection of social relations, cultural stereotypes, economic logics, and Western values.”
Our task is not to resist the advance of AI like outdated Luddites, nor to repeat tired critiques of neoliberalism in a new technophobic form. To paraphrase Emmanuel Mounier: “Our task is to wrest technology from the hands of the global bourgeoisie.” We must seize not only the political agenda around AI, but AI itself.
How to Seize the AI Agenda?
The cultural techno-opportunist has two answers. First, we need to learn the language of the “enemy” and start offering solutions in their terms. For example, venture capitalist and technocrat Kai-Fu Lee writes: “AI runs on data, and this dependence leads to a self-reinforcing cycle of consolidation in (new) industries: the more data you have, the better your product; the better your product, the more users you have; the more users, the more data you get.” So why not involve refugees and other marginalized groups from the developing world—potential victims of the technological revolution and current victims of the neoliberal order—in data collection? This would allow them to earn money from major AI corporations. For instance, the company REFUNITE has launched an app that lets refugees from South Sudan and Congo earn money by training neural networks to recognize images. Redistribution in action.
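Lee’s flywheel can be made concrete with a toy simulation. Everything below is an illustrative assumption of mine, not data from Lee: two products compete, each firm’s “quality” is proportional to its accumulated data, and users gravitate disproportionately toward the better product (modeled with a squared-share rule). The point is only to show how a small initial data advantage compounds into consolidation.

```python
# Toy model of the data flywheel Kai-Fu Lee describes: more data -> better
# product -> more users -> more data. The parameters and the squared-share
# rule are illustrative assumptions, not empirical estimates.

def simulate_flywheel(data_a=1.0, data_b=0.5, steps=20, new_data_per_step=1.0):
    """Return the final data stocks of two competing firms.

    Each round, the period's new data is split by squared data shares --
    an assumed amplification standing in for "the better product attracts
    disproportionately more users".
    """
    for _ in range(steps):
        qa, qb = data_a ** 2, data_b ** 2      # amplified attractiveness
        share_a = qa / (qa + qb)               # leader's share of new users
        data_a += new_data_per_step * share_a
        data_b += new_data_per_step * (1.0 - share_a)
    return data_a, data_b

a, b = simulate_flywheel()
print(f"leader's share of all data: {a / (a + b):.2f}")  # well past the initial 0.67
```

Under these assumptions the leader’s share climbs from two-thirds toward winner-take-most—precisely the consolidation dynamic the techno-opportunist wants to redirect, for instance by routing paid data-labeling work (as in REFUNITE’s app) to those the current order excludes.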
The second answer is even more interesting. The techno-opportunist agrees: AI in its current form embodies dry Western rationality, individualism and egoism, male chauvinism, and globalist universalism. It’s insensitive to issues like discrimination against women, cultural diversity, environmental threats, and non-Western ethics. When Germany developed an ethical code for self-driving cars, 14 experts—engineers, programmers, philosophers, and even theologians—were involved. But they were all Western! Even when South Korea developed an ethical code for robots, it was based not on Korean traditions but on Asimov’s Three Laws of Robotics. But what’s stopping others from joining in the development and training of AI?
What’s stopping us from making AI more culturally sensitive, politically aware, and socially engaged? Above all—our own technophobia and reluctance to understand technological details!
I won’t express my own opinion on the emerging agenda of “cultural techno-opportunism.” It leaves me with mixed feelings (somewhere between disgust and admiration). I’ll simply note the birth of this new narrative for further study. I’ll offer just one hypothesis: the success of techno-opportunism depends less on its ability to convince traditional left-wing technophobes of the need to seize AI, and more on its ability to build alliances with AI developers. A separate question: can techno-opportunists, with their cultural relativism, fit into the agenda of “emotional artificial intelligence” research?