Human Replacement: How Artificial Intelligence Became a Political Battleground
As artificial intelligence (AI) continues to develop, not only does the list of tasks that robots perform better than humans grow, but so does the list of troubling questions. Will people lose their jobs, and if so, who will be affected first? Will neural networks embody the worst aspects of humanity? Is a multicultural AI possible? Viktor Vakhshtayn, PhD in Sociology and Dean of the Faculty of Philosophy and Sociology at the Institute of Social Sciences, RANEPA, discusses the main narratives of human replacement: forced idleness, the relationship between robots and the state, and cultural techno-opportunism.
Old Debates and New Agendas
Some topics in the public sphere have been debated for so long that the arguments are well-worn: same-sex marriage, climate change, institutional discrimination, and so on. The lines are drawn, and opposing camps (right and left, progressives and conservatives, religious and atheist, social justice advocates and free-market supporters) trade familiar arguments.
But things get more interesting with so-called new agendas: emerging issues where positions are not yet established. Here, it's a Hobbesian war of all against all, and traditional rhetorical strategies often fail. One such agenda is "artificial intelligence and the developing world."
The Rise of the Human Replacement Narrative
In 2016, the World Economic Forum published a report, "The Future of Jobs: Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution." After surveying HR specialists from 371 international corporations, researchers concluded that by 2020, new technologies would eliminate about 7 million jobs. Three years earlier, a McKinsey report predicted even more dramatic numbers: between 110 and 140 million jobs would be replaced by well-trained algorithms in the coming decade. In 2017, Carl Frey and Michael Osborne's influential study, "The Future of Employment," claimed that 47% of U.S. jobs were at high risk: AI could do these jobs better and cheaper.
This gave rise to one of the main narratives of the 2010s: "Robots will take your job. And they're already here." In short: "The main purpose of automation is to replace human labor with machines. Modern history is a series of industrial revolutions. Just as factories killed off cottage industries, AI will push the 'mental labor proletarians' out of the market. Only the highest- and lowest-paid workers will keep their jobs: the former because their work can't yet be automated, the latter because it's not yet profitable to do so."
The State vs. Robots
These arguments (known as "framing moves" in political analysis) have two main consequences. First, a crusade against the coming "forced idleness." Last year, the European Parliament held heated debates on "Civil Law Rules on Robotics." Julie Ward, a member of the Progressive Alliance, summed up her faction's position: "Fighting automation is pointless. Progress is inevitable. We need to think about what to do with the millions who will find themselves in 'forced leisure.'" This brought the idea of universal basic income back into the spotlight.
Second, where human replacement is actually happening (albeit more slowly than the World Economic Forum predicted), paternalistic attitudes and technophobia are on the rise. According to a 2017 Pew Research Center survey, 72% of American adults worry that "in the future, robots and computers will do most of the work currently done by humans." Another 67% fear that "an algorithm will be developed to hire and evaluate workers." People expect the state to pursue "reasonable protectionism": evil corporations invent new technologies to take away our jobs, and only a strong government can stop them. In this context, Bill Gates's proposal to tax robot labor is especially interesting: the additional revenue from new technologies should be invested in retraining those who lose their jobs to technological progress.
In Russia, however, the situation is the opposite. Here, distrust of the state and trust in technology go hand in hand. According to our "Eurobarometer in Russia" study, from 2016 to 2018, trust in the courts fell by 8%, while support for the idea of a robot judge rose by about the same amount. In the U.S., citizens fear being replaced by robots and hope the state will protect them. In Russia, citizens hope robots will replace the state.
Who Will Be Replaced First?
Let's return to the human replacement narrative. The linear story of industrial revolutions misses one key point: replacement has already happened. Long before AI's global rise, the 1970s saw a wave of layoffs in developed countries as production moved to developing nations. This created a new global division of labor, which authors like Alan Blinder and John Urry called the "unnoticed industrial revolution." There's a direct link between the recent "offshoring" of jobs (replacing some people with others who work for less) and the coming replacement of people with non-humans.
The human replacement narrative frames automation as, first, a problem for developed countries, hence the war on idleness and the European Parliament's question, "What will we do with people who lose their jobs?" Second, it's seen as a problem for the lower middle class. The argument goes that the highest- and lowest-paid workers won't be affected. But the idea that "it's not profitable to replace low-paid workers" is wrong. The first to be hit will be those who have already been "saved on" once.
Take the telephone survey industry. Most call centers specializing in sociological data collection are located outside Moscow. Cheaper phone service and computerization (CATI systems and the like) allowed this "production" to be outsourced to the regions and nearby countries. If the developers of a new CATI system, a robot interviewer, manage to create a working prototype, major research centers will line up to replace their regional contractors with robots (which not only don't falsify data but also speak without strong dialects). Of course, if the new robot can also process data and even (hopefully soon!) write standardized analytical reports or, better yet, columns for business newspapers, a few Moscow analysts will lose their jobs too. But the first to go will be the many call center employees. First, clients saved money by outsourcing data collection; now, they'll save on taxes.
Cultural Techno-Opportunism
Here, a split emerges between moderate progressives and anti-globalists. Both see human replacement as a threat and a risk of mass unemployment. But progressives believe in the Frey-Osborne curve, ignore the "global context," and focus on issues of forced leisure and fair redistribution of income from the new "inhuman" economies. Anti-globalists see human replacement in the context of global inequality and the offshoring of the 1970s-80s, viewing AI as a new pillar of the "neoliberal world order." The key difference is in who they see as the main victim of the new technological revolution.
This is where a third, more philosophically nuanced narrative appears: cultural techno-opportunism. The techno-opportunist addresses the technophobes in both camps, progressives and anti-globalists alike, with a hint of superiority: "You focus on consequences and miss the causes; you see threats and ignore opportunities. For you, new technology is something external to society, changing it from the outside. In reality, AI is a reflection of social relations, cultural stereotypes, economic logics, and Western values."
Our task is not to resist the advance of AI like outdated Luddites, nor to repeat tired critiques of neoliberalism in a new technophobic form. To paraphrase Emmanuel Mounier: "Our task is to wrest technology from the hands of the global bourgeoisie." We must seize not only the political agenda around AI, but AI itself.
How to Seize the AI Agenda?
The cultural techno-opportunist has two answers. First, we need to learn the language of the "enemy" and start offering solutions in their terms. For example, venture capitalist and technocrat Kai-Fu Lee writes: "AI runs on data, and this dependence leads to a self-reinforcing cycle of consolidation in (new) industries: the more data you have, the better your product; the better your product, the more users you have; the more users, the more data you get." So why not involve refugees and other marginalized groups from the developing world (potential victims of the technological revolution and current victims of the neoliberal order) in data collection? This would allow them to earn money from the major AI corporations. For instance, the company REFUNITE has launched an app that lets refugees from South Sudan and Congo earn money by training neural networks to recognize images. Redistribution in action.
The second answer is even more interesting. The techno-opportunist agrees: AI in its current form embodies dry Western rationality, individualism and egoism, male chauvinism, and globalist universalism. It's insensitive to issues like discrimination against women, cultural diversity, environmental threats, and non-Western ethics. When Germany developed an ethical code for self-driving cars, 14 experts were involved: engineers, programmers, philosophers, and even theologians. But they were all Western! Even when South Korea developed an ethical code for robots, it was based not on Korean traditions but on Asimov's Three Laws of Robotics. But what's stopping others from joining in the development and training of AI?
What's stopping us from making AI more culturally sensitive, politically aware, and socially engaged? Above all, our own technophobia and reluctance to understand technological details!
I won't express my own opinion on the emerging agenda of "cultural techno-opportunism." It leaves me with mixed feelings (somewhere between disgust and admiration). I'll simply note the birth of this new narrative for further study. I'll offer just one hypothesis: the success of techno-opportunism depends less on its ability to convince traditional left-wing technophobes of the need to seize AI, and more on its ability to build alliances with AI developers. A separate question: can techno-opportunists, with their cultural relativism, fit into the agenda of "emotional artificial intelligence" research?