The symbiosis of humans and machines is neither inherently good nor evil. We must embrace AI’s potential while remaining vigilant about its risks. Otherwise, we risk stumbling into a nightmarish new Stone Age, this time powered by robots, Valdai Club Chairman Andrey Bystritskiy writes.
We often discuss the future and the factors shaping it: the political ambitions of certain leaders, the volatility of elite sentiments or their capacity for unity, natural resources, the objective circumstances of international relations – often tied to the technological capabilities of nations – and, of course, military power, which is not merely about the fighting spirit of soldiers but also heavily depends on the quality of their weapons (indeed, better weapons might even make them braver). And this is just the tip of the iceberg.
Yet, all these factors rest on the foundation of remarkable, life-altering inventions humanity has created throughout history: iron, gunpowder, spaceships, genetic engineering, printing, and other equally transformative innovations. The advent of artificial intelligence (AI) is undoubtedly one of those pivotal inventions capable of reshaping humanity’s trajectory, along with its social and political structures.
It’s astonishing how the modern world simultaneously hosts breath-taking technological progress and brutal, barbaric violence. Yet, this duality is hardly surprising. It has always been this way throughout human history. At times, progress has transformed lives as radically as the bloodiest wars. The question is, what will AI bring us? Opinions on this vary widely.
Nearly all inventions, from the dawn of Homo sapiens, have not only advanced humanity as a whole but also fuelled intraspecific competition. In other words, they helped some individuals gain an edge over others. A person skilled with a stone axe could chop firewood faster, hunt more effectively, or even eliminate a rival in a fight over a mate. Thus, the earliest tools of labour immediately sowed the seeds of inequality (though, to be fair, it was relatively modest at first).
There’s no doubt that AI, like any significant invention, will play a colossal role in accelerating social competition. Konrad Lorenz once observed that intraspecific competition is far more consequential than interspecific competition. Consider the colourful fish near a coral reef: striped, red, long-tailed, or short-tailed, they peacefully graze on different parts of the coral. Yet, when two red fish occupy the same spot, conflict erupts. The reason? They compete for the same limited resource – a single coral bush can sustain only one.
For humans, the situation is both more complex and nuanced. On one hand, we are all fundamentally similar; on the other, complete equality is unattainable, and resources are perpetually scarce. Yet, humanity has built intricate societies that skilfully regulate this inevitable competition. However, progress and new inventions continually present fresh challenges.
For instance, the advent of automobiles introduced a new social hierarchy. Some drive luxury supercars, while others settle for modest vehicles. Yet, superior driving skills don’t confer significant social advantages – being a skilled driver isn’t a hallmark of success, unless you’re a Formula 1 racer like Lewis Hamilton or Fernando Alonso.
Similarly, computers gave rise to new professions, but career advancement in these fields wasn’t solely dependent on technical prowess. Being an excellent machine operator doesn’t qualify one to become a plant director, just as IT expertise alone doesn’t guarantee a top-tier position. Leadership roles are still shaped by traditional human traits: communication skills, persistence, and moral fortitude.
However, some inventions have dramatically intensified competition, reshaping entire societies.
For example, Cro-Magnons (essentially us) once coexisted with Neanderthals (partly us, but only to a small extent). Over time, Neanderthals vanished, with one theory suggesting that Cro-Magnons’ superior tool use and communication skills gave them the edge. While other explanations exist, the precise answer remains elusive – perhaps AI will one day provide clarity.
The discovery of iron revolutionized warfare, toppling empires and entrenching slavery through more efficient weapons and increased prisoners of war. Similarly, horses, stirrups, and chariots influenced social structures and political hierarchies.
In short, not all inventions are created equal. Some reshape society gradually, while others trigger rapid, radical changes.
For example, one point of view holds that much of the tragedy of the Khmer Rouge and Cambodia was connected with the fact that the country's highlanders, who still lived a semi-Stone Age existence, got their hands on firearms. They had no experience with such weapons, and their culture had developed no customs or concepts for using that kind of violence. You can, of course, kill many people with a club, but a machine gun is far more convenient. Cambodia's troubles cannot be explained by the arrival of firearms alone, yet the shooting was done with guns; without them, it would not have happened.
Inventions matter. Eric Schmidt, a key architect of the modern information age, once called the internet the world’s most dangerous and poorly controlled experiment in anarchy. Google CEO Sundar Pichai has noted that AI’s impact will surpass that of personal computers and mobile devices.
So, what large-scale consequences can we expect from AI?
I’m not referring to robot uprisings or sci-fi battles between humans and machines.
Consider someone working on an article or a dissertation, or simply analysing data. AI can help that person work much faster; in many cases, it can process as much data as an ordinary researcher studies in an entire lifetime. De facto, AI can become an intellectual extension of a person, just as a gun lets you kill from kilometres away, something mere muscle power could never do.
Moreover, AI will very soon become indispensable on the battlefield. No officer, however talented, can single-handedly manage a battle involving thousands or even tens of thousands of pieces of equipment and hundreds of thousands or even millions of people, or, quite possibly, combat robots. An AI trained by such an officer, however, can provide invaluable assistance, taking all the data into account within a minute or a second, depending only on the speed of the machine in his hands.
But my focus isn’t on technicalities. I’m concerned about AI’s potential to exacerbate inequality.
First, there may be people who master AI far better than others and grow into a kind of kinship with it. It is as if they receive a new brain and new hands, turning into people of the highest class; others, almost inevitably, will lag behind. De facto, a transnational community of exceptionally gifted intellectuals will emerge. The drama of the Neanderthals, displaced by the Cro-Magnons, may repeat itself: those who lag behind intellectually and cannot cope with exploiting AI may become second-class people in the most literal sense of the word.
Second, countries capable of confidently creating and using AI, and of assembling teams of specialists, will gain an advantage over less developed countries that no nuclear weapon can offer. Nuclear weapons are very inconvenient to use: cumbersome, with terrible consequences. AI, by contrast, can manipulate others so that no one even notices. Of course, it will not be the AI doing the manipulating, but the person who has mastered it.
Third, and more broadly, great trouble can be provoked by altering the masses' perception of reality. More precisely, some people will gain the opportunity to shape the picture of the world in the eyes of others, something more impressive than any "Matrix". Moreover, this threat is already materialising: many people genuinely cannot distinguish lies from truth, or reliable information from unreliable. For now, loyal and servile journalists suffice for such a simple operation on public opinion; look at how deftly facts are managed on many channels, CNN for example. The active introduction of AI into the information and communication environment will allow the creation of entire fictitious or false worlds from which it will be extremely difficult to escape.
In general, as AI becomes an everyday tool, many risks arise. Let me repeat: the risks lie with the people who use AI, not with AI itself, which poses no particular danger to us in the foreseeable future. On the contrary, it will help.
So, it seems to me, humanity should take precautions. There should probably be two types of control: on the one hand, certain limits built into the AI architecture itself from the outset; on the other, some degree of oversight of the owners of these AIs, that is, of people.
It is clear that such measures are unlikely in the near future, given the lack of international consensus. Instead, we should brace for AI’s brutal application in warfare.
Dario Amodei, the CEO of Anthropic, likened AI's capabilities to the emergence of a new, borderless state populated by highly intelligent individuals. Such a force can direct its aspirations anywhere, and given human nature, it is clear where: towards new dominance and new inequality.
While I don’t overestimate the likelihood of such a scenario – human resilience, in virtue and vice alike, is formidable – a new danger, a new threat, still exists. We could see the formation of a new type of society, stratified neither by biological characteristics, such as who is stronger or taller, nor by external parameters: wealth, status, fame, the possession of more powerful weapons or natural aggressiveness. Nor will greed and the foolish urge to assert superiority over others produce this new social order. It may instead rest on the unification of ambitious, educated intellectuals who have decided to pursue a new form of self-organisation. What will come of this, God knows. We'll see.
In conclusion, the symbiosis of humans and machines is neither inherently good nor evil. We must embrace AI’s potential while remaining vigilant about its risks. Otherwise, we risk stumbling into a nightmarish new Stone Age, this time powered by robots.