We are moving from control to co-dependency, where machines no longer simply execute commands, but generate solutions — something that has historically never been the task of technology. Anna Sytnik, an Associate Professor at St. Petersburg State University and Managing Director of the INPO “Kolaboratorija”, writes about the risks of dependence on artificial intelligence and the prospects of a technological redistribution of influence in the world. The article was prepared specially for the Valdai roundtable on the topic “Homo Perplexus: How to Stop Fearing and Learn to Love Change”.
The nature of modern AI technology is fundamentally different from all earlier human inventions that have influenced the course of world history in one way or another — be it the wheel, gunpowder, the printing press, or the splitting of the atom. AI will affect absolutely all spheres of human life. In this regard, a parallel with electricity suggests itself: at a certain point electricity became an invisible but critical infrastructure, and it still underlies everything. Today, AI is becoming the “new electricity” of the digital age.
Algorithms are already walking hand in hand with us. AI-based recommendation systems are present in our smartphones, in loan approvals, in route optimisation, and in making medical diagnoses — the list of examples is quite long. It is impossible to name a sphere of public life into which AI-based solutions have not been integrated.
The key difference between AI and electricity is that electrical technologies are not cognitive in nature: they transmit energy, but do not comprehend information. Advanced AI systems, however, are based on large language models; they operate with meanings and interpretations, and can engage in processes previously considered exclusively human — from text analysis to decision-making. Artificial intelligence is perhaps best thought of as the “artificial ability to reason intelligently”.
Humanity has always dreamed of reproducing its own kind: from the Homunculus in alchemical treatises to the Terminator in science fiction. In the coming years, we will see the culmination of humanity’s age-old desire to recreate itself. The latest advances in AI are based on the so-called human-in-the-loop feedback. But if we turn the engineering principle into a cultural and philosophical metaphor, then increasingly humans — and humanity more broadly — are being drawn into a loop of AI dependence.
We are moving from control to co-dependency, where machines no longer simply execute commands, but generate solutions — something that has historically never been the task of technology. The creation and selection of solutions have always remained the prerogative of human thinking, culture, and responsibility. Now, the very architecture of meaning and action is increasingly being formed by algorithms, going beyond the usual techno-rational context.
AI is, by its nature, a tool of optimisation. Owing to its mathematical structure, it strives to find the single “correct” solution, relying on vast data arrays and its capacity for constant self-learning. Or, as Henry Kissinger once said, “AI knows only one purpose: to win.” This gives AI systems a quasi-subjectivity — the ability not just to react, but to anticipate and to create the logic of action.
But despite the risks of dependence, AI will most likely remain just a tool in human hands. The prospect of the emergence of so-called AGI (Artificial General Intelligence) — “general” (or “strong”) artificial intelligence capable of completely replacing humans — remains vague. Although some experts at technology corporations engaged in the race for leadership flirt with investors by announcing that they know how to create AGI, other specialists who run AI departments call the very concept utopian and argue that it should not exist at all. The scientific community cannot yet give a clear answer about the limits of AI development, although it has successfully offered forecasts for the coming years.
Therefore, while AI will penetrate all spheres, it will not alter the logic of global competition — it will become another tool in the hands of conflicting actors, albeit an all-pervasive tool. This is precisely the main challenge at this stage.
Understanding current trends in the development of AI will allow us to maintain strategic awareness in the context of the increasing technological dependence of world politics.
The contours of changes we can expect to happen soon are already visible. The next stage after generative AI will be AI agents: autonomous assistant systems which are also based on LLMs. The current buzz about them promises to grow louder and louder this year, but in reality they are just a superstructure. We should watch where the development model itself is heading. It leads us to the next stage — to robots.
A more complex class of systems is needed for the development of robots: multimodal models. These are AI systems that can perceive and process several types of information at once: text, images, sound, video, and even physical signals. Unlike conventional large language models (LLMs), which work only with text, multimodal models can “look”, “listen”, “read”, and “understand” at the same time. Robots are taught the anatomy of action. Their training will require not only theoretical engineers, but also experts with real-world experience: mechanics, surgeons, drivers, agronomists, logisticians, and military personnel — people whose hands and skills will provide the basis for the creation of future machines.
These two stages of technological progress, between which we now find ourselves, differ significantly. Generative AI has become a tool in the hands of people who work with information. Robots will help people in work that requires physical activity. They will sort goods in warehouses, prepare food, assist doctors, patrol, and make deliveries. Thus, AI will finally penetrate all spheres of human life.
We must already ask ourselves about the structure of the emerging technological interdependence in the world, and therefore — the future global political landscape.
First, this concerns the nature of AI technologies themselves: will they be built on open-source solutions, available to and adaptable by various countries and companies, or will they end up under the control of a limited circle of states and corporations? We can already see open models catching up with closed ones within a delay of only a few months, reproducing their capabilities in an area previously considered prohibitively difficult for open source.
Second, the process of model commoditisation is important. That is, the transformation of AI systems from a unique competitive advantage into a standard, easily scalable, and interchangeable product. If basic language models become as widespread and accessible as browsers or operating systems are today, this will dramatically change the AI economy: the advantage will go not to monopolistic companies, but to those who adapt models to local tasks and contexts faster.
Finally, the labour market will become a key indicator and battlefield. AI changes requirements for competencies, creating demand for new forms of creative, engineering and management activities. The country or region that quickly trains specialists who are able not only to use AI, but also to design its application, will gain a long-term advantage.
The transformation of employment is inevitable. Before the industrial revolution, a person chose a profession and was stuck with it for life. With the advent of industrialisation, changing professions became possible and even necessary. Now, in the age of AI, this dynamic is intensifying: job switching will become more frequent, and resilience amid change will become a core competency. Experts who master the new tools will likely become even more effective than before. This shift will open up new horizons for those who can adapt.
***
A natural question arises: what can be done in the face of such rapid changes?
Continuing to reflect on technologies in purely theoretical terms, without grounding that reflection in the real challenges practitioners face on the AI “fronts”, risks sidestepping the most pressing issues, for several reasons.
First, since generative AI technologies have been in vogue for the past two years, many experts have focused on the so-called “nationality” of AI, or the sovereignty of the technology in terms of the information it generates. Those passionate about this issue quite rightly point out that national AI models (ChatGPT in America, DeepSeek in China, or GigaChat in Russia) must produce information in accordance with the policies and values of a given state. Therefore, in their opinion, the most important task is to create our own ethical filters and sovereign datasets in our own language, which will make our models “orthodox”, or at least free them from excessive political bias.
Beyond the fact that such additional settings slow down the development of large language models, there is a logical trap here that distracts us from the essence of the problem: “LLMs develop through language (national datasets and search engines), and language is, as the constructivists bequeathed, culture, identity, and history. This means that each large AI model is a digital mirror of the civilisation that gave birth to it. Therefore, we need to train the model so that it reflects our values as fully as possible.” This is important, but the market leaders in the escalating confrontation are not pursuing this goal.
The main goal of the largest developers of AI models at this stage is to attract as many users as possible around the world — that is, to create models that are universal for the whole world. Which AI application people use to perform their tasks — an American, Chinese, or Russian one — is determined at the everyday level by convenience, not ideology. The logic of markets sometimes encourages the Chinese to use ChatGPT, since it is better in some respects. Bans on DeepSeek in some parts of the world have not affected the popularity of the Chinese application. People use the “Chinese digital mirror” because it is simple and easy to use.
Participants in the second line of discussion about AI take a slightly different position, arguing that AI is hype and that forecasts about the speed of its implementation are greatly overestimated. A popular version of this position goes as follows: there is too much noise, AI will be implemented slowly, and we will have time to adapt.
These arguments are also not without merit, and for the most part they rest on the risk of job losses. In China, for example, the current situation does not favour the widespread replacement of human labour with machine labour: this is fraught with unemployment and the resulting social problems. Regulation is developing, especially in jurisdictions such as the EU and Japan, which is holding back the development of AI. All countries generally agree on the need for the safe implementation of AI and its responsible use. There are also countries and regions currently struggling with challenges of a different nature — their level of economic and social development does not allow them to pay attention to the technological sphere.
However, in this race, no one will wait for those who lag behind. It is worth paying attention not so much to the pace of AI implementation around the world as to the growing public-private partnerships in such areas as critical infrastructure, the military, and public administration. Leading states and IT giants are jointly engaged in the active implementation of the latest AI technologies. It is useless to reassure ourselves with the argument that “we have a lot of time to catch up.” The lag accumulates, and the longer we delay, the more difficult it will be to close.
You can argue about the time frame for the widespread implementation of AI: five, 15, or even 50 years. But let's remember that three years ago, few people understood what generative AI was. Yes, robots are in their infancy, but they are developing.
And finally, the third direction is the most dangerous: “Other countries' technologies are available, so why should we knowingly take part in a losing race?” This is a fairly common idea among the countries of the World Majority. Its supporters believe it is necessary to establish a stable niche rather than undermine the economy with unbearable investments in the AI sphere. In practice, this implies creating our own solutions on weaker models — for example, small, highly specialised LLMs, or large models trained on foreign data. This direction, in fact, subsumes the first, but is not focused only on nationality. Numerous practical techno-optimists explain that we can take other people's technologies and use them to create competitive AI agents, so everything is fine. This opinion is especially widespread in countries that have no breakthrough AI developments of their own. This is not surprising: these powers took the same position on nuclear technologies. But to hear such views in Russia — for which the possession of nuclear technologies was critically important for survival during the last century — is strange, to say the least.
The current stage of AI development in the world is characterised by the fact that technologies are shared. But this is done not out of the kindness of anyone's heart, but to capture data and markets and, finally, to further train the models themselves. It will not always be like this. The accumulation of data will end, but the confrontation will not. That is why the essence of sovereign technologies lies not in sovereign superstructures, but in infrastructure, hardware, brains, and constant, intensive development. When it comes to a real technological redistribution of influence in the world, no one will simply share their “nuclear weapons”.
Sovereign technologies cannot be achieved through nationalisation, a strategy of cautious implementation, and constant reliance on other people's solutions. These are all just small components of potential success. A breakthrough in AI depends entirely on people, and on the organisation of the process at all levels of technological development.