Global Alternatives 2024
Ethics of Artificial Intelligence as a Focal Point for International Competition of Cultures

The very need for an international discussion of the ethical aspects of AI, even though negotiations on nuclear technologies took decades to produce results, testifies to a significant difference between the two. The specifics of these technologies, machine learning for example, presuppose the use of data arrays: that is, what has been created and is constantly being created by people, loaded into and continually added to computing systems, write Natalya Pomozova and Nikolay Litvak.

The rapid development of a whole family of digital technologies, united by the concept of artificial intelligence (AI), is increasingly shifting from the theoretical and especially the technological sphere to the area of ethics. Scientists, engineers and programmers face complex tasks, especially with regard to so-called Artificial General Intelligence (AGI), which is capable not only of performing individual functions of human thinking, but of emulating it as a whole to a large extent. Even with existing systems, however, their use by states and organisations, as well as by private individuals, has already raised a whole range of ethical problems. Moreover, the seriousness and scale of these problems have made it necessary to develop ethical regulation of AI at the state level, as well as through interstate policy, creating a new area of international competition. This is evidenced by the fact that many countries have adopted national strategies and plans for the development of AI, most of which attempt to introduce ethical principles into its development and use.

Despite regular international events, involving various countries, devoted to the problems of developing these technologies, their participants have not yet been able to adopt a single binding transnational document. There are several reasons for this. First, in the context of growing conflict in modern international relations, almost any restriction on research or on the introduction of new technologies and equipment can create new vulnerabilities for states. This is especially true of artificial intelligence, which has a literally mesmerising effect on many minds, including those of politicians and the military, owing to the multiplying proposals and plans for its use. Many of these are not merely futuristic, but already carry real military potential (primarily autonomous combat robots, but also the development of information weapons).

Second, AI technologies are fully understood by only a very small number of people, and are increasingly diverse and difficult to verify. This has, incidentally, led to more or less productive movement towards perhaps the first consensus in this area: labelling products created by AI. It is, however, unlikely to eliminate the first issue mentioned above, if only because everything developed for military purposes will be kept secret.

Third, businesses have made great efforts, and invested enormous amounts of money, to develop AI. The largest players in the technology sector, and many others, are in fierce competition with each other, and such rivalries, like those between nations, can be lost. At the same time, in the USA there are entrepreneurs who were among the first to approach the authorities and urge state regulation of the industry.

This aspect is important for the second party in any business process: workers. Changes in the labour market have already begun, and it is increasingly obvious that large-scale retraining of personnel is coming, as well as the adaptation of the entire educational system to AI. Finally, attention to the use of AI from human rights organisations and societies in almost all countries is growing rapidly, due to such risks and threats as the use of these new technologies to create and disseminate false information and unreliable data (including in vital industries such as medicine), as well as automated surveillance, facial recognition systems, and so on. This is not to mention the potential emergence of AGI, which has prompted widespread fears of catastrophic scenarios.

Currently, the leading positions in the field of AI (personnel, computing power, publications in scientific journals, the level of digitalisation of the economy) are occupied by the United States, which China is approaching in many respects. The EU and Russia (which the West has been studiously ignoring in recent years) have also made important advances. However, ethics is not a simple derivative of a state's technological or economic power, such that the strongest could set the tone for regulating AI in the international arena, still less impose its own rules. As the world moves toward multipolarity, an increasing number of players are striving for independent positions on key issues, including ethics.

Thus, since 2017, China has been adopting documents that comprehensively regulate the development and implementation of AI in various areas. The first of these was "New Generation Artificial Intelligence", which contains more than 80 goals to be achieved by 2030.

In 2021, the “Ethical Code of New Generation AI” appeared, formulating six principles: improving human well-being, promoting fairness, ensuring data security, governance and reliability, responsibility, and improving ethical literacy.

The European Union began conceptualising the regulation of artificial intelligence in 2018. An interim result of this process is the Artificial Intelligence Act, approved by MEPs in December 2023, whose coordination proved difficult given the diverse interests of the participants, representing 27 countries.

All Chinese and European documents on AI include a section on ethics, addressing issues of security, trust and transparency. While similar in content at first glance, a more detailed analysis reveals fundamental differences in the value approaches dominant in China and the EU. Among the distinctive features of the European concepts is heightened attention to inclusiveness and non-discrimination: the involvement of more women and people of different origins, including those with disabilities, in the development and implementation of AI. This politicised approach echoes the principle of "positive discrimination" practised in the United States and, in addition to the risk of lowering the qualifications of personnel, carries the risk of generating ideologically distorted data.

The very need for an international discussion of the ethical aspects of AI, even though negotiations on nuclear technologies took decades to produce results, testifies to a significant difference between the two. AI ethics directly affects the interests of protagonists at all levels: states, societies, businesses and other organisations, as well as individuals. The specifics of these technologies, machine learning for example, presuppose the use of data arrays: what has been created and is constantly being created by people, loaded into and continually added to computing systems. This means that competition is moving into the field of ethics and, more broadly, into the field of values and culture as a whole. After all, when the same question is put to systems built on different or differently selected data, that is, data generated in a particular ethical and cultural environment, the systems give different answers. Who will take the leading position in this competition will probably become known in the near future.

Views expressed are of individual Members and Contributors, rather than the Club's, unless explicitly stated otherwise.