On the Depth, Transparency and Power of Today’s AI


Two years on from our last review of the state of the art in artificial intelligence, the gap has been widening between the seeming omnipotence of the “deep learning” neural network models offered by market leaders and society’s demand for “algorithmic transparency.” In this review, we try to probe this gap, discussing which trends and solutions could help resolve the problem and which could exacerbate it further.

First of all, what we know as strong, or general, AI (AGI) has become a well-established item on the global agenda. A team of Russian-speaking researchers and developers has published a book on the topic, offering a thorough analysis of the technology’s possible prospects, and for the last two years the Russian-speaking community of AGI developers has been holding open seminars on a weekly basis.

Consciousness. One of the key problems concerning AGI is the issue of consciousness, as was outlined in our earlier review. Controversy surrounds both the very possibility of imbuing artificial systems with consciousness and the extent to which it would be prudent for humanity to do so, if it is possible at all. As Konstantin Anokhin put it at the OpenTalks.AI conference in 2018, “we must explore the issue of consciousness to prevent AI from being imbued with it.” According to the materials of a round table held at the AGIRussia seminar in 2020, one of the first requirements for the emergence of consciousness in artificial systems is the capacity for “multimodal” behaviour: integrating information from various sensory modalities (e.g., text, image, video, sound) and “grounding” it in the surrounding reality, enabling systems to construct coherent “images of the world”, just as humans do.

Multimodality. It is here that a number of promising technological breakthroughs took place in 2021. For example, having been trained on a multimodal dataset of text–image pairs, OpenAI’s DALL-E system can now generate images of various scenes from text descriptions. In the meantime, the Codex system, also developed by OpenAI, has learnt to generate software code from an algorithm described in plain English.
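
To make the Codex workflow concrete, here is a minimal sketch of how such a model was typically called through OpenAI’s Python client of that period; the engine name, prompt and parameters are illustrative assumptions, not a documented recipe.

```python
# Minimal sketch: asking a Codex-style model to write code from a plain-
# English task description. Engine name and prompt are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    '"""\n'
    "Return the list of even numbers contained in xs.\n"
    '"""\n'
    "def even_numbers(xs):"
)

response = openai.Completion.create(
    engine="davinci-codex",  # assumed engine name for illustration
    prompt=prompt,
    max_tokens=64,
    temperature=0,           # deterministic continuation
)

# The model continues the prompt with a plausible function body.
print(prompt + response["choices"][0]["text"])
```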

Super-deep learning. The race for “depth” in neural models, long dominated by the American giants Google, Microsoft (jointly with OpenAI) and Amazon, has now been joined by China’s tech giants Baidu, Tencent and Alibaba. In November 2021, Alibaba created the M6 multimodal network, which boasts a record number of parameters, or connections (10 trillion in total), roughly one tenth of the number of synapses in the human brain, according to the latest data.

Foundation models. Super-deep multimodal neural network models have been termed “foundation models.” Their potential capabilities and the related threats are analysed in a detailed report prepared by the world’s leading AI specialists at Stanford University. On the one hand, the further development of these models can be seen as the nearest milestone on the way towards AGI, with a system’s intelligence expected to grow with the number of parameters (eventually exceeding the human brain), the range of perceived modalities (including modalities inaccessible to humans) and the sheer volume of training data (more than any individual person could ever process). This leads some researchers to speculate that a “super-human AI” could be built on such systems in the not-too-distant future. However, there remain serious issues, both those raised in the report and others discussed below.

Algorithmic transparency/opacity. The further “deepening” of deep models exacerbates the conflict between this approach and the requirement of “algorithmic transparency,” which becomes ever more pressing as AI-based decision-making systems proliferate. Restrictions on the use of “opaque” AI in areas that concern people’s security, rights, life and health are being adopted and debated around the world. Notably, such restrictions can seriously hinder the use of AI in contexts where it would be most valuable, such as the ongoing COVID-19 pandemic, where it could help solve the problem of mass diagnostics amid a mounting wave of examinations and a catastrophic shortage of skilled medical personnel.

Ubiquitous AI. AI algorithms and applications are becoming ubiquitous, encompassing all aspects of daily life, from physical movement to financial, consumer, cultural and social activities. It is global corporations, and the states that exert control over them, that control and derive the benefits from this massive use of AI. As we have argued earlier, the planet’s digitally active population is divided into unequal spheres of influence between American (Google, Facebook, Microsoft, Apple, Amazon) and Chinese (Baidu, Tencent, Alibaba) corporations. Since these corporations and states seek to maximize the profits of majority shareholders while preserving the power of ruling elites, the scope for manipulation on their part will only grow as the AI power at their disposal increases. It is symptomatic that OpenAI, initially conceived as an open, public-oriented project, has shifted to closed source and is becoming ever more financially dependent on Microsoft.

Energy (in)efficiency. As with cryptocurrency mining, long criticised for its detrimental environmental impact, the power consumption of “super-deep” learning systems and the associated carbon footprint are becoming another matter of concern. Notably, the paper presenting the latest results of the OpenAI Codex system, developed jointly with Microsoft, devotes a separate section to the technology’s environmental impact. Given that the number of parameters in the largest existing neural network models is still orders of magnitude below the number of synapses in the human brain, growth in the number of such models and in their parameter counts promises a dramatic increase in their negative environmental impact. The efficiency of the human brain, which consumes immeasurably less energy for the same number of parameters, remains unattainable for existing computing architectures.
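
To see why parameter growth translates into energy concerns, consider a back-of-envelope sketch using the widely cited approximation that training costs about 6 FLOPs per parameter per token; the token count, per-device throughput and power draw below are hypothetical assumptions chosen purely for illustration.

```python
# Back-of-envelope training-energy estimate using the common rule of
# thumb: FLOPs ≈ 6 × parameters × tokens. All hardware figures below
# are hypothetical assumptions.

params = 10e12           # a 10-trillion-parameter model (M6-scale)
tokens = 300e9           # assumed number of training tokens
flops = 6 * params * tokens

device_flops = 100e12    # assumed sustained FLOP/s per accelerator
device_watts = 400       # assumed power draw per accelerator

seconds = flops / device_flops               # idealised single-device time
energy_kwh = seconds * device_watts / 3.6e6  # joules -> kWh

print(f"{flops:.1e} FLOPs, roughly {energy_kwh:,.0f} kWh under these assumptions")
# Energy grows linearly with parameter count in this sketch; real costs
# grow faster once distributed-training overheads and repeated
# experimental runs are taken into account.
```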

Militarization. With no significant progress towards an international ban on Lethal Autonomous Weapons Systems (LAWS), such systems are already being employed by special services. As the successful use of attack drones has become a decisive factor in local military conflicts, a wider use of autonomous systems in military operations may become a reality in the near future, especially since human pilots can no longer compete with AI systems in simulated air battles. The poor explainability and predictability of such systems, at a time of their proliferation and possible expansion into space, speaks for itself. Unfortunately, the aggravated strategic competition between world leaders in both the AI and arms races leaves little hope for consensus, as was noted in the Aigents review back in 2020.

Given the insights shared above, we shall briefly discuss possible “growth zones,” including those where further development is truly critical.

What can the “depth” reveal? As expert discussions have shown, such as the September 2021 workshop with leading computational linguists from Sberbank and Google, “there is no intelligence there,” to quote one of the participants. The deepest neural network models are essentially high-performance and high-cost associative memory devices, albeit ones operating at speeds and information volumes that exceed human capabilities in a large number of applications. By themselves, however, they fail to adapt to new environmental conditions unless manually tuned, and they are unable to generate new knowledge by identifying phenomena in the environment and connecting them into causal models of the world, let alone share such knowledge with other constituents of the environment, be they people or other similar systems.

Can parameters be reduced to synapses? Traditionally, the power of “deep” neural models is compared to the resources of the human brain by counting neural network parameters, on the assumption that each parameter corresponds to a synapse between biological neurons, following the classical graph model of the human brain connectome that neural networks have reproduced since the invention of the perceptron over 60 years ago. However, this leaves out of account the ability of dendritic branches to process information independently, the hypergraph and metagraph structures formed by axons and dendrites, the possibility of different neurotransmitters acting in the same synapses, and the interference effects of neurotransmitters from various axons in receptive clusters. Failing to reflect even one of these factors in full means that the complexity and capacity of existing “super-deep” neural network models fall many orders of magnitude short of the actual human brain, which in turn calls into question the fitness of their architectures for the task of reproducing human intelligence “in silico”.

From “explainability” to interpretability. Although developments in Explainable AI make it possible to “close” legal problems in cases related to the protection of civil rights, allowing companies to generate reasonably satisfactory “explanations” where the law so requires, the problem as a whole cannot be considered solved. It is still an open question whether trained models can be “interpreted” before being put to use, so as to avoid situations where belated “explanations” can no longer bring back human lives. In this regard, the development of hybrid neuro-symbolic architectures, both “vertical” and “horizontal,” appears promising. A vertical neuro-symbolic architecture uses artificial neural networks at the “lower levels” for low-level processing of input signals (for example, audio and video), while “symbolic” systems based on probabilistic logic (Evgenii Vityaev’s Discovery, Ben Goertzel’s OpenCog) or non-axiomatic logic (Pei Wang’s NARS) handle high-level processing of behavioural patterns and decision-making. A horizontal neuro-symbolic architecture implies that the same knowledge can be represented either in a neural network implementing an intuitive approach (what Daniel Kahneman calls System 1) or in a logical system (System 2) operating on the basis of the abovementioned probabilistic or non-axiomatic logic. Here it is assumed that “models” implicitly learned and implicitly applied in the former system can be transformed into “knowledge” explicitly deduced and analysed in the latter, and that both systems can act independently on an adversarial basis, sharing their “experience” with each other in the process of continuous learning.
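
As an illustration of the “vertical” arrangement, here is a toy sketch in which a stand-in “neural” component produces low-level percepts and a stand-in “symbolic” component applies explicit, inspectable rules to them; all classes, rules and thresholds are invented for illustration and do not correspond to Discovery, OpenCog or NARS.

```python
# Toy "vertical" neuro-symbolic pipeline: a stand-in neural layer produces
# low-level percepts; a stand-in symbolic layer applies explicit rules.
import random

class ToyPerceptionNet:
    """Stands in for a neural network mapping raw input to percepts."""
    def forward(self, pixels):
        # A real system would run a trained CNN here; we fake a score.
        score = sum(pixels) / len(pixels)
        return {"obstacle": score > 0.5, "confidence": score}

class ToyRuleEngine:
    """Stands in for a symbolic (e.g. probabilistic-logic) layer."""
    def __init__(self, rules):
        self.rules = rules  # list of (condition, action, explanation)

    def decide(self, percepts):
        for condition, action, explanation in self.rules:
            if condition(percepts):
                return action, explanation  # decision plus readable trace
        return "proceed", "no rule fired"

rules = [
    (lambda p: p["obstacle"] and p["confidence"] > 0.8,
     "brake", "high-confidence obstacle detected"),
    (lambda p: p["obstacle"],
     "slow_down", "possible obstacle, low confidence"),
]

net, engine = ToyPerceptionNet(), ToyRuleEngine(rules)
percepts = net.forward([random.random() for _ in range(16)])
action, why = engine.decide(percepts)
print(action, "because", why)  # the symbolic layer can explain itself
```

Unlike a belated post-hoc “explanation,” the rule base here can be audited before deployment, which is the practical difference between explainability and interpretability that the paragraph above draws.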

Can ethics be formalized? As the ethics of applying AI in various fields is increasingly discussed at the governmental and intergovernmental levels, it is becoming apparent that the related legislation has distinct national peculiarities, first of all in the United States, the European Union, China, Russia and India. Research shows significant differences in the intuitive understanding of ethics among people belonging to different cultures. Asimov’s Three Laws of Robotics look particularly useless here, as in critical situations people must choose whether (and how) their action or inaction will cause harm to some in favour of others. If AI systems continue to be applied (as they already are in transport) in fields where automated decisions lead to the death and injury of some in favour of others, legislation on such systems will inevitably develop, reflecting different ethical norms across countries. AI developers working in international markets will then have to adapt to local laws on AI ethics, exactly as is now happening with personal data processing, where IT companies must adapt to the legislation of each individual country.

From a humanitarian perspective, it seems necessary to intensify cooperation within the UN framework between the states leading the AI and arms races (Russia, the United States and China) in order to effect a complete ban on the development, deployment and use of Lethal Autonomous Weapon Systems (LAWS).

When entering international markets, developers of universal general AI systems will have to ensure that their AI decision-making systems can be pre-configured to account for the ethical norms and cultural patterns of the target markets. For systems based on “interpretable AI,” this could be done, for example, by embedding the “core values” of the target market in the fundamental layer of the “knowledge graph.”
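
As a sketch of what such pre-configuration might look like, the fragment below stores hypothetical per-market “core values” as a fixed base layer and ranks candidate actions against them; the markets, weights and options are invented for illustration.

```python
# Illustrative sketch: embedding per-market "core values" in the base
# layer of a decision system. Markets, weights, options and the scoring
# rule are all invented assumptions.

CORE_VALUES = {
    "EU": {"privacy": 0.9, "transparency": 0.9, "efficiency": 0.5},
    "US": {"privacy": 0.6, "transparency": 0.7, "efficiency": 0.9},
}

class ValueAwareAgent:
    def __init__(self, market):
        # The value layer is fixed at deployment time and is not
        # overwritten by later learning.
        self.values = dict(CORE_VALUES[market])

    def rank_options(self, options):
        # options: {name: {value_name: degree to which option honours it}}
        def score(name):
            return sum(self.values.get(v, 0.0) * w
                       for v, w in options[name].items())
        return sorted(options, key=score, reverse=True)

agent = ValueAwareAgent("EU")
options = {
    "share_user_data": {"efficiency": 1.0, "privacy": 0.0},
    "keep_data_local": {"efficiency": 0.4, "privacy": 1.0},
}
print(agent.rank_options(options))  # EU weighting favours keep_data_local
```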

Russia cannot aspire to global leadership given its current lag in the development and deployment of “super-deep” neural network models. The country needs to close the gap on the leaders (the United States and China) by bringing to the table its own software developments, its own computing equipment and its own data for training AI models.

However, keeping in mind the fundamental problems, limitations and opportunities identified above, there may still be potential for a breakthrough in the field of interpretable AI and hybrid neuro-symbolic architectures, where Russia’s mathematical school remains a leader, as demonstrated by the Springer prize for best cognitive architecture awarded to a group of researchers from Novosibirsk at the AGI 2020 International Conference on Artificial General Intelligence. In terms of practical applicability, this area is roughly where deep neural network models stood some 10–15 years ago; any delay in its practical development, however, could lead to a strategic lag.

Finally, an additional opportunity to dive into the problems and solutions in the field of strong, or general, AI will be offered by the AGI 2022 conference, expected to take place in St. Petersburg next year, which certainly deserves the attention of all those interested in this topic.

Ethical aspects relating to cyberspace: Utilitarianism and deontology

Obviously, web ethics must be primarily behavioural in nature: its task is to serve as a tool for making decisions in morally difficult situations. However, as long as web ethics is seen only as one of the mechanisms of the Internet’s normative self-regulation, based on the spontaneously formed ethos of cyberspace, it will lack a critical yardstick for evaluating this behaviour, and hence for changing it on the basis of a real assessment. Web ethics therefore needs a philosophical and theoretical justification grounded in traditional ethical methodology, which should help it avoid subjectivity. At the same time, the two most common principles in the construction of ethical argument – the utilitarian and the deontological – run into great difficulties when applied to the analysis of Internet communication.

As we know, utilitarian ethical theories focus on the practical expediency of behaviour in terms of achieving the social good, considering morally justified those actions that bring the greatest benefit to the greatest number of people. Yet, as a rule, any action has both positive and negative consequences, many of which are impossible to predict (let alone assess) in advance. It is harder still to remain impartial in determining whose interests should be compromised.

If this is so, then the “principle of maximum benefit” underlying utilitarianism yields, as a criterion of moral evaluation, only very approximate and far from reliable results, which means that it cannot claim objectivity.

In cyberspace, the subjectivity of the utilitarian approach is particularly acute. The complexity of the ever-changing information environment often makes it impossible to predict the extent of the immediate and distant consequences of an individual action, and the virtual nature of this action changes – at least subjectively – its moral status. This is due to the fact that individuals interacting in a virtual environment tend to perceive as potentially immoral only those actions that affect physical, tangible objects and lead to an easily observable result. The immaterial nature of information, however, creates a misleading feeling that everything happening in the infosphere happens as if for “fun”, without exerting any influence on reality. In this way, action in cyberspace is subjectively perceived differently from the same action in the “real” world, and hence a person is very often unable to adequately assess the consequences of his or her actions.

Moreover, many of the actions carried out in cyberspace produce no visible effect at all, which allows them to remain not only unpunished but often unnoticed, as if non-existent from the viewpoint of consequences, even as they affect millions of unprepared users, from the naive and innocent child to the “adult” pursuing very specific aims. A consequentialist approach to evaluating such actions – one that focuses on the results of an action rather than its motives – therefore loses its meaning, letting malicious actors go not merely unpunished but altogether unremarked.

Unlike utilitarian theories, deontological ethical theories attach particular importance to universal formal rules of interaction, irrespective of the outcome of compliance in any particular situation. These rules, defined in the form of universal moral laws (the best known of which are the “golden rule of ethics” and the “categorical imperative”), serve as a prerequisite for the specific prescriptions underlying normative ethics. The absolute nature of the moral requirements proposed by deontological theories, which insist on the inadmissibility of any deviation from moral imperatives, sometimes borders on rigorism and comes into conflict with the actual practice of intersubjective interaction, which is normally rational in character and, on the Internet, technologically mediated as well.

The deontological approach, on the other hand, must be able to give moral standards a universal and binding nature.

Four principles of information ethics are considered the fundamental deontological norms governing the sphere of virtual communication: the principle of privacy; the principle of accessibility; the principle of the inviolability of private property; and the principle of information accuracy.

As can be inferred, these are the preferred principles of liberalism (at least the first three), and they are quite consistent with the spirit of web ideology. Moreover, an approach has become widespread in web ethics that regards respect for human rights as the main deontological principle of virtual communication. These inalienable moral rights rest on our status as intelligent beings, worthy of respect and possessing intrinsic value, and follow from the second formulation of the categorical imperative, which emphasises that human beings are ends in themselves. Human rights codify the most significant behavioural requirements that should apply in relation to human beings. The fundamental moral rights relating to the information sphere include the right to receive information, the right to express one’s opinion and the right to privacy.

At the same time, in the process of virtual communication, situations in which various moral rights and obligations come into conflict are not uncommon. Suffice it to mention the contradictions between freedom of speech and the desire to protect minors’ morality; between the inviolability of private life and society’s right to security; between the right to private property and the principle of the accessibility of information, and so on.

Here the most delicate moral dilemmas arise, showing that web ethics cannot be reduced to a set of a few universal rules applicable to any situation. It rather involves conflicting rules that need to be reconciled and balanced. This undermines the feasibility of a strictly deontological approach, which says nothing about how to resolve conflicts of moral obligations.

The concept of “discourse ethics,” developed in the German school, is called upon to overcome the shortcomings of the two previous approaches. On the one hand, discourse ethics establishes formally universal rules by which moral norms can be substantiated. This requires it to take account of the possible consequences of introducing such rules, allowing it to bridge the gap between deontological and consequentialist ethics and thus to combine the principle of duty with the principle of responsibility. At the same time, the guiding principle of discourse ethics – rational consent – implicitly assumes that anyone who enters into communication in order to achieve mutual understanding cannot fail to grant other communicators the same rights that he or she claims, thereby recognising all people as equal partners. Disagreements must accordingly be overcome in an exclusively argumentative manner. In this sense, discourse ethics makes it possible not only to describe the procedure for reaching agreement on moral issues, but also to derive universal metarules of justice and equality, rather than mere empirical behavioural rules.

The fundamentally dialogical nature of discourse ethics makes it more suitable for the moral and philosophical analysis of modern communication processes (including computer-mediated ones). On the one hand, its fundamental principle can be used to describe the “ideal communication situation,” establishing a moral benchmark towards which any practical discourse should tend; on the other, it serves as a criterion for the moral evaluation of that discourse. Thanks to this, discourse ethics can be regarded as a universal tool of communication, applicable (and rightly so) to all persons interacting in a situation of conflicting interests, regardless of the environment in which their interaction takes place. Discourse ethics thus serves not only as a tool for clarifying and corroborating moral rules, but also as a tool for justifying and legitimizing them in the information society.

The ethos of web culture is based on the principles of: unlimited and unrestricted freedom of information, privacy, general availability, quality of information, no harm, limitation of the excessive use of web resources and the principle of inviolability of intellectual property.

The actual implementation of these principles is possible through a number of institutional measures: the formulation of various codes of ethics, setting out the rights and obligations of participants in virtual interaction, and the creation of intranet self-regulatory bodies. (An intranet is a private company network isolated from the external Internet in terms of the services offered, e.g. via a LAN, and reserved for internal use; where necessary, it communicates with the external network and with other networks through appropriate systems – the TCP/IP protocol stack, possibly extended with WAN and VPN connections – and related protection, e.g. a firewall.)

This study substantiates the relevance of the research topic, characterises the degree of its scientific development, determines the subject of research, formulates its aims and goals, reveals its scientific novelty as well as the theoretical and practical significance of the ethical aims of communication, and provides data on the validation of the results obtained.

Treating virtual communication as a subject of philosophical and ethical analysis reveals the essence and specificity of its regulation.

Virtual communication can be defined as a special form of channel-mediated interaction for receiving and transmitting information. Its main distinguishing feature is thus mediation: its character depends to a large extent on the functionality of the channel, which determines its qualitative originality.

Unlike most traditional forms of communication, virtual communication is characterised by distance and by a high degree of permeability: a person located anywhere in the world can become a participant. Virtual communication therefore has a global, intercultural nature; it inevitably brings the value-normative orientations of different cultures into collision and, consequently, pushes towards the unification of the rules and norms governing communication processes.

The ability to deliver information to a very large audience all over the world brings virtual communication close to mass communication, with the difference that any user can take an active part in it, becoming not only a receiver but also a sender of messages.

Because of machine mediation, most forms of virtual communication are characterised by anonymity (understood as the anonymity of a dialogue in which the subjects do not introduce themselves to each other), which, combined with the ability to disconnect at any time, reduces the psychological risk inherent in ordinary communication, where one party may hold a superiority dictated by circumstances of work, wealth, class, public celebrity and fame, age, and so on. Consequently, virtual communication makes it possible to satisfy usually repressed urges and impulses, giving rise, so to speak, to marginal behaviours. Faced with a subject whom we do not know and whose eyes we cannot look into, it becomes easier to pass judgement free of such conditioning.

A further consequence of anonymity is the lack of reliable information that communicants have about each other. During virtual communication there is therefore an ongoing construction of the image of the virtual counterpart (often attributing to him/her characteristics that he/she does not actually possess) and of the rules of interaction with him/her. There is likewise an ongoing construction of the communicator’s own personality: the specificity of virtual interaction enables a person to create any impression of himself/herself, to wear any mask and play any role – in other words, to experiment with others by passing off an identity he/she does not possess or by assuming one capable of asserting itself. It is no coincidence that most participants in virtual interaction use pseudonyms (“nicknames”): the change of name marks a symbolic rejection of the real person and an exit from real everyday society.

Since in a situation of virtual interaction the factors that form and maintain social inequality in the real world are initially absent (virtual subjects have no body, and hence no gender, age, ethnicity or nationality), virtual communication is fundamentally non-status in nature, and the only criteria of social effectiveness on the Internet are the personal qualities and communication skills of the participant: first and foremost, mastery of written speech, though not exclusively written, if one counts audio-conferencing as communication (video generally frightens those who would have to be shown).

The blurring of real roles and statuses, the elimination of space barriers and geographical boundaries and, finally, the deconstruction of the subjects of interaction themselves make it difficult for some social institutions to control virtual communication. Another significant feature of virtual communication is therefore its non-institutionalism, which is inevitably accompanied by the uncertainty of the social rules and norms governing people’s behaviour in this domain.

The above characteristics leave an imprint on the social relations established in a virtual environment, contributing to the creation of a special ethos of cyberspace, and largely predetermine both the nature of the web ethos and the problems it has to face.

The main assumption of the Internet ideology is the proclamation of cyberspace’s independence from any State structure or institution. It is argued that the global network is a completely self-regulating environment that resists all external influences and is not subject to coercive control and regulation; it should therefore be governed only by the moral laws established by Internet users, not by the legal norms recognised in real society. The Internet ideology is thus extremely liberal, and its leitmotif can be considered the slogan proclaimed by hackers: “Information wants to be free.”

The Internet ideology exists in three versions, which can be conditionally designated as radical-anarchist, liberal-democratic and liberal-economic. The followers of the radical-anarchist version of web libertarianism tend to see the Internet as an “electronic frontier”, i.e. the last unregulated area of human life, which must therefore be protected from any restrictions, whether external or internal. However attractive, the idea of an “electronic frontier” as a space of unlimited and unrestrained freedom is plainly utopian: in practice, such freedom can easily turn into arbitrariness or, on the contrary, into an instrument of control by powers that merely feign fear of such followers, leaving them more exposed and easier to attack.

According to the liberal-democratic version of web ideology, the Internet should be seen as a means to build a new “digital democracy”, i.e. a democracy enriched by the possibilities of information and communication technologies. This vision is reflected in another common metaphor describing the Internet as a kind of “electronic Agora”, a virtual place where people have the right to express any opinion without fear of censorship. The aim is to provide everyone with this unique opportunity and, probably even more importantly, to weaken the government’s monopoly on deciding all important issues in the life of society by making political processes open and transparent, available for analysis, scrutiny and correction. At the same time, the idea of “digital democracy” is contradicted by the fact that the Internet is currently far from universally available. Even in rich industrial countries, various economic, socio-cultural, gender and educational restrictions make access to the Internet a privilege of the few (a phenomenon known as the “digital divide”). It is therefore too early to consider the Internet an environment for functioning digital democracy: it has great democratic potential which has not yet been fully realised.

Finally, the supporters of the liberal-economic version of web ideology, the closest to classical liberalism, argue that the development of information and communication technologies should lead, first and foremost, to the creation of an “electronic market” absolutely free of any State regulation. It is in economic independence from the State that the theorists of this approach see the guarantee of fair market competition and private initiative. On closer inspection, however, the idea of fair market competition emerging in global IT networks turns out to be nothing more than a common myth: in reality, the Internet tends to create monopolistic and oligopolistic economic structures that have little in common with a free “electronic market”. Moreover, the very logic of the Internet’s development contradicts the ideology of an “electronic market” left at the mercy of private entrepreneurs. The liberal-economic version of web libertarianism is thus internally contradictory: the key principle of web ideology – unlimited and unrestrained freedom of information – is scarcely compatible with the principle of the inviolability of private property that underlies economic liberalism.

An analysis of the modern versions of the Internet ideology therefore shows that all of them – as is characteristic of “-ism” ideologies in general – are in one way or another utopian, since they tend to over-idealise cyberspace. At the same time, their importance should not be underestimated: they quite adequately express the attitudes of the virtual world’s inhabitants. This enables us to state that the only “real” basis of Internet ethics is the inviolability of personal information freedom proclaimed by web libertarianism, which acquires the status of an unconditional moral imperative in this system of opinions.

Active research on virtual communication has been conducted only relatively recently – since the early 1990s – and is becoming increasingly intense. The growing interest of representatives of the various humanities (philosophers, sociologists, psychologists, culturologists, linguists) in this topic is explained not so much by the unprecedented dynamics of the subject matter itself as by the fundamental role that communication plays in the 2000s.

Modern telecommunication technologies and, first of all, the global Internet and the related cyberspace are among the most important factors in the development of the world community, exerting a decisive impact on the public, political, economic and socio-cultural spheres. There is therefore a clear need for a comprehensive philosophical understanding of the consequences of global computerisation for today’s society, one that makes it possible to synthesise the varied data of the applied sciences.

Since virtual communication is a relatively new cultural phenomenon, no comprehensible, distinct and effective system of moral regulation has yet emerged in this area. Furthermore, virtual communication has characteristics that allow it to be regarded as the embodiment of a libertarian, even anarchist (or apparently anarchist) ideal, in which third parties are free to express themselves even as the establishment seeks to control those who do so.

Virtual communication offers people unprecedented opportunities to exercise personal freedom, testing its moral nature and giving rise to many ethical problems, both theoretical and applied, that generally require an adequate solution.

The relevance of the problem is therefore determined, on the one hand, by the scientific and theoretical need for a holistic and systematic study of the ethical aspects of virtual communication, and, on the other hand, by the practical social need to bridge a regulatory gap in this area.

Research is focused mainly on individuals’ activity and behaviour during computer-mediated communication, and above all on the web itself in its essence: the set of rules and principles governing this communication, i.e. the morality and/or immorality of cyberspace.

There is a need for moral and philosophical reflection and an objective assessment of the virtual communication processes and their impact on society. To achieve this goal, the following tasks need to be addressed:

– to consider the key ideas of the available literature;

– to analyse the degree of influence of these ideas on the creation of an ethos specific to cyberspace;

– to determine the status of morality in the system of normative regulators of virtual communication;

– to identify the fundamental moral principles that regulate behaviour in this sphere;

– to describe and analyse the rules that are or should be at the basis of codes of ethics in cyberspace;

– to identify the specificities of netiquette (the civilised behaviour we should have when communicating), and determine what role citizens themselves should play in their own desirable self-regulation on the Internet;

– to consider and analyse the main ethical and philosophical dilemmas generated by the emergence of the new information and communication technologies.

The de-anarchisation of cyberspace thus depends on the solution of these problems.

The ethics of virtual communication or, as it is commonly called, the ethics of the cybernetic network is only beginning to emerge as a field of practical philosophy. Although a fairly large number of publications on the problems of human interaction with global IT networks have appeared in recent years, especially in English-speaking countries, only a small share of these works is dedicated to the ethical aspects of such interaction, since effort in those countries is overwhelmingly driven by profit and far outweighs the production of essays devoted to human and moral values.

The ethics of virtual communication is very often regarded as a continuation and development of the academic sphere of computer ethics, which is a field of applied ethics that studies the moral problems created by information technologies.

This approach seems entirely legitimate if we pay primary attention to the indirect nature of virtual interaction.

At the same time, a number of researchers believe that all computer-mediated actions, without exception, have an informational nature: in one way or another, they significantly affect the infosphere, and their consequences are therefore subject to moral evaluation. As a result, information becomes a fully independent subject of moral relations, and the ethics of computers and virtual communication acquires a philosophically more significant status than the information ethics developed hitherto.

According to another viewpoint, the ethics of virtual communication should be considered a variety of professional ethics, significantly closer to that of librarians and communicators (media codes of ethics, journalists’ charters, etc.). This approach is based on an analysis of the most common and socially relevant types of activity of Internet users, who thereby, albeit with some reservations, become representatives of distinct professional groups whose codes have not only the right to exist but also the right to stand on an equal footing with those of existing national or international institutions.

There are two main strategies to justify the web ethics: the Anglophone (mainly in the United States of America) and the German-speaking one. The Anglophone authors focus on the cultural and axiological aspects of web ethics, considering the moral problems of virtual communication within the framework of normative ethics and, as a rule, on the basis of the application of classical ethical concepts to them (primarily deontology, utilitarianism, economism, business practices). The German-speaking authors, instead, focus their attention on the communication aspects of web ethics and on a more theoretically significant but too abstract issue – whether ethics, in general, and web ethics, in particular, can be substantiated – and conduct research primarily on the basis of discourse ethics.

The methodological basis of the study is a synthetic interdisciplinary approach, as well as a comprehensive and systematic analysis of the phenomenon being studied. The proposed methodology combines the analysis of value, structural-functional and historical-genetic criteria and judgements with the main ideas of the anthropological and hermeneutic schools, as well as with the achievements of scientific disciplines such as political science, sociology, cultural studies, psychology and communication theory.

The novelty of these results consists:

– in identifying the specificities of the ethical discipline of virtual communication;

– in the thematisation and systematisation of the main ethical regulators of virtual communication;

– in the theoretical substantiation of the moral norms, rules and principles governing behaviour in this field.

The theoretical significance of this work lies in its systematic presentation of virtual communication processes from an ethical viewpoint, which not only makes it possible to explore the practice of cyberspace but also serves as a prerequisite for creating effective mechanisms to ensure the implementation of common moral norms, rules and principles.

The results obtained can be used for further research on the problem of the influence of virtual communication on society and personality within the framework of theoretical disciplines such as ethics, pedagogy, sociology and psychology. The methodology for analysing communication processes can find wide application in modern mass communication theory and practice.

In most cases, virtual communication is characterised by such distinctive features as mediation, interactivity, distance and a global intercultural nature. The participants’ anonymity provides ample opportunities for constructing a personal identity in the absence of any status hierarchy, while extra-institutionality and the undeveloped, uncertain state of social rules (including legal and moral ones) can lead to the marginalisation of communication processes, concentrating them, sect-like, within a restricted group of Internet users who gradually lose contact with everyday reality.

The aforementioned characteristics, together with the imperfection of modern IT regulation, considerably limit the possibilities of organisational and legal regulation of this area, which enables participants in virtual communication to regard it as “the last territory of freedom”, a new res nullius in which to take refuge from State control. Consequently, the abovementioned voluntary moral self-regulation, largely spontaneous and performing compensatory functions, begins to play the priority role in the normative regulation of virtual communication. Law-makers, in turn, may follow its example in producing rules, or may themselves enter the environment as anonymous Internet users in order to understand it better.
