DOI: 10.25136/2409-8728.2022.10.39011
EDN: HSXQXS
Received:
23-10-2022
Published:
31-10-2022
ACKNOWLEDGMENTS. This work was funded by the Russian Science Foundation, project No. 21-18-00184, “Social and humanitarian foundations of criteria for evaluating innovations using digital technologies and artificial intelligence”.
Abstract: The ethical expertise of modern technologies, especially those that use artificial intelligence, can be based on an approach that focuses not on the technology itself but on the subject as a human agent who can be the developer and user of this technology. In this article, we consider the concept of the ethical subject as an agent with the fundamental ability to make ethically significant choices. The peculiarity of such choices is that they are significant not only for society but also for the ethical subjects themselves, who are always not only a means but also an end of moral actions. That is why such ethical subjects can only be persons or groups of people, not technologies, even the most advanced ones – otherwise a technology would be an end of ethical choices both for itself and for people. Ethical expertise of a technology capable of acting independently of a person is possible, as we believe, through its «training» to recognize the situations in which to apply the solutions to ethical dilemmas provided by a person, and through the evaluation of the results of such recognition. Part of such «training» may be worth entrusting not only to developers but also to users, remembering that all these solutions must also be brought into compliance with local legislative norms.
Keywords: ethical expertise, technology, artificial intelligence, utilitarianism, consequentialism, deontology, virtue ethics, ethical subject, metasubject, social relationships
1. INTRODUCTION
Ethical expertise of technologies became one of the topical issues of the modern scientific agenda not yesterday, of course, but recently enough that the pluralism of opinions characteristic of newly emerging fields of research still prevails, without yet giving way to one or a few generally accepted paradigms.
The relevance of this topic is evidenced, in particular, by the fact that by 2020 there were already more than 80 documents addressing ethical standards related to the use of such a technology as artificial intelligence (hereinafter AI) [1]. Since then, the number of such documents, adopted both by corporations and at the state level in a number of countries, has only continued to grow. The «Code of Ethics in the field of Artificial Intelligence» was adopted in Russia in 2021 [Code of Ethics in the field of Artificial Intelligence, 2021], shortly after a similar document was published in China [The Ethical Norms for the New Generation Artificial Intelligence, 2021]. UNESCO has also issued recommendations on these issues [Recommendation on the Ethics of Artificial Intelligence, 2022].
The rapid development of digital technologies and their implementation in a number of areas – industrial production, transport, education, healthcare, etc. – forces us to pay attention to the development of norms regulating the processes of creating and using these technologies. These norms must necessarily be both legal and ethical, since not all possible forms of activity can become the subject of legal regulation, whereas ethical regulation is potentially capable of covering all possible cases, though often at the cost of clarity and unambiguity of assessments [2, 3].
As soon as technology reached a level of development at which it became capable, in some cases, of performing actions independently of a person, questions inevitably began to arise regarding the ethical assessment of these actions, which in itself is quite justified [4]. However, other trends cause concern: namely, the increasingly frequent raising of questions about the «ethics of algorithms» or the «ethics of AI». Such formulations are often not merely a desire for brevity: those who use them frequently assume that machines, programs, and robots can themselves act ethically or unethically and offer solutions to ethical dilemmas [5]. Such an approach seems to us excessive for a number of reasons, which will be discussed later in this work.
We will also try to address the following rather important question: does every agent acting in society automatically become an ethical subject, and what qualities should the latter possess in general?
All of the above is intended to serve the main goal of this work: to use a subject-oriented approach to create a methodological basis for the ethical expertise of modern technologies, especially those that use AI in their work.
2. METHODOLOGY
In order to speak about an ethical subject and to raise the question of whether a technology can be one, it is necessary to clarify some points basic to this study on which such reasoning can in principle rest.
First, there is the concept of the subject. Second, there is the consideration of this subject's ability (or inability) to make ethical choices based not only on the social environment but also on his/her/its own properties, capabilities, and needs. Third, finally, there are the peculiarities of the subject's attitude to the surrounding reality and the ability to make choices based on short-term, medium-term, and long-term goals.
So, the subject here means, first of all, an agent or actor endowed with a number of specific properties, not all of which, as we will see, can in principle be attributed to technology, no matter how developed it may be.
The concept of the subject is well illustrated by the approach of V.A. Petrovsky [6], who identifies four main characteristics of the subject (as a person or a group of people):
1) purposefulness as the ability to consciously and independently set goals. For us, this point is very important, as it allows us to clearly identify those who are able to be subjects in digital societies of both the present and the foreseeable future. Moreover, it can be assumed that this feature of the subject is in some way manifest and «collective», since it can be fully realized only in the presence of the three subsequent characteristics.
2) reflection as the ability of the subject to form an image of himself/herself. This feature is also fundamental for the possibility of constant assessment of the formed image through the use of socially conditioned normative (including ethical) systems.
3) the free will of the subject, without which there can be neither independent conscious goal-setting nor an assessment by the subjects of both themselves and the results achieved in the course of activity.
4) the ability of subjects to develop themselves, improving their adaptation to constantly changing inner and outer conditions.
Speaking of the first point, we certainly recognize that individuals and consolidated human collectives have the ability to set goals. Goal-setting is directly related to the subject's ability to form a holistic worldview in which both the image of the subject and the image of the rest of the universe are present – and all these images are not merely known but, what is often even more important, are evaluated in a certain way and, accordingly, ranked and arranged in certain orders, including orders of possible, desirable, and achievable goals.
The subject's ability for free goal-setting, and the self-development directly related to it, is realized if the subject possesses a worldview as a system of views on himself/herself and the rest of reality, ranked in a certain way on the basis of such a fundamental factor as the subject's value system.
The subject is able to build relationships with the outside world depending on how he/she ranks goals, accordingly assessing the efforts and means of achieving them as more or less legitimate.
Here we consider on what basis subjects are able to make assessments, and how they can see themselves and their relationship with the objects of assessment, depending on the chosen system within which this interaction occurs. These systems, as we believe, can and even should be considered from two positions simultaneously: 1) from the point of view of the subjects and their intentions, which determine the possibility of a particular value relationship; and 2) from a point of view external to the subjects, which allows one to determine their position in the emerging system of relations and the characteristics of the system itself. The first position makes it possible to use philosophical concepts from the field of ethics as a methodological basis, while the second allows one to turn to the paradigms of cybernetics as a science of managing complex systems [7].
In order to build a complete scheme that encompasses the range of such assessments, we propose to use existing cybernetic approaches, specifically third-order cybernetics [8]. This approach includes first- and second-order cybernetics, not denying but complementing them.
First-order cybernetics mainly concerns the influence of a subject (actor) on a certain object (that which undergoes the action); the latter is not considered an actor equal to the subject, even if both possess consciousness. Interaction in this case is unidirectional: from the subject to the object.
In second-order cybernetics, not only subject-object but also subject-subject interaction is considered: mutually directed interaction becomes possible between actually or potentially equal participants, each of whom needs to take into account not only their own goals but also the goals and probable interests of the other side.
In third-order cybernetics, not only the two types of interaction mentioned above are possible, but also interaction of the subject-metasubject type, where the second term can denote an unlimited number of individual and collective subjects, whose number and intentions can constantly change and are not always predictable.
3. ETHICAL EXPERTISE OF TECHNOLOGIES IN THE MIRROR OF A SUBJECT-ORIENTED APPROACH
3.1 Subject-object interaction
In this case, the subject uses technology to achieve a simple, clearly defined goal, which corresponds to first-order cybernetics: the subject works with the technology instrumentally and evaluates it from the point of view of applied efficiency.
From the point of view of ethics, utilitarianism as a consequentialist approach [9] corresponds to this line of thought. The assessment is based on the compliance of the result with the stated goal. From the point of view of cybernetics, one simple connection is formed between the subject and the object. Other possible connections that appear in any social system may not be taken into account, which in the future may lead to the absence or destruction of accompanying parts of the system in the form of additional social consequences. This is why such a strategy is designed for the short term and can be effective only within it. In the medium and long term, it reveals shortcomings, mainly because additional relationships have not been built and/or taken into account, which causes difficulties in embedding the result into a more general social context.
Utilitarian concepts in ethics evaluate an act as ethical or unethical depending on how well it corresponds to the stated goal and what consequences it leads to. Utilitarianism, perhaps one of the most influential ethical teachings, is often called a direction within consequentialism, which considers actions primarily from the point of view of their consequences. Such an approach is, in principle, intuitively understandable to everyone, regardless of acquaintance with particular philosophical concepts.
Classical utilitarianism was based on the concept of maximizing the good (or happiness, or pleasure) for the maximum number of people. With slight variations, this approach has been preserved in utilitarianism to this day, as have the inevitable and difficult-to-resolve issues that arose along with it.
First, of course, there is the question of what exactly should be understood by the good, which is often identified with practical benefit – an identification that does not make the problem any easier to solve. In the absence of clear criteria for the good, happiness, usefulness, and even pleasure, utilitarianism has often been criticized [10] for giving scope to arbitrary and voluntaristic interpretations of all of the above.
For all the positive features of this approach, legitimate questions also arise both about the extent to which we are able to assess consequences, especially long-term ones, and about the extent to which the ends can justify the means. In addition, if the goals change, the assessments of certain actions will inevitably have to change as well, which can lead to ethical relativism, where potentially any action can be justified.
Thus, as we can see, utilitarianism experiences understandable problems precisely with the development of criteria for evaluating actions – the kind of «coordinate system» within which a particular decision could be evaluated in terms of its benefit or harm. Nevertheless, the utilitarian approach may be quite legitimate if we are evaluating not all actions in general on the basis of some supposedly universal idea of utility, but certain specific actions considered through the prism of previously established norms and criteria. It would probably even be possible to algorithmize such assessments, which may be important precisely for the technologies (such as AI) that interest us; a sketch of what such an algorithmization might look like is given below.
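Purely as an illustration, here is a minimal sketch of such an algorithmized utilitarian assessment in Python. All names, criteria, and weights are hypothetical assumptions of ours, not taken from the article; the only idea carried over is that the criteria and their relative importance are fixed by human subjects in advance, and the machine merely applies them.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """A pre-established evaluation criterion with a weight fixed in advance."""
    name: str
    weight: float  # relative importance, agreed by human subjects beforehand

def utilitarian_score(effects: dict, criteria: list) -> float:
    """Weighted sum of an action's effects on each pre-agreed criterion.

    The function only applies criteria supplied beforehand;
    it does not invent a notion of 'good' on its own.
    """
    return sum(c.weight * effects.get(c.name, 0.0) for c in criteria)

# Hypothetical comparison of two candidate actions against fixed criteria.
criteria = [Criterion("benefit_to_users", 0.6), Criterion("privacy_preserved", 0.4)]
action_a = {"benefit_to_users": 0.9, "privacy_preserved": 0.2}
action_b = {"benefit_to_users": 0.6, "privacy_preserved": 0.8}
best = max((action_a, action_b), key=lambda a: utilitarian_score(a, criteria))
```

The point of this design is that changing the criteria or weights is a human decision made outside the function, which keeps every evaluation inside a previously established «coordinate system» rather than a universal idea of utility.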
3.2. Subject-subject interaction.
In this case, the subject (as an actor) uses technology to communicate with other subjects, takes their participation in the general process of interaction with the technology into account, and can assess the impact exerted on them by both the technology and the forms of interaction. The technology is evaluated by the subject not only from the point of view of achieving the set short-term instrumental goals, but also from the point of view of its impact on the construction of additional social relationships and/or the preservation of existing ones.
From the point of view of ethics, this corresponds more closely to deontological approaches [11], which evaluate not so much the goals as the means of achieving them. At the same time, it is assumed that the goals have already been assessed as positive, and only after that do we evaluate the means. Therefore, the consequentialist and deontological approaches in this scheme do not contradict each other; from the standpoint of building value relationships, the second includes and complements the first.
From the point of view of second-order cybernetics, a more complex system of relationships is formed. Other subjects are added to the subject-technology pair; the technology itself acts as a means of interaction with them and is therefore expected to create new connections between subjects rather than destroy existing ones. Technology begins to be evaluated not only instrumentally but also in terms of its influence on subjects. This shift in emphasis is caused, among other things, by the massive spread of technology and the building, with its help, of more complex chains of relationships between subjects. This approach works quite well in the medium term.
In the deontological approach, the emphasis is placed on the existence of a certain set of unchangeable moral norms that must be followed (both common to all humankind and specific to individual groups and professions). This is the complete opposite of ethical relativism. Again, for all the need for such generally valid norms, problems arise both when the norms are too generalized and when they are formulated too narrowly and inflexibly.
Although the term «deontology» itself is attributed to Bentham, one of the key figures here was Kant, with his idea of the categorical imperative as a fundamental moral norm that should be followed for its own sake, regardless of pleasure or benefit.
The vagueness of such maximally general norms within deontology as the categorical imperative or the golden rule of morality has, again, repeatedly drawn criticism for their detachment, often, from the very possibility of practical application. In addition, deontological constructions, as a rule, take into account neither intentions nor the «immediate» consequences that may follow from the direct application of certain norms.
However, as in the case of utilitarianism, these problems are removed if we are talking not about duty in general, which dictates that one act, as Kant stated, «according to a maxim that can itself become a universal law», but about particular areas that need clearly defined norms designed to limit, as far as possible, any relativistic interpretations. Perhaps one of the best examples here is medical deontology [12].
Thus, the deontological approach, as we believe, may well be used to develop specific norms for the use of technologies (including AI) in areas where relativism is undesirable, provided that this approach does not claim universality and is deployed within the framework of an already existing, stable structure of relations with predetermined values; a schematic illustration of such norms as hard constraints follows below.
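Again purely as an illustration, here is a minimal sketch of domain norms treated as hard constraints. The norms, field names, and example case are hypothetical; the idea carried over from the text is only that the norms are fixed in advance for a particular area (cf. medical deontology) and admit no case-by-case trade-offs.

```python
from typing import Callable, Dict, List

# Each norm is a fixed predicate over a proposed action; norms are set in
# advance for a specific domain and are never weighed against each other.
Norm = Callable[[Dict], bool]

def permissible(action: Dict, norms: List[Norm]) -> bool:
    """An action is admissible only if it violates no norm at all."""
    return all(norm(action) for norm in norms)

# Hypothetical domain norms for an AI-assisted service:
norms: List[Norm] = [
    lambda a: not a.get("discloses_personal_data", False),  # confidentiality
    lambda a: a.get("consent_obtained", False),             # informed consent
]

candidate = {"consent_obtained": True, "discloses_personal_data": False}
print(permissible(candidate, norms))  # True: no norm is violated
```

Unlike the utilitarian sketch above, nothing here is scored or maximized: a single violated norm rules an action out, which is precisely what makes the approach suitable for areas where relativism is undesirable.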
3.3. Interaction of the subject-metasubject type.
In this case, the subject uses technology to interact with a potentially unlimited number of other diverse subjects (individuals, groups) on a planetary scale. At this stage, technology can not only be used as a means but can also act on behalf of real, but not always known, subjects, imitating certain aspects of their activity. The technology is assessed by subjects as affecting both themselves and all other involved subjects, known and unknown (including potential future ones), whose number may also be unknown. Technology is assessed as one of the key factors influencing the construction and maintenance of diverse social relationships.
From the point of view of ethics, virtue ethics can be used here: an approach focused primarily on the desired properties and qualities of subjects, which allow them best to choose goals and means in situations of increased complexity and uncertainty.
From the point of view of third-order cybernetics, the most comprehensive, complex, and constantly changing system of interrelations is formed here; it includes the two previously considered systems as lower-order subsystems. Since technology is able to form an unlimited and unpredictable number of connections for subjects, it begins to be evaluated as influencing and even partially forming them; accordingly, its role is considered with respect to the emergence in subjects of those key qualities that will further contribute to their well-being, including successful interaction in this environment with other subjects and with the technology itself. In this case, the subject becomes a kind of constant, the center of an ever-changing system of diverse relationships formed with the help of technology. This approach is aimed at the long-term perspective.
One of the most famous authors to develop virtue ethics was A. MacIntyre [13]. He believes that any attempt to substantiate moral norms in modern society is doomed to failure, if only because it is undertaken in isolation from the social and, even more so, from the ontological context. As for the latter consideration, which seems almost impossible to the modern consciousness, it is enough to recall Descartes with his metaphor of a tree, in which the branches originating from a single trunk and roots serve such areas as medicine, mechanics, and ethics. Thus, back in the 17th century we see attempts to build a holistic picture of the world, and with it a holistic knowledge of the world, which is now completely gone, leaving us both a gap between natural-scientific and humanitarian knowledge and the isolation of ethics, in effect, from both. After all, as we have seen, attempts to justify ethics through social utility and the effectiveness of actions lead, at best, to utilitarianism and, at worst, to relativism, which sooner or later usually degenerates into ethical nihilism.
MacIntyre pays special attention to the ancient concept of virtue, which, in his opinion, was conceived as the ability to perform certain actions while at the same time understanding the essence, meaning, and purpose of those actions. According to MacIntyre, we can turn to the legacy of Aristotle, who considers not norms (which are so difficult to justify) but virtues as the condition of moral behavior from both the personal and the social point of view. Virtues, in turn, can be realized and manifested only through practices, thereby forming a person's life narrative as a movement towards the maximum realization of the available opportunities for personal development. However, in order for an individual to absorb some primary ideas of these virtues in the course of upbringing and education, there must be a moral tradition in which these ideas can be fixed and transmitted.
From this point of view, «good» technology in general and «good» AI in particular can be considered such if they are evaluated not only from the position of specific (often situational) benefit or compliance with strictly prescribed norms but, first of all, from the position of how much they are able to assist the personal and social development of individual and collective subjects – people and groups.
4. RESULTS AND CONCLUSION
A subject-oriented approach to the ethical expertise of modern technologies can help to identify the levels of such expertise.
If we are talking about short-term goals, then we can use a utilitarian assessment: actions are considered correct based on their compliance with the interests and goals of a specific group of people. Both the goals themselves and the criteria of compliance with them can and even should be clearly defined in the form of instructions. Criteria such as the interests of the group can be determined through legislative and other documents, and in this case that may be sufficient. The expertise can be carried out by developers at the testing stage; then, if there is negative feedback, the product is modified.
If we talk about the medium term, then in this case it is important not only to achieve goals but also to preserve and maintain stable relations between subjects as actors. That is, the use of technology should, as in the previous case, comply with laws and regulations, but at the same time it should not damage either people's communication with each other or their ability to interact with the technology itself. A deontological approach can be used here, but the focus of ethical expertise can only be the decision of a subject – a person or a group of people – not of the technology. If we once again recall the Kantian approach, a human should be seen not merely as a means but as an ultimate end of ethical behavior. The human here acts as an ethical subject capable of making ethically significant choices. Such choices can serve to «teach» a technology such as AI (for example, neural networks) so that its «actions» match a real ethical subject's ideas of correct behavior. There is no need here for intuitionistic approaches that simply give the AI a mass of examples and, in effect, leave the drawing of conclusions to the technology. The conclusion should be made by the ethical subject – the human – and provided to the AI in ready-made form; a minimal sketch of this scheme follows below.
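As an illustration only, here is a minimal sketch of such supervised «teaching» in Python. The feature encoding, the example situations, and the labels are all hypothetical assumptions of ours; what the sketch preserves from the text is that every label is a ready-made solution supplied by a human, and the system's only job is to recognize which labelled situation a new case resembles.

```python
# Situations are encoded as feature vectors; the label attached to each one
# is a ready-made solution chosen by a human ethical subject, never derived
# by the system itself.
def recognize(case, examples):
    """Return the human-provided solution of the most similar known situation."""
    def dist(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: dist(case, ex[0]))
    return label

# Hypothetical training examples labelled by a human:
examples = [
    ([1.0, 0.0], "apply_solution_A"),  # e.g. a routine, low-risk situation
    ([0.0, 1.0], "defer_to_human"),    # e.g. an ambiguous, high-stakes one
]
print(recognize([0.9, 0.2], examples))  # -> "apply_solution_A"
```

Any real system would use a far richer learner than this nearest-example lookup, but the division of labor would be the same: recognition belongs to the technology, the conclusions belong to the human.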
A subject who is a user of a technology, and not necessarily its developer, may be able to independently «teach» the AI his or her own ideas about correct behavior and choices. At the same time, there must be a strict ban on AI behavior that violates legislative and other documented norms, and the AI must be «trained» in advance to recognize the situations in which these norms apply. Thus, the ethical expertise of a technology capable of acting independently of a person is possible through its «training» to recognize the situations in which to apply the solutions to ethical dilemmas provided by humans, and through the evaluation of the results of such recognition. We believe that «ethical» technology is not technology that solves ethical dilemmas, but technology that recognizes the cases in which to apply the solutions humans have considered best. There is no need to try to create and apply some ethical theory that technology can use – even we as humans still argue about whether such theories can exist at all. Nor should we try to escape our responsibility for ethical decision-making by shifting it onto technology. If we meet a case in which there are no «good» solutions, only «bad and even worse» ones, as in the famous trolley problem [14], some solution should still be provided for the technology, and not by it; the sketch below shows how such a strict ban can be layered over recognition.
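Once more as an illustration under assumed names: a recognizer proposes a human-authored solution, and fixed legal norms act as an absolute veto over it. The norm, the fallback label, and the example case are hypothetical; the idea carried over is that documented norms are never traded off against the recognized solution.

```python
from typing import Callable, Dict, List

# A recognizer (e.g. the one sketched above) proposes a human-authored
# solution; fixed legal norms then act as an absolute veto over it.
Recognizer = Callable[[Dict], str]
LegalNorm = Callable[[str, Dict], bool]

def decide(case: Dict, recognize: Recognizer,
           legal_norms: List[LegalNorm]) -> str:
    proposed = recognize(case)
    if not all(norm(proposed, case) for norm in legal_norms):
        # The strict ban: a solution that violates any documented norm is
        # never executed; control returns to the human subject.
        return "defer_to_human"
    return proposed

# Hypothetical norm: a data-sharing solution requires a documented permit.
legal_norms: List[LegalNorm] = [
    lambda solution, case: not (solution == "share_data"
                                and not case.get("permit", False)),
]
print(decide({"permit": False}, lambda c: "share_data", legal_norms))
# -> "defer_to_human"
```

User-supplied «teaching» would thus only ever adjust the recognizer, while the legal veto remains outside the learned part of the system, matching the priority of documented norms argued for above.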
In the long term, an ethical expertise of technologies capable of acting autonomously is hardly possible, yet it is from long-term goals that medium- and short-term ones follow. For such a long-term perspective, ethical codes in the field of AI, for example, can work – after all, they have been created not for AI itself but to direct the actions of developers and users. Ethical subjects build their goals in the long term without always knowing with whom exactly and in what situations they will interact; that is, this is a subject-metasubject type of interaction, in which the focus of attention inevitably shifts from the goals and means of interaction to the subjects themselves. Ethical expertise here can only be carried out by subjects on the basis of their own ideas about the desired vectors of their development. That is why the approach from the standpoint of virtue ethics seems the most productive here. This approach evaluates not technologies but how human subjects would like to see themselves when, in order to achieve these goals of their own development and improvement, they choose the direction of technology development as a means.
Thus, raising the question of the ethics of technology (such as AI) itself is, in our opinion, premature, and not only for purely technical reasons. The ethics of AI, for example, is often understood as its ability to autonomously assess certain facts of reality [15], which raises the question of how to «teach» AI to do this – through the introduction of ready-made normative systems or through the gradual assimilation of large arrays of diverse information, by analogy with the upbringing and education of a person, as P. Railton [16] writes. Nevertheless, ethical intuitionism [17], on which authors who suggest «training» AI in the same way one teaches and educates a person often rely, unfortunately provides no clear answers to the fundamental questions: on what basis do we conclude who is capable of acting as an ethical subject, and why would such a subject make ethically meaningful choices freely and consciously?
An approach based on aretology could probably be more productive, since it considers humans as ethical subjects par excellence, choosing what they consider virtuous not only for the sake of an external goal but also for the sake of self-improvement. Such an approach necessarily presupposes self-awareness and reflection in these ethical subjects, as well as their recognition not only as means but also as ends of ethical choices.
Human individuals, as well as highly organized, stable social groups, can undoubtedly possess such properties of an ethical subject, while any technology, including AI, is deprived of them – perhaps not only now but also in the future, because otherwise it would have not only to acquire self-awareness but also to become an end of ethical choices both for itself and for the rest of the actors [18]. Such a possibility seems very doubtful.
However, well-developed technology, even weak AI, though it cannot become an ethical subject, is nevertheless able to make assessments based on existing social norms, regardless of how it received them. This, firstly, puts it in direct dependence on certain specific normative systems, ethical and legislative, and, secondly, implies that this technology needs to be «taught» to recognize the situations in which these systems apply. The prerogative of evaluating the norms themselves [19, 20] belongs to ethical subjects (persons or groups of people), since the subjects' ability to improve depends on it, whereas technology, which is always a service and a means rather than an end, is not obliged to have, and should not have, such an opportunity.
References
1. Jobin, A.; Ienca, M.; Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence. 1 (9): 389–399. doi:10.1038/s42256-019-0088-2.
2. Gonzalez, W. (2015) On the Role of Values in the Configuration of Technology: From Axiology to Ethics, in Gonzalez, Wenceslao, ed., New Perspectives on Technology, Values, and Ethics, Boston Studies in the Philosophy and History of Science, Vol. 315, Springer, Cham, pp 3–27, https://doi.org/10.1007/978-3-319-21870-0_1.
3. Kroes, P., Meijers, A.W.M. (2016) Toward an Axiological Turn in the Philosophy of Technology, in Franssen, M., Vermaas, P., Kroes, P., Meijers, A., eds., Philosophy of Technology after the Empirical Turn, Philosophy of Engineering and Technology, Vol 23, Springer, Cham, https://doi.org/10.1007/978-3-319-33717-3_2.
4. Malakhova, E. V. (2022) The Axiology of Technology – on the Way to the Human Dimension of Complex Technical Systems, Voprosy Filosofii, Vol. 10 (2022), pp. 218–222. (in Russian)
5. Tsamados, A., Aggarwal, N., Cowls, J. et al. (2022) The ethics of algorithms: key problems and solutions. AI & Society 37, 215–230. https://doi.org/10.1007/s00146-021-01154-8.
6. Petrovsky V.A. (2008) Individuality, self-regulation, harmony. Moscow Psychotherapeutic Journal, No. 1, pp. 64-90. (in Russian)
7. Espejo R, Lepskiy V. (2021) An agenda for ontological cybernetics and social responsibility, Kybernetes, Vol. 50 No. 3, 694-710.
8. Lepskiy V.E. (2019) Challenges of the future and third-order cybernetics. Designing the Future. Problems of Digital Reality: Proceedings of the 2nd International Conference (February 7-8, 2019, Moscow). — Moscow: Keldysh Institute of Applied Mathematics — pp. 64-70. (in Russian)
9. Sinnott-Armstrong, W. (2021) Consequentialism. The Stanford Encyclopedia of Philosophy (Fall 2021 Edition), URL: https://plato.stanford.edu/archives/fall2021/entries/consequentialism.
10. Briggs, R. A. (2019) Normative Theories of Rational Choice: Expected Utility. The Stanford Encyclopedia of Philosophy (Fall 2019 Edition), URL: https://plato.stanford.edu/archives/fall2019/entries/rationality-normative-utility.
11. Waller, B. N. (2005) Consider Ethics: Theory, Readings, and Contemporary Issues. New York: Pearson Longman.
12. Barrow J.M., Khandhar P.B. (2021) Deontology. StatPearls Publishing.
13. MacIntyre, A. (1985) After Virtue. London: Duckworth, 2nd Edition.
14. Jarvis Thomson, J. (1985). The Trolley Problem. Yale Law Journal. 94 (6): 1395–1415. doi:10.2307/796133.
15. Leben, D. (2018) Ethics for robots: how to design a moral algorithm. Abingdon, Oxon; New York, NY: Routledge.
16. Railton, P. (2020) Ethical Learning, Natural and Artificial. In Ethics of Artificial Intelligence, edited by S. Matthew Liao. New York: Oxford University Press. Oxford Scholarship Online. doi: 10.1093/oso/9780190905033.003.0002.
17. Artemyeva O.V. (2010) Intuitionism in Ethics (from the history of English ethical intellectualism of Modern Times). Ethical Thought. Issue 10. M., IPHRAS. pp.90-113. (in Russian)
18. Kingwell, M. (2020) Are Sentient AIs Persons? In The Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, Oxford University Press.
19. Kagan, M. S. (1997) Philosophical Theory of Value, St. Petersburg: Petropolis. (in Russian)
20. Shokhin, V. K. (2006) Philosophy of Values and Early Axiological Thought, RUDN University Publishing House, Moscow. (in Russian)
Results of the article review procedure
In accordance with the double-blind peer review policy, the identity of the reviewer is not disclosed.
The article under review is devoted to an extremely topical socio-philosophical problem: the expertise of artificial intelligence technologies, which has confronted humanity in connection with the rapid development of science. The need to turn to the ethical component of regulating the use of technologies is due, in the author's opinion, to the fact that far from all particular cases lend themselves to legal regulation, whereas moral consciousness, initially defining universal ethical norms as concrete in themselves, is able to indicate to the research community (and to society as a whole) the leitmotif of its attitude to the problems arising in connection with the development of artificial intelligence technologies. (The author, it is true, supports the view of those researchers who claim that the «price» of such universalism of ethical regulation, in comparison with legal regulation, is the renunciation of clarity and unambiguity of assessments – which, in the reviewer's opinion, is not quite accurate, since what is at stake is rather the inevitability of the subject's taking responsibility for the decisions made, which is by no means equivalent to «unclarity».) The stage of technological development at which the practice of ethically assessing the work of technical devices becomes genuinely in demand presupposes that they have attained the ability «to perform actions independently of a person». However, reaching this stage of the «subjectness of technology» presupposes transferring moral responsibility to its creator, who embeds a certain «algorithm» in its «behavior» (however broadly this concept is interpreted here). Awareness of the social significance of the problem of the emergence of new (quasi-)subjects of activity prompts the author to set the task of developing «a methodological basis for the ethical expertise of modern technologies, especially those based on the use of artificial intelligence». The author thus proceeds to describe various ways of understanding the role of the subject of activity, distinguishing among them «subject-object interaction», «subject-subject interaction», and «subject-metasubject interaction», the latter being understood as interaction with «a potentially unlimited number of other diverse subjects (individuals, groups) on a planetary scale». In this last case, technology acts not only as a means but also «acts on behalf of real, but not always known, subjects, imitating certain aspects of their activity» and influencing «the construction and maintenance of diverse social relationships» on a global scale (since even the boundaries established for the spread of technologies are inevitably removed). In the final part of the article, the author points to the connection between the approach being developed for this new problem in ethics and «classical» ethics – with the categorical imperative, according to one formulation of which a human being must be regarded not only as a means but always also as an end of an action, if that action claims to be assessed as morally significant (not exhausted by utilitarian motives). In this connection, the author speaks of the «training» of artificial intelligence as a task for the human who creates it. In this case, naturally, responsibility remains on the side of the human «teacher»: «the ethical expertise of a technology capable of acting independently of a person is possible through its 'training' to recognize situations of applying those solutions to ethical dilemmas that are provided by humans, and the evaluation of the results of such recognition».
Technology as such, the author concludes, is called upon not to solve «ethical dilemmas» but to «recognize the cases in which to apply the solutions that humans consider best». Finally, the author claims, justifiably in our view, that such an approach is applicable only to «medium-term» perspectives: when developing technics and technologies, it is impossible to foresee the remote consequences of their activity; technology must remain «under the supervision» of the human being, its creator and «teacher». From the point of view of the approach developed by the author, it is not technologies that are subject to ethical assessment but how humans would like to see themselves as they develop and improve particular technologies as means of achieving the goals of properly human development. Both the analysis of the problem itself and the conclusions drawn from it appear justified and well grounded from a theoretical point of view. The analysis is undertaken on the basis of a wide range of sources by both Russian and foreign authors. The exposition is consistent, and the author expresses his position quite definitely. The only remark (which, it is true, concerns not the topic directly considered in the article but a «parallel» one) relates to the understanding of the above-mentioned relation between ethics and law, since the position the author takes into account, according to which they differ in the breadth of their sphere of action, is far from the only one in the history of philosophy and ethics. I am convinced that the article may be of interest to a wide range of readers, and I recommend it for publication in a scientific journal.