% IMPORTANT: The following is UTF-8 encoded. This means that in the presence
% of non-ASCII characters, it will not work with BibTeX 0.99 or older.
% Instead, you should use an up-to-date BibTeX implementation like “bibtex8” or
% “biber”.
@INPROCEEDINGS{Heinrichs:1025942,
author = {Heinrichs, Jan-Hendrik},
title = {{AMA}s, function creep and moral overburdening},
reportid = {FZJ-2024-03220},
year = {2023},
abstract = {Designing
artificially intelligent systems to function as moral agents
has been suggested as either a necessary or at least the
most effective way to counter the risks inherent in
intelligent automation [2]. This suggestion to generate
so-called Artificial Moral Agents (AMAs) has received ample
support [see the contributions in 7] as well as critique
[9]. What has yet to be sufficiently taken into account in
the discussion about AMAs is their effect on the moral
discourse itself. Some important considerations in this
regard have been presented by Shannon Vallor [8] and by
Sherry Turkle [3] under the heading of moral deskilling. The
core claim of the moral deskilling literature is that the
employment of artificially intelligent systems can change
spheres of activity which used to require moral skills in
such a way that these skills lose their relevance and thus
cease to be practiced. This contribution will argue that
deskilling is just one among several changes in the moral
landscape that might accompany the development and
dissemination of AMAs. It will be argued that there are two
complementary trends which AMAs might initiate, both of
which might fall under the heading of function creep as
defined by Koops: “Based on this, function creep can be
defined as an imperceptibly transformative and therewith
contestable change in a data-processing system’s proper
activity.” [4, 10]. These developments are a) the
moralization of spheres of action previously subject to few or
no moral constraints, and b) changing – maybe rising –
and increasingly complex standards of moral conduct across
spheres of action. a) The former trend – the moralization of
additional spheres of action – occurs when, in the process
of (partially) automating a certain task, moral precautions
are implemented that were not part of previous
considerations. A common example is the explicit, prior
weighing of lives which automated driving systems are
supposed to implement, but which typically does not play a
role in a real driver’s education, much less in their
reaction during an accident. Automation – be it of
cars, government services or any other sphere of activity
– typically requires actively revising or maintaining the
structures of a given process and therefore generates the
option to include moral considerations where there
previously were none or few. The availability of established
AMA-techniques is likely to influence stakeholders to
include such moral considerations, whether for product
safety reasons or for mere commercial advantage. b) The
latter trend – the change of standards of moral conduct
– is an effect of the requirements of intelligent
automation on the one hand and of professionalism on the
other. Compared to humans, AMAs and their algorithmic
processes employ different mechanisms of behavioural control
in general and of observing moral constraints in particular
[6]. Thus, even if automation of tasks tries to mimic
pre-established moral behaviour and rules, there will be
differences in the representation, interdependence, and
acceptable ambiguity of moral categories in the human
original and the AMA implementation. Furthermore, the
implementation of moral constraints in automated systems
is already a professional field which tries to live up to
extremely high standards – as exemplified by the
increasingly complex approaches to algorithmic fairness
[11]. Accordingly, it is to be expected that the field will
aim to implement high moral standards in its products,
sometimes beyond what would be expected of human agents in
the same sphere of activity. While both trends seem to be
positive developments at first glance, there is a relevant risk
that changing and complex moral standards in more and more
spheres of action overburden an increasing number of people,
making them increasingly dependent on external guidance or
setting them up for moral failure and its consequences. This
development combines two effects which have previously been
identified in the literature. On the one hand, it seems to be a
case of what Frischmann and Selinger called ‘techno-social
engineering creep’ [4], that is, the detrimental change of
our collective social infrastructure and standards through
individual, seemingly rational choices. By individually
implementing AMA-algorithms in appliances in several spheres
of activity for reasons of safety (or market share), we
change the moral landscape throughout. On the other hand, it
is similar to what Daniel Wikler has identified as one of
the threats of human enhancement, namely that standards
suited for highly functioning individuals will overburden
the rest of the community [5]. The present trends differ from
Wikler’s analysis insofar as they might affect all human
agents and not just those who forgo some form of cognitive
improvement. Clearly, the proliferation and change of moral
standards do not merely carry risks. They have the potential
to generate relevant social benefits, which need to be
considered in a balanced account. However, a purely
consequentialist analysis of these trends would lack regard
for the important dimension of procedural justification.
While changes in the moral landscape often occur without
explicit collective attention or deliberation,
there seem to be minimal conditions of legitimacy, such
as not providing any affected party with good reasons to
withhold their consent [1]. The current contribution will
sketch the consequentialist and procedural constraints on
the trends identified above and try to chart a path between
relinquishing the design of AMAs and alienating human agents
from moral discourse.
References:
1. Scanlon, T., What we owe to each other. 1998, Cambridge, Mass.: Belknap Press of Harvard University Press. ix, 420 p.
2. Wallach, W. and C. Allen, Moral machines: Teaching robots right from wrong. 2009, Oxford; New York: Oxford University Press. xi, 275 p.
3. Turkle, S., Alone Together: Why We Expect More from Technology and Less from Each Other. 2011, New York: Basic Books.
4. Frischmann, B. and E. Selinger, Re-Engineering Humanity. 2018, Cambridge: Cambridge University Press.
5. Wikler, D., Paternalism in the Age of Cognitive Enhancement: Do Civil Liberties Presuppose Roughly Equal Mental Ability?, in Human Enhancement, J. Savulescu and N. Bostrom, Editors. 2010, Oxford University Press: Oxford / New York.
6. Liu, Y., et al., Artificial Moral Advisors: A New Perspective from Moral Psychology, in Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society. 2022, Association for Computing Machinery: Oxford, United Kingdom. p. 436–445.
7. Anderson, M. and S.L. Anderson, eds., Machine Ethics. 2011, Cambridge University Press: Cambridge.
8. Vallor, S., Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character. Philosophy \& Technology, 2015. 28(1): p. 107-124.
9. van Wynsberghe, A. and S. Robbins, Critiquing the Reasons for Making Artificial Moral Agents. Science and Engineering Ethics, 2019. 25(3): p. 719-735.
10. Koops, B.-J., The Concept of Function Creep. Management of Innovation eJournal, 2020.
11. Fazelpour, S. and D. Danks, Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 2021. 16(8): p. e12760.},
month = {Dec},
date = {2023-12-15},
organization = {5th Conference on "Philosophy of
Artificial Intelligence" PhAI 2023,
Erlangen (Germany), 15 Dec 2023 - 16
Dec 2023},
subtyp = {Invited},
cin = {INM-7},
cid = {I:(DE-Juel1)INM-7-20090406},
pnm = {5255 - Neuroethics and Ethics of Information (POF4-525)},
pid = {G:(DE-HGF)POF4-5255},
typ = {PUB:(DE-HGF)6},
url = {https://juser.fz-juelich.de/record/1025942},
}