Normativity and AI moral agency
Journal Article | FZJ-2024-05644
2025
Springer, Cham
Please use a persistent identifier in citations: doi:10.1007/s43681-024-00566-8 · doi:10.34734/FZJ-2024-05644
Abstract: The meanings of the concepts of moral agency in application to AI technologies differ vastly from the ones we use for human agents. Minimal definitions of AI moral agency are often connected with other normative agency-related concepts, such as rationality or intelligence, autonomy, or responsibility. This paper discusses the problematic application of minimal concepts of moral agency to AI. I explore why any comprehensive account of AI moral agency has to consider the interconnections to other normative agency-related concepts and beware of four basic detrimental mistakes in the current debate. The results of the analysis are: (1) speaking about AI agency may lead to serious demarcation problems and confusing assumptions about the abilities and prospects of AI technologies; (2) the talk of AI moral agency is based on confusing assumptions and turns out to be senseless in the current prevalent versions. As one possible solution, I propose to replace the concept of AI agency with the concept of AI automated performance (AIAP).

Keywords: AI agency · AI moral agency · Artificial moral agents · Philosophy of artificial intelligence