Beyond Responsibility: New Gaps with AI Systems?
Workshop Dates
Thursday, 23 April 2026
9:30 ― 16:15
Friday, 24 April 2026
10:00 ― 15:45
Location
Nürnberger Str. 74, Erlangen
Room: CIP-Pool 1
This is an in-person workshop with limited seats.
Please register with Eleonora Catena.

Overview
Few ideas have gained as much traction in the philosophy and ethics of Artificial Intelligence as the claim that artificial systems create responsibility gaps, i.e., situations where no one can be held responsible for their morally harmful outcomes (Matthias, 2004). Following recent rapid advancements in AI, the debate has progressively expanded to analogous “gaps”, including retribution (Danaher, 2016), achievement (Danaher & Nyholm, 2021), meaningfulness (Rüther, 2024), authorship (Nawar, 2024), and testimony (Sparrow & Flenady, 2025). These concepts share a core insight: AI systems disrupt established social, epistemic, and moral properties and practices. Yet despite the widespread adoption of “gap” concepts across diverse phenomena, their precise definitions and very existence remain contested (e.g., Oimann & Tollon, 2025; Veluwenkamp, 2025), and a unified framework identifying the common characteristics of the various gaps is still missing. This is a critical shortcoming in our understanding of these gaps, since effective mitigation or compensation strategies depend on a thorough grasp of the underlying phenomenon.
The workshop “Beyond Responsibility: New Gaps with AI Systems?” aims to unify and systematize this growing debate, starting from, but moving beyond, the well-known case of responsibility gaps. It brings together researchers with expertise in the philosophical and ethical dimensions of AI to collaboratively explore the causes and defining characteristics of, as well as potential remedies for, the gaps created by artificial systems.
Program
Day 1
| 9:30 to 10:00 | Registration |
| 10:00 to 10:30 | Welcome and Opening |
| 10:30 to 11:15 | Sven Nyholm ― Authorless Texts? Generative AI, Authorship, and the Future of Writing |
| 11:15 to 11:30 | Coffee Break |
| 11:30 to 12:15 | Ann-Katrien Oimann ― Seeing the forest for the trees: responsibility beyond the ‘gap’ in autonomous warfare |
| 12:15 to 14:15 | Lunch Break |
| 14:15 to 15:00 | Giulio Mecacci & Lotte van Elteren ― From Shared Control to Shared Confusion. Delusions of Agency and Responsibility Gaps in Human-AI Systems |
| 15:00 to 15:15 | Coffee Break |
| 15:15 to 16:15 | Discussion & Closing |
| 19:00 | Dinner |
Day 2
| 10:00 to 10:30 | Welcome |
| 10:30 to 11:15 | Eleonora Catena & Björn Lundgren ― Defining AI-based Autonomy Gaps |
| 11:15 to 11:30 | Coffee Break |
| 11:30 to 12:15 | Daniela Vacek ― AI achievement gaps: how (not) to deal with them |
| 12:15 to 14:15 | Lunch Break |
| 14:15 to 15:15 | Panel Discussion |
| 15:15 to 15:45 | Closing |
Abstracts
Authorless Texts? Generative AI, Authorship, and the Future of Writing
My presentation will be about how to think about – and whether we need to rethink – the idea of authorship as many people (students, researchers, artists, novelists, etc.) start to rely more and more heavily on large language model (LLM)-based generative AI technologies when they produce texts. In the small but growing literature on this topic, the following views are all represented: (1) LLMs can be authors of texts; (2) LLMs can be co-authors of texts; (3) when people use LLMs, they are always the authors of the resulting texts; (4) when people use LLMs, they are co-authors, but not sole authors; and (5) texts whose production has relied heavily on LLMs should be considered as authorless texts. I will discuss these different views, doing so with reference to the ideas that authors are accountable for their texts, that writing is a form of agency, and that if texts are good, authors deserve credit for their texts.
Seeing the forest for the trees: responsibility beyond the ‘gap’ in autonomous warfare
The increasing deployment of artificial intelligence (AI) across domains, including the military, has intensified debates about the responsible use of autonomous systems. Central to these discussions is the question of moral responsibility: when highly autonomous systems make decisions with grave consequences, who—if anyone—can be held responsible? Some scholars argue that such systems generate a “responsibility gap,” in which traditional frameworks of moral and legal accountability appear to break down.
This presentation offers a critical re-examination of the responsibility gap debate. Rather than treating the problem as a wholly novel consequence of technological advancement, it situates contemporary discussions within the longer history of responsibility theory. By disentangling the existing literature, the talk identifies three key fault lines: between those who affirm or deny the existence of responsibility gaps; those who view such gaps as a fundamentally new moral problem versus those who do not; and those who are optimistic about closing these gaps versus those who regard them as unsolvable. Building on this analysis, the presentation argues that much of the apparent novelty of the responsibility gap dissolves when examined through the lens of longstanding philosophical questions—particularly concerning the role of reactive attitudes and the metaphysical grounding of responsibility. On this view, the challenge posed by autonomous systems lies less in the technology itself and more in tensions within our existing conceptual frameworks. Ultimately, the presentation argues that progress in the field of responsibility in the age of autonomous systems does not depend on abandoning existing theories, but on clarifying and, where necessary, revising the conceptual foundations of traditional theories of responsibility.
From Shared Control to Shared Confusion. Delusions of Agency and Responsibility Gaps in Human-AI Systems
Shared control between humans and automated intelligent systems can give rise to responsibility gaps: situations in which it is unclear to whom responsibility should be assigned. Current debates often focus on issues of distribution: who had control, who had knowledge, and how systems can be made explainable or contestable. Recent accounts of so-called “Meaningful Human Control” have tried to redefine forms of control based on normative requirements that would at least partially close those gaps. However, we argue that even such theories fall short of accounting for the psychological experience of agency and its effect on assumed and attributed responsibility. In fact, the interaction between human controllers and AI-automated systems can affect a controller’s sense of agency (SoA): the experience of being in control of one’s actions and their effects on the surrounding world. We show that in shared-control contexts, a controller’s SoA can lead to unfair (excessive or insufficient) responsibility (self-)attributions by further aggravating five distinct types of responsibility gaps: culpability, moral accountability, public accountability, active responsibility, and vulnerability. As a consequence, it becomes harder for both individuals and institutions to meaningfully assign and assume responsibility. We conclude that SoA must be studied and included in a normative set of requirements for meaningful human control and responsibility. To facilitate this, we outline a preliminary set of requirements, in the form of design-oriented and institutional directions, for supporting calibrated agency in collaborative AI environments.
Defining AI-based Autonomy Gaps
Starting with responsibility gaps, there has been an evolving discussion around various gaps related to AI technology, concerning control, meaningfulness, retribution, achievement, and authorship. This raises the need to define AI-based gaps, more generally, and to identify other kinds of gaps, more specifically. Among the growing set of AI-based gaps, there is one case that has remained mostly neglected—namely, so-called “autonomy gaps.” Given the importance of human autonomy in AI ethics, autonomy gaps demand further scrutiny. This paper aims to remedy these shortcomings by: First, providing a definitional schema of AI-based gaps. Second, proposing a definition of autonomy gaps based on the proposed definitional schema. Third, showcasing the existence of autonomy gaps, while distinguishing them from other related impacts of AI on human autonomy. Fourth and finally, further motivating autonomy gaps’ conceptual and ethical importance for key AI ethics debates. In so doing, this paper also contributes to clarifying the challenges of AI for human autonomy, as well as the emergence of other AI-based gaps.
AI achievement gaps: how (not) to deal with them
In this talk, I will present the problem of AI achievement gaps and illustrate some of its troubling consequences (Danaher & Nyholm 2021). I will examine some problematic ways of dealing with this problem: analysing the problem away (Tigard 2021), ignoring it, or filling the gap with responsibility of AI systems (Kieval 2024, criticised in Vacek 2025a). I will then propose some promising ways forward: filling it with collective, vicarious, or even proxy responsibility of human stakeholders (the first two are defended in Vacek 2025b).
References
Danaher, J., & Nyholm, S. (2021). Automation, work and the achievement gap. AI and Ethics, 1(3), 227–237.
Kieval, P. H. (2024). Artificial achievements. Analysis, 84(1), 32–41.
Tigard, D. W. (2021). Workplace automation without achievement gaps: A reply to Danaher and Nyholm. AI and Ethics, 1(4), 611–617.
Vacek, D. (2025a). Against artificial achievements. Analysis, 85(3), 690–698.
Vacek, D. (2025b). Meeting the AI achievement challenge: collective and vicarious achievements. Ethics and Information Technology, 27(2), 25.
Organization
This event is organized by Eleonora Catena, Miriam Gorr and Eleftheria Tsouika.
Support
The event is co-sponsored by the Society for Philosophy and Technology (SPT).
Any questions?
Reach out to Eleonora Catena.