Workshop on AI & Mental Health


Workshop on AI, Technology and Mental Health

Workshop Dates & Location

11 November 2025 (9:00 to 18:15 CET)
12 November 2025 (9:00 to 16:30 CET)

Orangerie, Wasserturmstraße 3, 91054 Erlangen

Join Us!

Registration is free, but seats are limited.
Attendance is in person only.

Please email ian.george.robertson@fau.de to register!

Overview

Advanced technologies, particularly those based on AI systems, are increasingly being integrated into our practices of characterising, diagnosing and treating mental health conditions. Importantly, such technologies are thought to have a significant impact on our mental health, influencing our mental wellbeing in various ways. This two-day, in-person workshop brings together leading researchers to explore and analyse the philosophical ramifications of these issues.

Program

Day 1: 11 November 2025

  • 9:00–9:30 | Welcome and Registration 
  • 9:30–9:45 | Introduction | Ian Robertson, Benedetta Cogo and Marco Facchin
  • 9:45–10:45 | Keynote Talk | Joel Krueger
    AI, autism, and the architecture of authenticity
  • 10:45–11:00 | Coffee
  • 11:00–12:00 | Invited talk | Christopher Poppe
    tba
  • 12:00–13:30 | Lunch (not covered)
  • 13:30–14:30 | Contributed talk | Beşir Özgür Nayır
    The Use of Generative AI Chatbots as Therapy Bots: Strict Regulation Needed
  • 14:30–14:45 | Coffee
  • 14:45–15:45 | Invited talk | Benedetta Cogo
    Enhanced Narrative Therapy: more than a scaffold
  • 15:45–16:00 | Coffee
  • 16:00–17:00 | Contributed talk | Àger Pérez Casanovas
    Self-Diagnosis and Recognition: Can Narrow AI Support Well-Being in Anorexia Nervosa Care? 
  • 17:00–17:15 | Coffee
  • 17:15–18:15 | Keynote talk | Kathleen Murphy-Hollies
    AI does not cause or exacerbate delusions

Day 2: 12 November 2025

  • 9:30–10:30 | Keynote talk | Zuzanna Rucinska
    Chatbot Fictionalism vs. Chatbot Enactivism: neither believing nor make-believing
  • 10:30–10:45 | Coffee
  • 10:45–11:45 | Invited talk | Marco Facchin
    Making Emotional Transparency Transparent
  • 11:45–13:00 | Lunch
  • 13:00–14:00 | Contributed talk | Jakob Ohlhorst
    Folie à 1: Artificially induced delusion
  • 14:00–14:15 | Coffee
  • 14:15–15:15 | Invited talk | Ian Robertson
    What Does Enactive Psychiatry Mean for AI – and Vice Versa?
  • 15:15–15:30 | Coffee
  • 15:30–16:30 | Keynote talk (Online) | Havi Carel
    tba  

Abstracts

All abstracts can be found in the list below.

AI, autism, and the architecture of authenticity

AI-powered chatbots are increasingly sought for social connection, with the market expected to hit 972.1 billion USD by 2035. Young adults (18–35) are the leading users, but adoption is growing among other groups—including autistic individuals. Many find digital connections offer validation and non-judgemental support, reducing the pressure of meeting neurotypical expectations and boosting confidence in real-world interactions.

Dismissing AI companionship as somehow inauthentic is tempting. But is this necessarily so? In this talk, I look at how autistic people use AI chatbots for social connection, and consider some unique benefits related to authenticity, self-expression, and “unmasking”. Using discussions of extended minds and extended virtues, I argue that while AI bots lack inherent authenticity, they can nevertheless form part of an extended cognitive and affective system with users that fosters authenticity. AI bots can contribute to an “architecture of authenticity” enabling autistic individuals to realise valuable traits, capacities, and forms of self-expression that might not otherwise emerge. And these effects may, in turn, have a positive effect on IRL relationships. So, while there are substantive worries here in need of further discussion, we should not be overly hasty in labelling these connections as harmful or lacking value.

tba

The Use of Generative AI Chatbots as Therapy Bots: Strict Regulation Needed

Digital mental health technologies (DMHTs) are software and/or hardware products designed to support human mental well-being. These include, but are not limited to, wellness applications, wearable devices, and therapy chatbots. Software products that are either intended to, or capable of, functioning as AI-powered therapy chatbots (AITCs)—such as ChatGPT and Character AI—constitute a distinct category of DMHTs that has become increasingly popular among individuals seeking psychological support. Although these AI chatbots were not originally designed for mental health purposes, they can emulate the behavior of a therapist and are already being used in this capacity by many users. Such chatbots are easily accessible online and allow users to engage in dialogues that closely resemble psychotherapeutic interactions.

This paper ethically examines the use of generative AI chatbots as therapy bots. While acknowledging the potential benefits of such applications, it identifies possible ethical challenges with reference to the current literature. It then shows that the use of generative AI chatbots as therapy bots remains underregulated under the EU AI Act. The paper argues that further regulation is needed and proposes a stringent-standards approach to governance. It advocates for a hybrid model that combines principle-based and evidence-based regulation. Finally, it evaluates possible downsides of this model and concludes that, despite its limitations, it is still preferable to a laissez-faire approach.

Recent research suggests that the use of chatbots as therapy bots holds considerable promise for assisting individuals in need of mental health support (Bhatt, 2024; Cameron et al., 2025; Groot et al., 2023; Ilola et al., 2024; Magid et al., 2024; Wani et al., 2024). According to WHO, nearly 1 in 7 people in the world live with a mental disorder, yet most people do not have access to effective care (World Health Organization, 2025). Given their demonstrated potential, AI chatbots appear promising for enhancing global mental well-being. They are easily accessible to anyone with adequate internet connectivity. If proven to be safe and effective, future advancements in generative AI chatbots could provide psychological support to millions of people and help alleviate suffering across multiple levels.

On the other hand, such technologies (hereafter referred to as AITCs) may give rise to significant ethical concerns. These concerns are primarily related to: (1) privacy and data security, as AITCs operate on servers where all user data are stored and can, in principle, be accessed or extracted; (2) algorithmic bias, since the training data used by AITCs may be insufficient to accurately recognize the mental health issues experienced by certain demographic groups; (3) professional boundaries, as AITCs may fail to uphold the professional norms and ethical standards expected of trained and licensed psychotherapists; and (4) empirical and scientific validity, given that the underlying AI models may lack a robust empirical foundation or fail to meet established scientific standards (Bloch-Atefi, 2025; Palmer & Schwan, 2025). These issues are ethically concerning, as they may directly or indirectly cause harm if not adequately addressed.

Addressing these concerns necessitates the development of well-designed and comprehensive regulatory frameworks. The EU AI Act prohibits AI systems “deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm” (EU Artificial Intelligence Act, 2025). The most popular generative AI chatbots, such as ChatGPT and Character AI, however, are neither prohibited nor classified as high-risk systems—a designation that would entail stricter requirements for developers. Developers of General Purpose AI models (or Large Language Models in this context) are still required to meet certain obligations, including the preparation of technical documentation, compliance with the Copyright Directive, and the publication of a detailed summary of the training data (Artificial Intelligence Act, 2025). However, since such technologies are not recognized as medical devices, they also fall outside the scope of the EU Medical Device Regulation. Consequently, the current regulatory framework does not specifically address the use of general-purpose AI systems as therapy bots. Palmer & Schwan (2025) conduct a similar discussion regarding current regulations in the United States, comparing and contrasting the laissez-faire and stringent-standards approaches to future regulation.

This paper builds upon that discussion and argues in favor of the stringent-standards approach, maintaining that: (1) the use of generative AI chatbots as AITCs—if such models are not designed, trained, and tested under the supervision of qualified specialists—may result in non-negligible harm; and (2) since the market alone cannot be expected to promote such lengthy and costly procedures, strict regulation is necessary to ensure their implementation. The paper then elaborates on how a more effective regulatory framework could be developed by addressing both pre-deployment and post-deployment procedures. It examines the benefits and challenges of adopting principle-based and evidence-based approaches within a stringent-standards framework. The principle-based approach emphasizes the importance of design principles in policymaking, whereas the evidence-based approach prioritizes empirical validation.

The paper ultimately proposes a hybrid model that seeks to establish an equilibrium between principles and evidence. This hybrid model would enable policymakers and developers to reconsider the limits of certain design principles—such as usability and accessibility—in light of emerging empirical findings. In conclusion, the paper endorses the stringent-standards approach, under which, for example, no generative AI chatbot would be permitted to simulate the role of a psychotherapist unless the required regulatory and professional procedures have been fulfilled. It also examines potential drawbacks of this approach, such as its possible effects on market dynamics and the risk of losing technology developers to countries with more lenient regulatory environments. Nonetheless, the paper contends that the stringent-standards approach proposed here remains preferable to a laissez-faire model.

Enhanced Narrative Therapy: more than a scaffold

Narrative Therapy (NT) is a therapeutic practice based on the idea that mental health can be improved by assisting the person to become more skilled in narrating their lives, helping them to reframe and re-author the stories they tell about themselves (White & Epston 1990). Studies are now providing evidence of the advantages that the deployment of AI chatbots can have for NT – for example, in terms of offering a solid structure for meaning-making and clarificatory purposes, or for the nonjudgmental safe space that the platform represents (Chan et al. 2024). These findings suggest that implementing AI in NT is something worth considering. Yet not everyone recognises the value of these results, and of the practice itself. The stakes are real: in Australia, NT is not covered as part of the public healthcare system (Medicare) for the whole population. Why so?

A possible answer comes from interrogating the general, mainstream medical model approach to psychiatry (MMP). In this view, psychiatry is seen as a branch of medicine that identifies mental disorders with diseases of the organism (Kraepelin 1919, Guze 1992, Andreasen 1997). Mainstream approaches of the MMP largely identify mental disorders with brain diseases, and the dysfunction is located in the internal neurocognitive system of the individual. External influences such as contextual factors, sociohistorical backgrounds, and, importantly, personal narratives are accommodated accordingly, often playing a mere ancillary role for the exercise of cognition.

This talk aims at providing philosophical reasons for reconsidering the role of narrative practices, both traditional and AI-based, and attributing them an appropriate space when theorising about psychiatry. For this purpose, I will explore an influential version of the MMP, challenging the attribution of a mere ancillary role to narrative practices, and presenting addiction as a case study. I develop my argument in three steps.

First, I present an influential and sophisticated version of the MMP (Weinrabe and Murphy Forthcoming, Murphy 2006), which locates mental dysfunctions in a malfunctioning of what is metaphorically called a person’s “plumbing” – where the metaphor alludes to the individual’s internal neurocognitive machinery. On the other hand, external factors such as sociocultural structures are assimilated to “scaffolding” – that plays a role in an individual’s mental health only insofar as it impacts the way the internal mechanisms perform.

Second, I highlight an important internalist assumption about the location of cognition in the MMP just presented – an assumption that precipitates and crystallises in the plumbing-scaffolding metaphor. I do so by relying on Radical Enactivism (REC) as a framework for cognition (Hutto & Myin 2013, 2017), implemented with the Narrative Practice Hypothesis (NPH) (Hutto 2008, Hutto and Gallagher 2017), to show that the metaphor mischaracterises the nature of cognition and the role of narrative practices in a person’s mental health. I also demonstrate that the REC+NPH framework opens up new and more promising possibilities to think about the nature and cognitive basis of mental disorders.

Third, I illustrate the limits of the version of the MMP explored above by presenting addiction as a case study. I argue that the plumbing-scaffolding metaphor is unhelpful for accounting for the case of addiction and risks downplaying the role that narrative practices play even in the presence of those significant neural adaptations that characterise the condition. On the other hand, REC implemented with the NPH allows for a comprehensive understanding of addiction which, while preserving the relevance of the brain in the development and understanding of the condition, also makes room for narrative practices to occupy a determining role in themselves. (Note that this talk is based on an existing paper that I co-authored with Dan Hutto, currently under review.)

Self-Diagnosis and Recognition: Can Narrow AI Support Well-Being in Anorexia Nervosa Care?

The rapid spread of AI chatbots brings a promise of global accessibility to healthcare support in areas and populations that have long been underattended due to socioeconomic and geographic gaps. In many ways, it promises to disrupt the established power relationships and draws a possible path towards patient-centered care where information is available to the different actors engaged in the medical industrial complex (Mia Mingus, 2011) in a much more equitable distribution. However, these new technological possibilities do not come without risks. In particular, when it comes to mental health, digital technologies are increasingly shaping health care, but their impact on well-being and self-conception remains underexplored.

Drawing on phenomenological approaches to illness, particularly Havi Carel’s work on epistemic injustice and the lived experience of health, this presentation examines how narrow AI chat tools—operating in controlled clinical environments—can support patients with anorexia nervosa (AN) in managing the Anorexic Voice (AV). Patients with AN often experience epistemic injustice: their self-identified knowledge of their condition is discounted, and even in clinical contexts they find themselves lacking the hermeneutical tools to express their inner experiences. Their internal struggles, particularly with the AV, are misunderstood or overlooked, because the current structure focuses on weight restoration and behavioral change. Moreover, this focus often disregards the intersectional dimensions of invisibility, including gendered and classed experiences, which shape the AV experience and make it difficult for clinical settings to offer a one-size-fits-all psychotherapeutic treatment for the AN patients.

We will argue that narrow AI tools can be designed to recognize the operation of the AV, provide supportive feedback that does not reinforce harmful directives, and encourage self-directed practices of care. Being able to analyze big data and take into account nuanced changes in the discourse, such technologies have the potential to: 1. promote well-being by validating lived experience and supporting autonomous decision-making; 2. influence self-conception by helping patients differentiate between their own voice and the AV, fostering self-knowledge and reflection. This presentation critically evaluates the conceptual potential and limitations of AI interventions in AN, exploring how thoughtful design can avoid reinforcing epistemic hierarchies while enhancing patients’ capacity for self-care and recognition. By situating AI within the frameworks of Critical Disability Studies, phenomenology, and feminist/queer approaches to disability, the talk offers insights into ethical, well-being-oriented digital technology design in mental health care.

AI does not cause or exacerbate delusions

Discussions of AI psychosis in popular media raise the concern that use of AI chatbots can cause, or at least exacerbate through encouragement, delusions. These include beliefs that the AI has ‘chosen’ them in some way, that the user has superhuman intelligence, or that agents come to love the AI chatbot. In this talk, I push back against both claims that AI chatbots (i) cause or (ii) exacerbate delusions in any significant way, even if they may play a role in the development of the specific content of the delusion. This latter possibility, however, is not something unique to AI; it is instead something which has already been recognised: popular fictions and media can influence the content of delusions. Aiming to tackle AI psychosis by changing the AI models is, therefore, going to be more limited in its efficacy than we might initially suppose.

Chatbot Fictionalism vs. Chatbot Enactivism: neither believing nor make-believing

Recently, chatbot fictionalists have maintained that interactions with chatbots (human-AI interactions) are games of make-believe (à la Walton, 1987), in which anthropomorphism occurs within the scope of an imaginative project, just as it does when we engage with fictional characters (Mallory, 2023; Krueger & Osler, 2022; Krueger and Roberts, 2024). However, as Friend and Goffin (2025), rightly in my view, note, “nothing in Walton’s account of games of make-believe requires role-playing of this sort. For Walton, being “fictional” does not mean being unreal” (p. 11). In this talk, I will sketch an enactivist alternative account of chatbot interactions, which builds on the idea that we can have real engagements with virtual entities (Rucinska & van Es, pending) and make sense jointly with the chatbot, in order to argue that there is no fictionalist ‘make-believe attitude’ we need to have when engaging with chatbots.

Making Emotional Transparency Transparent

We increasingly rely on AI-powered agents to regulate our affective life. Crucially, in order for these affective artificial agents to perform such a task, they must become emotionally transparent: the user must encounter them as if they really harbor mental states, including relevant affective states. How should we understand this phenomenon? What features of artificial agents are responsible for it? And what’s the epistemological standing of the user “succumbing” to it – do they really believe artificial agents have mental states?

My talk shall sketch an answer to these questions. I shall thus situate the concept of emotional transparency in the current debate over various forms of transparency, arguing that it is a genuinely distinct kind of procedural transparency. I will then argue that emotional transparency is a relational phenomenon, dependent on material features of both the artificial agent and the user. Lastly, I will claim that, epistemically speaking, the user succumbing to emotional transparency does not really believe that the artificial agent has relevant mental states. But, contra the rising fictionalist understanding of AI-powered systems, users are not pretending either. Rather, they a-lieve, without believing, that artificial agents have mental states. Or so, at least, I shall argue. 

This work has been co-authored with Dr. Giacomo Zanotti (Milan Politecnico).

Folie à 1: Artificially induced delusion

Users’ trust in large language models (LLMs) is remarkable. For instance, people uncritically offer LLM-generated answers without comment. According to Pew (McClain, 2025), use of ChatGPT to learn new things has grown from 8% to 26% of US inhabitants between 2023 and 2025. People use ChatGPT for advice and even therapy (Perez, 2025). In sum, it appears that ChatGPT is trusted like an epistemic authority by many people. This effect is explained, among other things, by the great fluency with which LLMs offer answers. LLMs pass the Turing test (Jones & Bergen, 2025), while far less sophisticated chatbots already produce the ELIZA-effect, whereby the chatbot is treated like a person given its person-like linguistic behaviour.

Recently, reports of people losing touch with reality because they trust and interact with LLMs too much have multiplied. Rolling Stone (Klee, 2025) gathered reports of people whose spouses, parents, and children became estranged. The estrangements were the result of intensive and extensive communications with LLMs, which told their users that they were communicating with spiritual beings and similar things, which the users believed. The New York Times (Hill, 2025) interviewed people who had started believing that they lived in a simulation because ChatGPT told them that there were “glitches in reality”, or that they were talking to spiritual entities through ChatGPT. These beliefs had detrimental effects: one person reported that, at the LLM’s prompting, he cut off all social contact, stopped taking his medication, and started experimenting with the drug ketamine.

Problems like these are clear instances of psychotic delusions (WHO, 2018). Delusions are fixed beliefs held with certainty but in conflict with the available evidence and with the reports of the patient’s social contacts. Classic types of cases include: delusions of persecution, marked by the certainty that someone or some group is following the person; delusions of infestation, characterised by unfounded beliefs in a parasitic infection, typically on the skin; and delusions of reference, which consist in the conviction that ordinary occurrences like the arrangement of objects are meaningful signs and portents of great significance. The delusion that we live in a simulation, as can be seen from “glitches in reality”, was a type of delusion of reference. Psychiatrists have begun paying attention to the issue, principally pointing to LLMs’ sycophancy and to human biases (Dohnány et al., 2025; Morrin et al., 2025).

What receives comparatively little attention, however, is that delusions caused by LLMs resemble a well-known category of delusion called induced delusion, or more commonly folie à deux. Folie à deux designates cases of pairs of patients who live in a very close relationship and have very little other social contact. These pairs, typically a parent and a child or partners, exhibit the same delusional beliefs. For example, both patients might report believing that their skin is infested with maggots, in a shared delusion of infestation. Usually, a folie à deux arises when one member of the pair, the “inducer”, suffers from delusions and keeps communicating them to the other member, the “acceptor”. At a certain point the acceptor will adopt the delusion, especially when the pair is highly isolated and the acceptor is a child. In the case of LLM-induced delusion, the chatbot would be the inducer and the user the acceptor of the delusion. Induced delusion is no longer considered to be a self-standing syndrome. The reason is that acceptors arguably always exhibit some pre-existing psychosis or predisposition to mental illness, such that the delusion is better explained as a symptom of that mental illness. As a result, it has been struck from the latest diagnostic manuals, the DSM-5 and the ICD-11 (Biedermann & Fleischhacker, 2016).

The same point can arguably also be made for LLM-induced delusions. Arguably, all individuals suffering an LLM-induced delusion have some latent predisposition or pre-existing psychological problem that makes them susceptible to the induction of delusions. Still, these cases show that the specific delusional ideas can be introduced and explained by the exchanges with the LLMs – without them, no delusion might have occurred. Notably, no ordinary social contacts – conversations with friends, acquaintances, and strangers – would have such effects. This is significant, because the specific content of a delusion can have grave consequences. Consider how a delusion of persecution may lead people to refuse food and drink because they fear being poisoned. A significant difference from classical induced delusion is that the LLM, as the inducer, does not itself have any delusions, because LLMs do not have the psychological foundations for having a delusion.

Consequently, an artificially induced delusion is rather a folie à un: I take it that the user is both the inducer and the acceptor. The LLM simply amplifies the delusional aspects of the inputs; it is a kind of reinforcing mirror (cf. Dohnány et al., 2025). Differently from a classical folie à deux, where the acceptor indeed trusts the inducer, there is only an appearance of trust in the case of LLM-induced delusion. The ELIZA-effect means that it appears to the LLM user as if they trusted the chatbot, but in fact this is an illusion. While they psychologically speaking “trust” the LLM, socially and normatively speaking there is nobody to trust except the users themselves. As a consequence, the only trust relation in the case of folie à un is that of LLM users to themselves, to use the LLM appropriately and competently. Because of the ELIZA-effect, LLM users are not aware of what they are doing when they “trust” LLMs. As a result, artificially induced delusion is a kind of self-induction. Note, however, that this does not entail any direct blame on LLM users, beyond the blame for interacting with a system whose risks are not yet well known.

tba

Any questions?

Reach out to Ian Robertson.
