Research Fellows
Dr. Brandon Ashby (SENSOR Project)

Brandon Ashby works in the philosophy of perception, consciousness, and cognitive science. He is interested in the structure of perceptual consciousness, broadly construed. He argues that one sort of structure that consciousness has is a grammatical or syntactic structure. His work is informed by research in cognitive neuroscience and machine learning. He also maintains interests in core cognition, primatology, the epistemology of value, and the philosophy of disability. He is currently part of the SENSOR project on sensory extension.
Dr. Björn Lundgren

Björn Lundgren received his PhD in 2018 from the Royal Institute of Technology (Stockholm, Sweden) and has since worked at various institutions, including the Institute for Futures Studies, Umeå University, and Utrecht University, on projects ranging from information ethics and risk to the ethics of artificial intelligence and methodological questions about doing philosophy and ethics, in particular the ethics of technology. At PAIR, Lundgren is continuing his work on AI ethics, broadly speaking, with a focus on aspects relating to information ethics and methods of doing AI ethics.
- Lundgren, Björn. (2025). Teaching Trade-offs in a Digital Ethics Course. Teaching Ethics, 24(2), 257–265. https://doi.org/10.5840/tej202579163
- Lundgren, Björn & Stefánsson, H. Orri. (2025). Can the Normic de minimis Expected Utility Theory save the de minimis Principle? Erkenntnis 90, 1255–1263. https://doi.org/10.1007/s10670-023-00751-x
- Lundgren, Björn. (2025). Can Deepfakes Violate an Individual’s Moral Right to Privacy? Ethical Theory and Moral Practice. https://doi.org/10.1007/s10677-025-10514-y
- Lundgren, Björn. (2025). How social should AI be? Erkenntnis. https://doi.org/10.1007/s10670-025-01005-8
- Marchiori, Samuela; Hopster, Jeroen K.; Puzio, Anna; van Riemsdijk, M. Birna; Kraaijeveld, Steven R.; Lundgren, Björn; Viehoff, Juri; and Frank, Lily E. (2025). A Social Disruptiveness-Based Approach to AI Governance: Complementing the Risk-Based Approach of the AI Act. Science and Engineering Ethics 31 (25). https://doi.org/10.1007/s11948-025-00545-0
- Lundgren, Björn (2025). On the Limits of the Data Economy: The Case of Autonomous Vehicles. Science and Engineering Ethics 31 (16). https://doi.org/10.1007/s11948-025-00540-5
- Lundgren, Björn (2025). God and the Possibility of a Moral Right to Privacy. SOPHIA 64, 339–344. https://doi.org/10.1007/s11841-024-01057-3
- Lundgren, Björn (2025). Hobbes’ Ship of Theseus: On the Limits of Surviving a Gradual Replacement of Parts. Theoria, e70013. https://doi.org/10.1111/theo.70013
Dr. Hadeel Naeem (COMPAIN project, in part)

Hadeel Naeem works in epistemology and the philosophy of mind. Hadeel is exploring how we can responsibly use AI systems to further our epistemic goals. As part of the COMPAIN project, Hadeel will also investigate the complexities of pain.
- Naeem, Hadeel (forthcoming). Teaching skills and intellectual virtues with generative AI. Episteme. Preprint: https://philarchive.org/rec/NAETSA-3
- Naeem, Hadeel (forthcoming). AI and the complexity of pain. Philosophy and Technology. Preprint: https://philarchive.org/rec/NAEAAT
- Naeem, Hadeel (forthcoming). Integration, epistemic responsibility and seamlessness. In Artificial Intelligence and Big Data Ethics in Military and Humanitarian Healthcare, edited by David Winkler and Bernhard Koch.
- Hauser, Julian and Hadeel Naeem (2024). Phenomenal transparency and the boundary of cognition. Phenomenology and the Cognitive Sciences. https://doi.org/10.1007/s11097-024-10025-8
- Naeem, Hadeel and Julian Hauser (2024). Should We Discourage AI Extension? Epistemic Responsibility and AI. Philosophy and Technology. https://doi.org/10.1007/s13347-024-00774-4
- Naeem, Hadeel (2023). Is a subpersonal virtue epistemology possible? Philosophical Explorations. https://doi.org/10.1080/13869795.2023.2183240
Dr. Ian Robertson

Ian Robertson’s primary research interests lie at the intersection of epistemology and the philosophy of cognitive science. He is presently focussed on characterising how the way we learn from AI structurally parallels the way we learn from human experts. In this way, Robertson interrogates AI integration from the vantage point of social epistemology.
- Robertson, Ian (under contract). AI and Expertise. Cambridge University Press.
- Robertson, Ian (2025). AI, Trust and Reliability. Philosophy & Technology 38 (3), 94. https://doi.org/10.1007/s13347-025-00924-2
- Kirchhoff, Michael David; Julian Kiverstein; and Ian Robertson (2025). The Literalist Fallacy and the Free Energy Principle: Model Building, Scientific Realism, and Instrumentalism. The British Journal for the Philosophy of Science 76 (3), 639–662.
- Robertson, Ian (2025). A Problem for Autonomous Know-How. Erkenntnis 90 (4), 1683–1691. https://doi.org/10.1007/s10670-024-00849-w
- Hutto, Daniel D.; and Ian Robertson (2024). What Comes Naturally? Relaxed Naturalism’s New Philosophy of Nature. In Naturalism and Its Challenges, edited by Gary N. Kemp, Ali Hossein Khani, Hossein Sheykh Rezaee and Hassan Amiriara, 19–35. https://doi.org/10.4324/9781003430568
- Robertson, Ian (2024). In defence of radically enactive imagination. Thought: A Journal of Philosophy. https://doi.org/10.5840/tht202421525