Research areas we are working on, or are interested in, include:

Ethics of AI

  • Moral status of AI systems, artificial moral agency
  • The alignment problem
  • Machine ethics (possibility, role of virtue ethics, rules, …)
  • Superintelligence and existential risk (assumptions, prediction, policy, …)
  • Risks and opportunities of AI in society (labor market, legal system, intellectual property rights, monopolies, fair distribution of gains, ecology, … vs. better decisions, insights from large data, …)
  • AI and responsibility allocation (responsibility gaps)
  • Bias in decision systems
  • Specific risks from large language models and other large ML systems
  • Surveillance, esp. new forms (brain surveillance, deep analysis, …)
  • Human dependence on AI, manipulation of humans, loss (or change) of humanity
  • Can or should progress in AI be stopped or slowed down?
  • Ethical recommendations for the regulation of AI
  • Political & societal structures to control AI development and use

Theoretical Philosophy and AI

  • Artificial sentience, role of consciousness in cognition
  • Models of natural cognition and AI
  • Notion of computation
  • Notion of intelligence
  • Role of normative cognition in intelligence
  • Normative approaches in AI evaluation, training, and reward (limits of rational choice, Markovian reward, etc.)
  • Normative approaches to AI interpretation problems (opacity, explainable AI)
  • Philosophy through AI (esp. philosophy of mind, language, epistemology)
  • Role of representation in AI (esp. concepts & language)
  • Use of AI in scientific research (e.g. neuroscience, medicine, …)
  • The limits of ML and their relation to the scaling of models
  • Embodied cognition and AI
  • Perception and the predictive mind


See the activities on