Chair of Theory and Ethics of Artificial Intelligence (Alexander von Humboldt Professorship)

PhD candidate

Max Hellrigel-Holderbaum focuses on risks from AI systems. Presently, he examines concepts that are often invoked but rarely scrutinized in this context (such as goals, agency, manipulation, and planning) and seeks to contribute to a more informed understanding of AI-associated risks. Previously, he studied philosophy, political science, and neuroscience in Frankfurt and Berlin.

Recent Publications

  • Hellrigel-Holderbaum, M., & Dung, L. (2025). Misalignment or misuse? The AGI alignment tradeoff. Philosophical Studies, 1–29.
  • Dung, L., & Hellrigel-Holderbaum, M. (2025). Against racing to AGI: Cooperation, deterrence, and catastrophic risks. arXiv preprint arXiv:2507.21839.
  • Lundgren, B., Catena, E., Robertson, I., Hellrigel-Holderbaum, M., Jaja, I. R., & Dung, L. (2024). On the need for a global AI ethics. Journal of Global Ethics, 20(3), 330–342.