A $1 million grant from OpenAI will support research at Duke University aimed at developing AI algorithms to predict human moral judgments.
OpenAI has awarded a $1 million grant to a team at Duke University to explore artificial intelligence algorithms that could predict human moral judgments. The funding reflects a growing commitment to addressing the ethical dimensions of AI as debates over the responsible use of the technology become increasingly prominent.
The research, titled “Making Moral AI,” is being conducted by Duke’s Moral Attitudes and Decisions Lab (MADLAB), led by Walter Sinnott-Armstrong, professor of practical ethics and the project’s principal investigator. He is joined by co-investigator Jana Schaich Borg of the Social Science Research Institute. Together they are working to understand the intricate factors that shape moral attitudes and judgments.
MADLAB is an interdisciplinary laboratory that draws on computer science, philosophy, psychology, economics, game theory, and neuroscience to explore how AI might function as a “moral GPS.” The objective is to develop AI technologies that help people make informed ethical decisions, homing in on how well algorithms perform in contexts where moral dilemmas arise, particularly in medicine, law, and business.
According to a press release from Duke University, the OpenAI grant will specifically fund the development of algorithms capable of deciphering human moral judgments in complex scenarios involving competing moral considerations. Despite the promise of this research, significant challenges remain: the nuanced nature of ethics and the emotional components of human decision-making pose obstacles for existing AI systems, which rely primarily on data patterns and statistical reasoning and may not capture the subtleties inherent in ethical situations.
The interdisciplinary nature of the research underscores the complexity of integrating insights from various social sciences into AI algorithms. The task of aligning AI with human morality continues to be a substantial endeavour that requires time and careful consideration.
This initiative not only reflects the growing interest in the ethical implications of AI but also highlights the role that academic partnerships play in forging paths toward responsible technological advancement. As societal reliance on AI expands, efforts such as this may prove critical in shaping future applications of artificial intelligence.
Source: Noah Wire Services
- https://www.templetonworldcharity.org/blog/ai-ethics-walter-sinnott-armstrong-jana-schaich-borg-vincent-conitzer-podcast – Corroborates the AI ethics research led by Dr. Walter Sinnott-Armstrong and Dr. Jana Schaich Borg, including the collaboration between Duke University and Carnegie Mellon University, the funding from OpenAI, and the concept of a ‘moral GPS’ intended to make AI systems more reliable than human intuition in moral decision-making.
- https://automatedteach.com/p/interview-ai-ethicist-moral-ai – Supports the details of the interdisciplinary research and the goal of building moral AI to prevent harm and ensure fairness; discusses the ethical frameworks outlined in the book ‘Moral AI’, the importance of integrating insights from the social sciences into AI algorithms, and the challenges current AI models face in capturing the subtleties of ethical situations.
- https://www.youtube.com/watch?v=YfusZkIikOQ – Provides context on Professor Walter Sinnott-Armstrong’s work on moral AI and the challenges of aligning AI with human morality; outlines the benefits and risks of AI, the need to develop AI that reduces moral dangers, and the role of academic partnerships in responsible technological advancement.