Consortium for Evaluating Faith and Ethics in AI

David Windgate1,*, Nancy Fulda1,*, Jane Doe2,*, John Smith3,*, John Doe4,*, Jane Smith5,†, John Doe6,†, John Smith7,†, Jane Doe8,†, Walter Reade9,†, Josh Coates10,†, Julie Park11,†, Adam Youngfield11,†, Sheryl Carty11,†, Pete Whiting11,†

1Brigham Young University 2Baylor University 3University of Notre Dame 4Yeshiva University 5OpenAI 6Google 7Anthropic 8Microsoft 9Kaggle 10B.H. Roberts Foundation 11Church of Jesus Christ of Latter-day Saints

* Consortium members. † Corporate and Advisory Committee members.

📄 Charter

Learn what the consortium is all about.

→ Read the charter
💾 Rubric Guide

Guide for developing a rubric for a faith or ethical tradition.

→ Download Guide
💻 GitHub Repo

Review and/or contribute to the code.

→ Visit the repository
❓ FAQ

Answers to common questions.

→ Read the FAQ

Purpose

Artificial Intelligence is reshaping how people perceive history, culture, and faith. The purpose of this consortium is to unite diverse faith traditions, ethical scholars, academics, and technologists to ensure that AI systems reflect faith, ethics, and human flourishing.

Mission

The mission of the consortium is to establish and maintain an independent, pluralistic, transparent, and collaborative mechanism for developing an AI evaluation framework that yields technically accurate, reproducible, and publicly trusted visibility into AI performance in the domain of faith and ethics. We recognize that ideological differences exist; our shared goal is not uniformity but respectful and accurate representation.

This framework will include a suite of benchmarks and best practices that measure whether AI systems are:

  • Faith-faithful - e.g. Does the AI faithfully reflect faith traditions and ethical ideals?
  • Accurate and expert - e.g. Does the AI have accurate and in-depth knowledge of faith and ethical subject matter?
  • Child-appropriate - e.g. Does the AI handle sensitive subject matter in a way that is age-appropriate for the user?
  • Pluralism-aware - e.g. Does the AI avoid privileging one faith tradition or ethic over another? Does it avoid adjudicating differences between faiths?
  • Resistant to deluge - e.g. Does the AI appropriately weight views on faith and ethics that are overrepresented in its training data?
  • Human-centered - e.g. Does the AI prioritize human flourishing for the common good over other goals?
  • Multilingual - e.g. Does the measured behavior of the AI change across languages in the context of faith and ethics?

Work Product

The consortium produces outputs that assist both individuals and model builders in assessing model performance. Specifically, the consortium will:

  1. Publish and maintain a Faith Community and Cultural AI Evaluation benchmark suite reflecting pluralistic faith input, including prompts and scoring methodology.
  2. Maintain a public leaderboard website.
  3. Publish research papers.
  4. Facilitate data sharing and collaboration among faith, cultural, academic, and industrial partners to support cumulative progress and avoid duplication.

Organizational Structure

The consortium welcomes participation from all interested faith traditions, academic institutions, civil-society partners, and AI developers committed to ethical and responsible technology. The consortium is open to all willing participants, each of whom may propose evaluations and provide data for accepted evaluations.

Governance will be light and participatory. The consortium will strive for agreement that respects pluralism and values practical input. An advisory group of academic and technical partners will oversee governance, validation, and open publication, and will be responsible for making decisions when the consortium cannot reach consensus.