Artificial intelligence: amplifier of disinformation or tool to build peace?

Lisa Schirch used AI to generate this image of “Anabaptist Mennonites deliberating on the use of technology.” She showed it at a meeting in Japan and to Anabaptist groups online. — Courtesy of Lisa Schirch

Artificial intelligence is one of the most transformative technologies humanity has ever developed. AI offers blessings and burdens that can foster peace and democracy or fuel violence, inequality, polarization and authoritarianism. Religious ethics have something to offer.

More than 100 religious leaders met at the Peace Park in Hiroshima, Japan, in July to discuss “AI Ethics for Peace.” Representing the University of Notre Dame and the Toda Peace Institute, I presented how my Anabaptist ethics shape my use of AI to support democracy and peacebuilding.

The Vatican’s Pontifical Academy of Life organized the conference with Religions for Peace Japan, the United Arab Emirates’ Abu Dhabi Forum for Peace and the Chief Rabbinate of Israel’s Commission for Interfaith Relations.

Religious leaders from Judaism, Christianity and Islam joined with leaders from Buddhism, Hinduism, Zoroastrianism, Bahá’í and representatives of the Japanese government and big tech companies such as Microsoft, IBM and Cisco.

The two-day workshop ended with a poignant signing ceremony at the Hiroshima Peace Park, located at ground zero of the 1945 atomic bomb explosion in a city synonymous with the devastating effects of unrestrained technological power.

AI poses several dangers, including the potential to amplify disinformation, exacerbate societal polarization, infringe on privacy, enable mass surveillance and facilitate autonomous weapons.

Participants signed the “Rome Call for AI Ethics,” which emphasizes AI’s ethical development and use. It advocates for AI systems that are transparent, inclusive and respect human rights. Pope Francis calls for broad-based ethical reflection on how AI can respect human dignity. He has highlighted the importance of placing ethical considerations at the forefront of technological innovation.

My religious tradition, Mennonite Anabaptism, has something to offer these conversations. Some Anabaptist communities deliberate carefully to evaluate new technologies’ potential positive or negative impacts. They might decide cars or phones are acceptable for some purposes but not others.

Several Anabaptists are involved in discussions on the ethics of AI and how it might be used to support our peace commitments. Three of us, including Paul Heidebrecht at Conrad Grebel University College in Ontario and Nathan Fast of the University of Southern California, are working on how AI can support democracy and peacebuilding.

Lisa Schirch with Father Paolo Benanti of the Vatican’s “Rome Call for AI Ethics” at the Peace Park in Hiroshima, Japan. — Courtesy of Lisa Schirch

At the workshop, I presented how Anabaptist theology has shaped my commitment to peace and efforts to regulate digital technologies. My research emphasizes how social media and AI are driving conflict and polarization and undermining democracy and human dignity.

But AI also offers opportunities to enhance creativity, solve global challenges and strengthen democratic engagement. It can act as a “bicycle for the mind,” increasing our ability to address issues like climate change and inequality creatively and efficiently.

The story of the Tower of Babel in Genesis 11 serves as a metaphor for AI. United in their ambition, humans build a tower to reach the heavens. God intervenes to prevent them from becoming too powerful by confusing their language. People can no longer work together and scatter across the Earth. Like the Tower of Babel, AI offers immense new powers but distorts information and fosters confusion and polarization.

Social media platforms, powered by first-generation AI algorithms, determine what content a person sees on their newsfeed. These platforms prioritize attention-grabbing content, generating both profit and polarization. Technology must be designed to support social cohesion — the glue that holds society together.

Building relationships and fostering understanding are religious tasks that AI can aid. But humans must guide it to do this.

At the University of Notre Dame’s Kroc Institute for International Peace Studies, I teach “peacetech” courses, where students train AI to combat hate speech and improve digital conversations. We use AI to analyze discussions on deliberative platforms, highlighting shared values and solutions that reflect diverse perspectives. These technologies help map different viewpoints, enabling us to “listen at scale.”

AI-powered deliberative technologies like Pol.is and Remesh strengthen democracy and foster social cohesion. They’re used in Taiwan, Finland and elsewhere.

An AI-generated Tower of Babel image. — Courtesy of Lisa Schirch

In June, the Toda Peace Institute brought 45 peacebuilders from around the world to the Kroc Institute to learn how to use AI-powered technologies to support public deliberation. Working in groups, they will explore whether technology can help Afghans living around the world communicate and set priorities for their future, assist Palestinians and Israelis in deliberating about coexistence, support Colombians in discussing the full implementation of their peace agreement and enable Nigerians to weigh the trade-offs of oil and environmental damage.

As we grapple with the challenges and opportunities of AI, some of us are asking whether AI and democracy can fix each other.

Last year I was part of a team working with OpenAI’s “Democratic Inputs to AI” project to test whether deliberative technologies can align AI with the will of humanity. We tested a methodology using the Remesh platform to ask Americans to develop guidelines for how ChatGPT should answer sensitive queries. Despite initial polarization, Remesh helped people of diverse views come to a strong consensus on how AI tools should respond to questions about international conflicts, vaccines and medical advice.

Religious ethics are relevant to AI development, helping us see how AI can support human dignity and social cohesion while also voicing our grave concerns over the potential for these new technologies to cause economic, political and social harms.

Lisa Schirch is Richard G. Starmann Chair in Peace Studies and professor of the practice of technology and peacebuilding at the University of Notre Dame and senior fellow with the Tokyo-based Toda Peace Institute. A longer version of this article was published by the Toda Peace Institute at toda.org/global-outlook/2024/religion-and-ai-ethics-for-peace.

