What Led the Vatican to Debate Artificial Intelligence
The challenges of artificial intelligence found an unexpected stage in early March. The Vatican hosted a seminar called Potential and Challenges of Artificial Intelligence, bringing together experts in ethics, technology, and governance to discuss the direction of a revolution that is reshaping practically everything around us. It is not every day that one of the world's most traditional institutions decides to put algorithms and neural networks on the agenda, but the moment called for exactly that.
The event took place on Monday, March 2, at the Salone San Pio X, on Via della Conciliazione 5, in Rome, and was organized by the Secretariat for the Economy together with the Labor Office of the Apostolic See (ULSA). The opening remarks were given by Professor Pasquale Passalacqua, director of ULSA, who revealed that Pope Leo XIV, when informed of the initiative by its president, Monsignor Marco Sprizzi, made a point of expressing appreciation and encouragement, voicing his desire for a deeper awareness in this highly relevant and complex field. The debate was moderated by Alessandro Gisotti, Deputy Editorial Director of the Dicastery for Communication.
To kick off the discussion, the organizers brought back a quote attributed to Albert Einstein that sums up the current moment well: an abundance of means and a confusion of ends. This provocation set the tone for the whole conversation, raising questions about how algorithms, bias, and commercial interests are shaping the future of technology — and why ethics needs to be at the core of this conversation from day one. 🤔
The symbolic presence of the pontiff reinforced the message that ethics cannot be treated as an optional add-on when we are talking about technologies that affect billions of people around the globe. The gesture also signals that even millennia-old institutions recognize the urgency of actively taking part in this debate instead of just watching from the sidelines. As highlighted during the seminar, the Holy See — which has no military or commercial objectives — can play a key role in promoting global governance capable of developing systems that are ethical from the design phase.
The Experts Who Gave Voice to the Debate
The seminar brought together a heavyweight panel. Among the speakers were Bishop Paul Tighe, Secretary of the Dicastery for Culture and Education; Franciscan friar Paolo Benanti, professor at the Pontifical Gregorian University and Luiss Guido Carli University; and Professor Corrado Giustozzi, who teaches in the Master’s program in Intelligent Systems Engineering at the Campus Bio-Medico University of Rome. Each one contributed a different perspective, but they all converged on a central point: technology does not develop in neutral spaces.
Bishop Tighe chose the acronym VUCA — Volatility, Uncertainty, Complexity, and Ambiguity — to sum up the consequences of the massive adoption of ChatGPT starting in 2022. To illustrate how geopolitical interests intertwine with technological progress, he mentioned the case of Anthropic, a US-based company founded with the goal of promoting more ethical AI, which has, according to reports, faced government pressure to relax its ethical commitments regarding military and surveillance uses. Tighe was blunt in stating that the development of new technologies is interwoven with geopolitical rivalries, commercial pressures, and personal ambitions.
In the face of so much complexity, the bishop referred to the document Antiqua et nova, which points to the wisdom of the heart, capable of integrating the whole and the parts, as what humanity most needs today. Tighe also stressed that the Church has moral authority and the ability to bring together qualified stakeholders, making it a meaningful partner in guiding the development of artificial intelligence. Connecting the themes raised in the talks, Gisotti emphasized that the seminar also represented a commitment by the ecclesial community to engage in this debate.
The Politics Built into Algorithms
Father Benanti’s presentation added an extra layer of depth to the debate. He proposed a new ethics of technology that questions the politics embedded in artificial intelligence models. His central claim was clear and provocative: every technological artifact, when it affects a social context, acts as a configuration of power and a form of order. In other words, no technology is neutral. Every system carries within it design decisions that favor certain groups, perspectives, or interests over others.
Benanti pointed out that this is an urgent issue, discussed at many negotiation tables, from the Holy See to the United Nations — he is the only Italian member of the UN’s AI advisory body. In these forums, it is increasingly clear that power structures are being shaped by commercial deals. And this shows up very concretely in the information ecosystem. The visibility of an article, for instance, does not necessarily depend on its quality, but on the ranking that an algorithm assigns it on web pages. Benanti concluded by calling this a mediation of power, something that should concern anyone who consumes digital content on a daily basis.
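That "mediation of power" can be made concrete with a toy sketch. The function and weights below are entirely hypothetical — nothing presented at the seminar — but they illustrate the point: if a ranking formula rewards engagement signals and never looks at quality, the algorithm decides visibility on terms the reader never sees.

```python
# Hypothetical ranking function, for illustration only: visibility is
# driven by clicks, shares, and freshness -- quality never enters the score.
def rank_score(clicks, shares, recency_hours):
    # Arbitrary weights chosen to mimic an engagement-first ranking.
    return clicks * 1.0 + shares * 3.0 - recency_hours * 0.5

articles = [
    {"title": "in-depth investigation", "clicks": 40, "shares": 2, "recency_hours": 48},
    {"title": "viral clickbait", "clicks": 300, "shares": 80, "recency_hours": 2},
]

# Sort by the engagement score: the clickbait piece outranks the
# investigation, regardless of which article is actually better.
articles.sort(
    key=lambda a: rank_score(a["clicks"], a["shares"], a["recency_hours"]),
    reverse=True,
)
print([a["title"] for a in articles])
# ['viral clickbait', 'in-depth investigation']
```

Every weight in a formula like this is a design decision with winners and losers — which is exactly why Benanti calls such rankings a configuration of power rather than a neutral mechanism.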
This point is especially relevant for anyone following the world of technology and artificial intelligence. When a language model decides which information to prioritize, which perspectives to amplify, and which voices to mute, it is exercising a form of power that rarely gets recognized as such. What is even more troubling is that these decisions happen at massive scale, affecting millions of people simultaneously, without any oversight proportional to that impact.
Bias, Data, and the Training Problem
Professor Corrado Giustozzi’s talk focused on one of AI’s core components, the algorithm, and on critical issues in algorithmic decision-making. One of his main points was the problem of bias: algorithms can encode biases, whether unintentional or deliberate, that distort outcomes or make them inequitable.
Giustozzi explained that training — the phase in which an algorithm is developed based on input data — plays a decisive role here. If the data is incomplete or skewed, the outcomes will inevitably be wrong or discriminatory. This is a challenge the tech industry knows well but has not yet solved in a satisfactory way. Cases involving facial recognition systems with disproportionate error rates across different ethnic groups and recruiting tools that penalize certain profiles remain real warnings, documented in numerous academic studies and international reports.
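The mechanism Giustozzi describes can be shown with a deliberately oversimplified sketch (this is a toy, not a real machine-learning pipeline): a "model" that just learns the most common label from its training data. When one group is barely represented in that data, the learned rule fails disproportionately for that group — the error disparity documented in the facial-recognition studies mentioned above, in miniature.

```python
# Toy illustration of training bias: the "model" merely predicts the
# majority label it saw during training.
def train_majority_model(samples):
    # samples: list of (group, true_label) pairs
    labels = [label for _, label in samples]
    majority = max(set(labels), key=labels.count)
    return lambda group: majority  # same prediction for everyone

# Skewed training set: group A dominates and is labeled "yes";
# group B is scarce and labeled "no".
train = [("A", "yes")] * 90 + [("B", "no")] * 10
model = train_majority_model(train)

def error_rate(model, samples):
    wrong = sum(1 for group, label in samples if model(group) != label)
    return wrong / len(samples)

# A balanced test set exposes the disparity the skewed data baked in.
test_a = [("A", "yes")] * 50
test_b = [("B", "no")] * 50
print(error_rate(model, test_a))  # 0.0 -- perfect for the majority group
print(error_rate(model, test_b))  # 1.0 -- always wrong for the minority group
```

Real systems are vastly more complex, but the lesson scales: if the input data is incomplete or skewed, the outcomes inherit that skew, exactly as Giustozzi warned.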
The discussion made it clear that building efficient technology is not enough. It must also be fair, transparent, and auditable. And that is one of the biggest challenges the industry faces today. Seminar participants reinforced that responsibility for these systems cannot rest solely on the engineers who build them; it needs to be shared by companies, regulators, and society as a whole.
The Vatican’s Role in Global AI Governance
Another topic raised in the debate was the role of big tech corporations in setting the rules of the game. When private companies control the most advanced language models and the datasets used to train them, society becomes dependent on corporate decisions that do not always prioritize the public good. Participants argued that artificial intelligence governance needs to involve multiple actors, including governments, academia, civil society, and yes, institutions like the Vatican, which can bring crucial human-centered perspectives to balance the race for innovation with the protection of basic rights.
The Holy See’s position in this context is, at the very least, strategic. Because it has no direct military or commercial interests at stake, the Vatican presents itself as a player capable of facilitating dialogue between parties with conflicting agendas. This relative neutrality allows the institution to act as a kind of global mediator, something it has already done in other sensitive areas over the centuries. Applying this experience to artificial intelligence seems like a natural step, especially considering that the impacts of this technology cross national borders and affect entire communities in deep and often irreversible ways.
Challenges That Go Beyond Technology
On the one hand, artificial intelligence offers enormous transformative potential — speeding up medical diagnoses, optimizing supply chains, and expanding access to information. On the other hand, it brings challenges that go far beyond the purely technical sphere. The Vatican seminar devoted a good portion of its time to discussing the impact of automation on the job market, the concentration of technological power in a handful of companies, and the risk of eroding individual autonomy in a world increasingly mediated by algorithms. The experts were emphatic in saying these are not future problems, but realities that are already unfolding and demand urgent, coordinated responses across different sectors of society.
Regulation inevitably came up as a central theme. The European Union has moved forward with the AI Act, considered the world’s most comprehensive regulatory framework for artificial intelligence, and several other countries are working on their own laws. Participants in the Rome event stressed that regulation should not hold back innovation, but ensure that it happens within clear ethical boundaries. This balance between technological progress and the protection of fundamental rights was described as one of today’s most complex challenges, precisely because it involves massive economic interests and highly sensitive geopolitical dynamics.
Ethics from the First Line of Code
The seminar’s conclusion brought an important consensus among participants: ethics cannot be an afterthought in the development of new technologies. It needs to be baked in from the conception of any artificial intelligence system, shaping algorithm design, data selection, and the definition of success metrics that go beyond profit or efficiency. The notion of systems that are ethical from the design phase was repeated like a mantra throughout the event, reinforcing that responsibility starts long before a product ever hits the market.
By hosting this kind of gathering, the Vatican showed that the debate about the future of AI does not belong only to engineers and investors. It belongs to society as a whole. And the more diverse voices take part in this conversation, the better our chances of building systems that truly serve people, instead of the other way around. The Rome seminar was not just a one-off event, but a sign that the discussion around artificial intelligence and ethics is gaining momentum and reaching places that, until recently, seemed unlikely. And that is, honestly, good news for everyone. 🚀
