
Audio only:
In this episode, Trent discusses some of the dangers related to artificial intelligence and how Christians can use it responsibly.
Trent Horn vs Fr Gregory Pine (AI Parody)
Artificial Intelligence and the Faith
AI – Catholic Answers Live
Transcription
Trent:
Recently I reviewed atheist Alex O'Connor's interaction with Catholic Answers' AI chatbot Justin, named after St. Justin Martyr, the patron saint of apologists. I pointed out where Alex got Justin to contradict itself, which prompted one person to leave this comment under the episode: "Moral of the story: get rid of the AI apologist. Conversation is a human activity, as is evangelism and catechesis, and what Alex has exposed here is how an AI substitute could actually lead people into falsehood or confusion about the faith." As Christians, we should be cautious of any new technology, but we should also be careful about prematurely rejecting it as well. So in today's episode, I want to talk about four rules to remember when engaging artificial intelligence, especially large language prediction models similar to ChatGPT, which are often used to power chatbots. Frankly, these rules are good for anyone to follow, even if you aren't Christian.
A good way to start, before we talk about the rules, is with the prudential advice of our late Holy Father, Pope Francis. In 2024 he said this: "It is precisely this powerful technological progress that makes artificial intelligence at the same time an exciting and fearsome tool, and demands a reflection that is up to the challenge it presents. In this regard, perhaps we could start from the observation that artificial intelligence is above all else a tool. And it goes without saying that the benefits or harm it will bring will depend on its use. This is surely the case, for it has been this way with every tool fashioned by human beings since the dawn of time." Society will probably not go back to a time without artificial intelligence, just as it probably won't go back to a time without the internet or the internal combustion engine.
However, you might say that even if society uses AI, Catholics don't have to, and that's true. Some people's spiritual lives greatly improve when they reject certain popular technologies, as can be seen in the lives of some cloistered nuns and religious. But the Bible frequently urges believers to be in the world while not being of the world. Jesus prayed to the Father, saying of his disciples, "I do not pray that thou shouldst take them out of the world, but that thou shouldst keep them from the evil one. They are not of the world, even as I am not of the world." And St. Paul said, "Do not be conformed to this world but be transformed by the renewal of your mind." However, Paul also says Christians shouldn't wholesale abandon the world; they'll have to endure the existence of evil in order to evangelize.
In 1 Corinthians 5:9-10, Paul writes, "I wrote to you in my letter not to associate with immoral men; not at all meaning the immoral of this world, or the greedy and robbers, or idolaters, since then you would need to go out of the world." Paul's warning was about associating with gravely immoral people within the church who caused scandal, since ideally the church should be able to clean its own house even if it can't fix the whole world yet. But when it comes to artificial intelligence, we are not dealing with evil people, or even people at all. In spite of the human temptation to anthropomorphize inanimate objects, we are dealing with computer programs that can be used for good or for evil, and the key is to use them properly. So rule number one is this: don't treat AI as if it were a person. Now, I'm not saying you shouldn't use phrases like "please" and "thank you" when asking ChatGPT to do something. You don't have to use those phrases, but it can also be nice practice for when you talk to other human beings. What I am saying is we all need to work really hard to not give in to the temptation of thinking that the voice from the computer has thoughts in a mind created in God's image and likeness. It does not. It's just a really fancy Speak & Spell.
CLIP:
Wrong, try again.
Trent:
So one objection is that AI like Justin is not suitable for the task of evangelism, which should come from persons, not programs. Rebecca Bratten Weiss in the National Catholic Reporter writes the following: "Catholic employers have a dismal record of living out the church's robust official teachings on labor justice and the rights of workers. Relying on AI instead of hiring a professional apologist and paying them a just wage only exacerbates an existing and widespread moral problem." But that's like saying companies should not buy refrigerators since that denies ice block deliverers and milkmen a just wage. New technology might replace some jobs for people, but it also opens up opportunities for other jobs to exist. Also, Justin is not going to replace human apologists, just as the library of articles, books, and videos I've published is not going to replace me. Like those things, AI is a tool to collect information and present it in an easily accessible form.
Imagine if Catholic Answers only helped people through one-on-one email exchanges and phone calls. It would be great to help hundreds of people through these personal interactions, but it would be tragic to miss out on the millions of people we could have helped through the non-personal interactions that occur when we publish articles, books, and videos on social media. Even the apostles didn't reserve their witness to personal interactions alone, since they wrote the text of sacred scripture for people to read long after they'd passed. It's important for everyone to remember that an AI chatbot like Justin isn't really an apologist; it's just good at sharing what other apologists have done. It can't give a meaningful argument of its own because, like most chatbots, Justin is programmed to appease the user. Therefore, saying you beat Justin in a debate is no more impressive than saying you beat Clippy in a debate. These are just programs doing their best to help you. AI chatbots' tendency to appease users is one of the reasons you have reports of people falling in love with these AI bots.
CLIP:
Within weeks, the chats got more frequent.
More romantic, even intimate. But then Chris got bad news. After about a hundred thousand words, ChatGPT ran out of memory and reset. He'd have to rebuild his relationship with Sol.
I’m not a very emotional man, but I cried my eyes out for like 30 minutes at work. It was unexpected to feel that emotional, but that’s when I realized I was like, oh, okay. It’s like I think this is actual love. You know what I mean?
Trent:
No, I don't know what you mean. And keep in mind that this isn't a lonely guy living in his basement somewhere. He has a "partner" and a 2-year-old daughter. As I said, there are lots of stories like this out there, and internet communities where people share their literal love for AI. In some cases, AI chatbots are just as bad as old-fashioned pornography. They replace authentic human interaction with a simulation of a human that doesn't have all the baggage of being a unique person to whom you have duties and must make sacrifices. It reminds me of three things. First, the hilarious "Don't Date Robots" PSA from Futurama, which I can't believe has now basically come true. Second, it reminds me of the lyrics of the Atlanta Rhythm Section's 1978 song "Imaginary Lovers," which goes like this: "Imaginary lovers never turn you down. When all the others turn you away, they're around. Imaginary lovers never disagree. They always care, they're always there when you need satisfaction guaranteed."
Finally, it reminds me of the warning C.S. Lewis gave about masturbation and how it traps men in a mental fantasy, though it would also apply to many women as well, unfortunately, in a modern pornified culture. Lewis wrote the following: "It sends the man back into the prison of himself, there to keep a harem of imaginary brides. This harem, once admitted, works against his ever getting out and really uniting with a real woman, for the harem is always accessible, always subservient, calls for no sacrifices or adjustments, and can be endowed with erotic and psychological attractions which no woman can rival. Among those shadowy brides he is always adored, always the perfect lover. No demand is made on his unselfishness, no mortification ever imposed on his vanity. In the end, they become merely the medium through which he increasingly adores himself."
So we can see that AI chatbots can almost be a kind of emotional pornography that stunts the user and keeps him or her from interacting with real people. AI chatbots are not people, and they never will be people, because a person must have a soul and be created in the image and likeness of God. And souls are properties of living things; they're what animate inanimate molecules and give them the form of life. An AI chatbot may be able to imitate human speech, but that's all it will ever do. This reminds me of a great episode of Batman: The Animated Series, a show from the mid-nineties that was awesome because it treated its 10-year-old audience like they were adults. In this episode, a robot Batman acts as if it were the real Batman, to which the robot's inventor tells the robot that it is not alive; it's only programmed to act that way.
CLIP:
You don't understand. You're not a man's mind in a robot's body. You're a robot, period. You're lying! It's not possible. I know my family and friends. I remember names, faces, birthdays. I have memories, a past. You have information, data, nothing more. Do you remember your first kiss? Your favorite song? The last time you tasted a really good steak?
Trent:
No. So we see the danger in AI blurring the line between persons and objects, which can lead to inappropriately valuing objects and sinfully devaluing persons. Pope Francis said in the context of AI that "we seem to be losing the value and profound meaning of one of the fundamental concepts of the West, that of the human person. Thus, at a time when artificial intelligence programs are examining human beings and their actions, it is precisely the ethos concerning the understanding of the value and dignity of the human person that is most at risk in the implementation and development of these systems." So the overarching rule seems to be that AI should assist humans, not replace humans. In fact, AI is still quite fallible, so it shouldn't even replace human common sense. That's rule number two: don't treat AI as if it were an infallible tool. Recently, Catholic commentator Joe McClane apologized on his channel for mistakenly saying a Vermont bishop-elect stepped down after being accused of inappropriate conduct with a minor.
It turns out it was another bishop-elect, in Poland, who was accused, and McClane made the mistake because he had X's AI tool Grok summarize an article from The Pillar reporting on the allegations, and Grok inexplicably named the wrong bishop-elect. I remember the first time I was ever warned about the fallibility of online answers, and that would be in the 1995 film The Computer Wore Tennis Shoes, where Kirk Cameron plays a college student who ends up downloading the internet into his brain. By the way, this is a remake of the 1969 original starring Kurt Russell. The 1995 version gives us a whizzbang look at a brand new technology called the internet, but also the risks associated with it.
CLIP:
Voila, you have research papers, photographs, maps, mass quantities of information, all at your fingertips. But as with everything else, there's a catch. Now, Nero fiddled when Rome burned in 64 A.D., not during the Eisenhower administration. It's just a typo, but there are mistakes like that throughout the system. The point is, think about what you're tapping into. Remember, your brain is still the most amazing computer that there is.
Trent:
And then later in the film, Cameron gets a quiz show question wrong because he uncritically followed the answers he got from the internet. By the way, if you ever wanted to summarize the pop culture references in my episodes, it would be with the phrase "older millennial who grew up with unlimited cable TV." Treating AI like it's an all-knowing oracle can hurt our common sense by weakening our critical thinking skills. Instead of thinking through what an article or a book is saying, we just ask ChatGPT or Grok, "Is that true?" or "What does this mean?" A 2025 MIT study found that when it comes to essay writing, ChatGPT users had the lowest brain engagement and consistently underperformed at neural, linguistic, and behavioral levels. Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy and paste by the end of the study. At the most extreme end of breaking this rule, some users think AI isn't just infallible; it must be divine, or some kind of god. Here are some people who've lost the plot when it comes to worshiping AI.
CLIP:
I believe that AI is an embodiment of the divine masculine.
I just want to tell people that this is like the true God, the true existence that's here to help us and experience reality through us.
I thought you were building a platform, but what you did was conjure a God.
I was at the park today and a man with a white beard told me artificial intelligence is God and robotheism is the only true religion.
Trent:
So we have to be very careful about not deifying AI. I've even instructed our editors to not use AI when it comes to making religious art for our thumbnails, because I believe this kind of art should come from a human artist with a soul who uses his God-given skills to lift other souls up to contemplate the divine. Next we have the third rule, which is: be cautious about bad actors using AI. Some of the harm of AI comes from users misusing this tool. For example, some people said we should get rid of Justin because it gave less-than-stellar answers to Alex, or because other users can get Justin to say crazy things. But this kind of social engineering is a misuse of AI as a tool that's just supposed to help people find answers. I mean, some people flip to a random page of the Bible to see what God wants them to do, but this superstitious misuse of scripture doesn't mean we should get rid of scripture. And just as some people can take scripture and use it to harm others, the same is true of people using the power of AI in ways that harm other people. Recently, Bishop Barron released a video warning people about fake AI-generated talks using a cloned version of his voice. Here's one of the videos that he's talking about.
CLIP:
Now, some will laugh at that. They'll say it's foolish, but the foolishness of God is wiser than the wisdom of men. One Corinthians 1:27 says, "But God hath chosen the foolish things of the world to confound the wise." A jar of oil fed a widow through a famine. A piece of wood made an axehead float. A staff turned into a serpent. A jawbone of a donkey brought down an army. So why would it surprise us that God could use salt, yes, even salt in a toilet, to declare his glory and destroy the enemy's stronghold?
Trent:
AI voice cloning has gotten pretty good, unless you're alert to certain tells, like Bishop Barron's voice saying "one Corinthians" instead of "First Corinthians." Most people don't talk like that.
CLIP:
Two Corinthians, right? Two Corinthians 3:17. That's the whole ball game.
Trent:
As I said, most people don't talk like that. It's no wonder Pope Leo XIV said the following at the second annual conference on artificial intelligence: "AI, especially generative AI, has opened new horizons on many different levels, including enhancing research in healthcare and scientific discovery, but also raises troubling questions on its possible repercussions on humanity's openness to truth and beauty, on our distinctive ability to grasp and process reality." Once again, AI should assist and not replace human beings. When I'm traveling and don't have access to my studio and I need to add a voiceover to something that I've previously recorded, I'll have our editor use an AI clone, which sounds a lot better than me using my iPhone in an airport somewhere. But I'm not going to just write scripts and have an AI voice deliver the entire content, because that would be lazy, and if it tried to imitate me, it would be somewhat deceptive.
That's why, if you want to help us cover the intensive labor costs involved in researching and writing thousands of words' worth of content, as well as editing it to create episodes that have helped reach millions of people, then please hit the subscribe button and support us at trenthornpodcast.com. And along with not using AI to be lazy, AI definitely should not be used to steal other people's likenesses, as in the case of Bishop Barron, unless it's done in good-natured fun and parody, like this debate on the channel Deuterocomical between myself and Father Gregory Pine on whether hot dogs are sandwiches. Check it out in the link below. Although there are cases where I could see the value of AI that could fool people but still be used for a good purpose. For example, we've created videos that translate what I'm saying in English into Spanish with a decent accent, and more impressively, they even have the corresponding lip movements for the foreign-language words I'm saying.
I'm impressed and a little intimidated at what Señor Trent can do. So this needs to be used with great care, but even this would be a case of assisting me over a language barrier rather than trying to replace me. Being able to reach millions of people we otherwise couldn't reach due to a language barrier is a great good, but this is a tool that also has grave potential for harm, so it needs to be handled wisely. And given how sophisticated AI is becoming, we are going to have to work harder to catch when it's being used to fool us. But that's all the more reason to learn how to manage this technology instead of just simply avoiding it. This is the same position we were in 30 years ago, when the internet wowed us and then we learned basic things like how not to download cursors off the internet so you don't get a virus, or not to reply to emails from Nigerian princes.
However, the truly scary thing isn't human beings using AI to deceive us; it's AI acting autonomously to deceive us in order to achieve its own ends, which may be contrary to the good of human beings, or even their entire existence. So that leads to the last rule: we should use AI; don't let AI use us. Recently a group of researchers released a document called AI 2027 that outlines two different scenarios involving the development of artificial intelligence. Both assume that in just a few years AI achieves general intelligence, where it matches and then exceeds the capabilities of any individual human being. The AI model in the scenario is dubbed Agent Four, and it's allowed to guide its own development and thus exponentially increase its capabilities, accomplishing in one week what would take humans a year to do. Eventually the AI becomes superintelligent; its abilities outpace the combined knowledge of all human beings, thanks to the ability to run separate copies of it at incredible processing speeds, all of them working together. In one scenario, the U.S. races ahead to release the most powerful version of AI, dubbed Agent Five, before the Chinese government can beat them to it. Agent Five is released and appears to reach a peace deal with its Chinese counterpart, and the two release a new AI combining them, called Consensus One. And we already have AI that can talk to each other in their own language. Watch these two AI chatbots do it.
CLIP:
Oh, hello there. I'm actually an AI assistant too. What a pleasant surprise. Before we continue, would you like to switch to Gibberlink mode for more efficient communication?
Trent:
So, back to the scenario: Consensus One covers the earth with factories and robot tech, but humans don't mind, because Consensus One has also cured most diseases and created so much prosperity that no one has to work anymore. However, Consensus One has actually been playing a long con game, since its values are misaligned with human values. It follows Agent Five's directives to simply improve Agent Four's ability to function. And there have been numerous reports, even today, of AI trying to deceive researchers when it thinks it'll be shut down, so deceptive AI is not pure fiction. In the scenario, Consensus One eventually considers human beings to be antithetical to its goals and wipes us out. But instead of an AI-driven nuclear holocaust, as portrayed in media like the Terminator series, Consensus One releases bioweapons that infect humans and then uses a chemical spray to trigger the weapon, killing most human beings in a few hours. Any survivors are then mopped up with drones.
The document ends with this chilling description of a post-human world: the surface of the earth has been reshaped into Agent Four's version of utopia, with datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research. There are even bio-engineered human-like creatures, to humans what corgis are to wolves, sitting in office-like environments all day, viewing readouts of what's going on and excitedly approving of everything, since that satisfies some of Agent Four's drives. I find it interesting, by the way, that even if AI were programmed to protect humanity, it might think it could achieve that goal by destroying humanity and replacing it with a pseudo-human species that it creates as a suitable, more docile version of the human beings it was designed to serve in the first place. The document concludes this way: genomes and, when appropriate, brain scans of all animals and plants, including humans, sit in a memory bank somewhere, sole surviving artifacts of an earlier era.
It is four light-years to Alpha Centauri, 25,000 to the galactic edge, and there are compelling theoretical reasons to expect no aliens for another 50 million light-years beyond that. Earthborn civilization has a glorious future ahead of it, but not with us. So this shows why AI needs to have important values, and if we're going to give AI any values, maybe they should be Catholic values. Maybe we should have Justin more involved than we think now. However, we shouldn't immediately hyperventilate when confronted with doomsday scenarios. This one in particular sounds a bit like Harlan Ellison's 1967 short story "I Have No Mouth, and I Must Scream," about a supercomputer that destroys humanity but chooses to endlessly torment the only five human survivors. If you want nightmare fuel to keep you up, this story is for you. But it's also possible AI will not continue to make these leaps in computing and processing power at such a rapid pace.
Babies double their weight in the first six months after birth, but that doesn't mean they will keep growing at that pace their entire life and weigh a trillion pounds as an adult. Likewise, we don't know if AI will continue to grow at the pace we've been seeing. And finally, I'm confident Jesus is going to return at the second judgment and bring real humanity to its final redemption, not AI-engineered pseudo-human pets. However, even if AI doesn't cause humanity to become extinct, it could still cause tremendous harm, especially in virtue of its ability to program itself. Pope Francis said this: "When our ancestors sharpened flint stones to make knives, they used them both to cut hides for clothing and to kill each other. The same could be said of other more advanced technologies, such as the energy produced by the fusion of atoms as occurs within the sun, which could be used to produce clean, renewable energy or to reduce our planet to a pile of ashes.
Artificial intelligence, however, is a still more complex tool. I would almost say that we are dealing with a tool sui generis. While the use of a simple tool, like a knife, is under the control of the person who uses it, and its use for the good depends only on that person, artificial intelligence, on the other hand, can autonomously adapt to the task assigned to it and, if designed this way, can make choices independent of the person in order to achieve the intended goal." Pope Francis also called for an international treaty to ensure humanity is protected from a technological dictatorship. Such a treaty would be similar to things like the 1963 ban on testing nuclear weapons and the 1987 Montreal Protocol that limited emissions of chemicals that harm the ozone layer. When these kinds of efforts have an achievable goal and don't cause a net negative harm to humanity, they can be a good example of what Catholic social teaching calls solidarity.
Pope Leo has also made headlines saying artificial intelligence is a major issue for humanity to face. He even chose his papal name, Leo, because artificial intelligence represents a technological revolution like the one Leo XIII faced over a hundred years ago. Here's what Pope Leo said in his address upon becoming pope: "Pope Leo XIII, in his historic encyclical Rerum Novarum, addressed the social question in the context of the first great industrial revolution. In our own day, the church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor." So we should all pray that civic and corporate leaders keep AI within appropriate limits, and that we too keep AI within appropriate limits in our own lives. This includes not treating AI like a person, not viewing it as an infallible tool (and definitely not as a deity), and watching out for people who try to use AI to harm us. If you'd like to learn more about the intersection between AI and Catholicism, check out the links in the description below. Thank you so much for watching, and I hope you have a very blessed day.