
The question I intend to answer in this article—or rather, the cluster of questions—is this: What is man, that AI cannot become it? More precisely, how are we to maintain the Christian view of humanity when robots can present a nearly perfect facsimile of it? Will the two be indistinguishable, or must there be a real difference between them? And if so, what is that difference?
We begin with what is unique to man, from the Christian perspective. I will enumerate the most important aspects, insofar as they mark genuine points of radical contrast with AI, and then explain why AI is not, and cannot be, what man is in these respects.
1. Man is a substance.
This is to say, in Aristotle-speak, that man is a basic and tightly unified being, a thing with an intrinsic principle of unity—and is not wholly reducible to his more elementary material components.
Indeed, in many substances, and especially in man, there are real, often novel, systems-level powers or abilities—for example, the power of conceptual thought—that cannot be fully or adequately explained by the component pieces alone.
Consider how many of our material components—our organs and cells—have not merely the functions they do, but even the very natures they do, precisely by being parts of the whole. For instance, a heart is identifiable as a heart only within the circulatory system, whose activity it serves; an eye is an eye only within a visual system that enables seeing; and even cells take on their specific natures—muscle, liver, skin—only through their roles within the organized unity of the organism. The whole, we might say, has priority or command over the parts—a remarkable and undeniable feature. And there are things a human can do—essentially new powers—that cannot be adequately explained merely by the coordination and mechanical operation of the constituent material parts.
This is not true of AI. AI is not a tightly unified thing, but rather a conglomerate of entirely disparate components, just like a radio, computer, or car. Its principle of unity is extrinsic, imposed by us. Whatever functions AI exhibits are simply the mechanical aggregates of its components operating in the arrangement we designed. There is no unified center of being in AI, and AI is, in fact, reducible to the sum of its component parts (its algorithms, parameters, hardware, and training data—nothing more, aside from the meaning it has only in relation to our minds, not intrinsically). Any “abilities” AI displays arise entirely from the coordinated operations of these parts; nothing essentially new emerges at the level of the whole. To be even more pointed, there is no genuine “whole” that stands over and above its constituents at all.
This is not the case for man. Man is a substance, a naturally and intrinsically unified being; AI is an artifact, something with only an accidental unity imposed by a mind (namely, ours).
Time, then, to discuss two of these irreducible powers—thinking (or understanding) and willing.
2. Man is a thinking thing.
To engage in conceptual thought is to grasp a determinate (an exact and unambiguous) concept of something, or a unit of precise meaning. This is not something AI has or does. In fact, AI does not understand anything at all. The symbols it manipulates do not mean anything to the machine itself; they have significance only insofar as we interpret them or assign meaning to them. Absent a mind to interpret them, the symbols are nothing but shapes, meaningless both within the machine and to it.
As ChatGPT itself will tell you, what the machine actually does is engage in extraordinarily sophisticated pattern recognition: it converts text into numerical representations, matches those numbers to patterns drawn from its training data, computes probability distributions over possible next tokens, and then outputs whatever continuation is statistically most likely. (More simply put, it’s like a supercharged version of your phone’s autocomplete.) None of that—literally none of it—requires understanding or any grasp of the meaning behind the symbols it processes.
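To make this concrete, here is a deliberately crude sketch of the statistical principle just described. It is a toy bigram model of my own construction, not how ChatGPT is actually built (real systems use neural networks over subword tokens), but it is the same in kind: counts in, probability distribution out, most likely continuation returned.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it predicts the next word purely from
# statistics over its training text, like a bare-bones autocomplete.

training_text = "the cat sat on the mat the cat ate the food".split()

# Count how often each word follows each other word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follow_counts[prev][nxt] += 1

def next_token_distribution(prev: str) -> dict:
    """Compute a probability distribution over possible next words."""
    counts = follow_counts[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def most_likely_next(prev: str) -> str:
    """Output whatever continuation is statistically most likely."""
    dist = next_token_distribution(prev)
    return max(dist, key=dist.get)

print(next_token_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'food': 0.25}
print(most_likely_next("the"))         # 'cat'
```

Notice that nothing in this program represents meaning; there are only tallies and arithmetic, which is precisely the point.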
AI operates entirely at the level of formal symbol manipulation, without any intrinsic awareness of what the symbols represent. Indeed, it doesn’t even “understand” the rules it follows; it just mechanically shifts from one physical state to another—like a waterwheel turning, only much more complex.
This is, for those familiar, precisely the point illustrated by John Searle’s famous Chinese Room thought experiment. Very briefly, Searle imagines a man who understands no Chinese locked inside a room. He is given Chinese symbols and an instruction book (written in English) telling him exactly how to manipulate those symbols to produce appropriate outputs (i.e., coherent replies). From outside the room, it appears as if he understands Chinese—his answers are indistinguishable (there’s that word again!) from those of a native speaker. But inside the room, he has zero understanding of Chinese. He is merely following formal rules for symbol manipulation.
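For readers who like to see the structure laid bare, the room can be caricatured in a few lines of code. This is my own toy illustration, with an invented two-entry rule book; a real system’s instruction book would be unimaginably larger, but the character of the activity (formal input-output rules, with no grasp of meaning anywhere in the process) is the same.

```python
# The Chinese Room as a lookup table. The strings below are opaque
# tokens to the program; whatever "conversation" occurs exists only in
# the minds of the people reading the inputs and outputs.

RULE_BOOK = {
    # "If you receive this squiggle, hand back that squiggle."
    "你好吗？": "我很好，谢谢。",
    "你会说中文吗？": "当然，我说得很流利。",
}

def room(symbols: str) -> str:
    # Follow the instruction book mechanically; no step here involves
    # grasping what any symbol means.
    return RULE_BOOK.get(symbols, "请再说一遍。")

print(room("你会说中文吗？"))  # prints a fluent-looking reply
```

Asked “Can you speak Chinese?”, the program replies “Of course, I speak it fluently,” while the function that produced the reply understands nothing at all.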
Searle’s point is that syntax (formal rules for manipulating symbols, which a machine can obviously follow) is not semantics (meaning). The mere manipulation of symbols—even perfect manipulation—does not generate or entail understanding or meaning. Said differently, no amount of correct symbol-shuffling equates to understanding; they are categorically different activities. Understanding is not a computational output at all, but an act of mind—the realm in which meanings exist and through which anything else that has meaning acquires it.
One further reason to think that understanding is not purely computational is that it must be immaterial. Why? Because anything purely physical is inherently indeterminate—it does not, by itself, carry an exact or unambiguous meaning, but is always open to an enormous (indeed, potentially infinite) range of possible interpretations. A simple example to illustrate the point, borrowed from philosopher Edward Feser (building upon the work of James Ross), is a triangle drawn on a chalkboard. What does it represent? A red triangle? An isosceles triangle? The abstract form triangle? Or even the forgotten pop band Triangle? Nothing about its physical properties alone can determine the meaning. Physical marks are always intrinsically indeterminate, whereas conceptual thought is determinate—our grasp of triangle-as-such, for example, is exact and unambiguous. We know exactly what we are thinking about, the precise meaning.
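A digital analogue of the chalkboard triangle (my own illustration, not Feser’s) makes the same point: one and the same physical bit pattern can be read as an integer, a floating-point number, or a string of text, and nothing in the bytes themselves settles which reading is correct.

```python
import struct

# The same four bytes "interpreted" three different ways. The
# determinacy of each reading comes from the convention a mind brings
# to the bytes, not from any physical property of the bytes themselves.

raw = bytes([0x42, 0x48, 0x65, 0x79])

print(struct.unpack(">I", raw)[0])  # as a big-endian integer: 1112040825
print(struct.unpack(">f", raw)[0])  # as a 32-bit float: about 50.1
print(raw.decode("ascii"))          # as text: 'BHey'
```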
But the indeterminate cannot be the determinate, which means that whatever else is going on in our conceptual activity, it is not a purely physical thing or operation. And if AI consists entirely of physical operations—and we have every reason to think it does—then it cannot think as we think, since our intellectual activity necessarily involves a genuinely immaterial dimension. To the extent that AI appears to generate determinate content, this is only because its pattern recognition is sophisticated enough to produce outputs that fall within the familiar structures of human thought and language—structures that exist only because we have already carved out a space of relevance through our own concepts.
AI works inside a conceptual landscape shaped by beings who actually understand what, say, a triangle is, essentially; every single part of the system is constrained by our prior conceptual framework. Simply put, the determinacy resides in our intellect, not in the machine’s physically indeterminate operations [1].
3. Man is a willing thing.
Traditionally understood, the will is the rational appetite—in practical terms, our ability to bring deliberation to an end, a final “I choose this.” And because the will is inherently tied to the intellect—since deliberation involves surveying the “space of reasons” or concepts—the act of weighing reasons to favor, say, chocolate ice cream over vanilla (because of its richness, for example) requires precisely the sort of conceptual activity described in the previous point, which AI fundamentally lacks.
There is more to the story. Traditionally, it has been maintained that the will is naturally ordered toward the good as such (ultimately toward God), and that no finite object can by itself determine the will, because any finite thing can always be conceived under different aspects—as more or less desirable, more or less perfect or imperfect, as having or lacking certain qualities. The will, then, is our active power to bring deliberation to a close, to give that final “oomph” to some object of desire that is not, by itself, powerful enough to determine us to it. It needs our final “I choose you.”
This is a power wholly lacking in AI. AI is entirely determined by physical procedures; because it cannot understand, it cannot will. And because it follows fixed physical laws, it is obviously determined by those laws. However, if the human being—or at least some aspect of the human being (the intellect and will, I maintain)—is genuinely immaterial, then whatever else one wants to say about physical things being determined by physical laws, those considerations quite obviously do not apply to non-physical or immaterial powers.
I have now identified three differences between man and machine that cannot be overcome: man is (1) a tightly unified substance that (2) thinks and (3) wills—and a proper understanding of what these notions are, and of what AI is, gives us good reason to think AI will never enter into the metaphysical realm of man.
However, these differences do depend upon a certain conception of humanity—not only the Christian one, but the classical philosophical picture of man more broadly. If one does not share this conception and instead thinks the human mind, and everything that goes on in it, is essentially computational in nature, then such a person is in a position to think that AI not only could become everything man is, but in some sense already has—perhaps even surpassed it.
To finish, I’ll reiterate this: AI is best understood as a simulation—an impressive one, but still only a simulation. As Feser notes, Arthur C. Clarke once remarked that “any sufficiently advanced technology is indistinguishable from magic,” but at most this means we might mistake advanced technology for magic, not that it actually becomes magic. We know something is not magic precisely because we know that it is produced by technology. Likewise, no matter how advanced a computer program becomes, it does not become intelligence. To think otherwise is simply to confuse the appearance of a thing with the thing itself.
PS: The reader might be amused to hear that just prior to finishing this article, my daughter stuck a small piece of candle wax up her nose. ChatGPT immediately provided instructions on how to perform the “mother’s kiss” to get it out—offering an instant resolution to what would otherwise have been a dramatic episode. Although it may not be what man is, AI certainly has its advantages and, like any technology, will ultimately function as a force multiplier for both good and evil acts. I was thankful, at least in that moment, to have it around.
[1] To avoid a possible misunderstanding: nothing I am claiming is meant to deny the brain’s (perhaps indispensable) role in human cognition. What it does deny is that brain activity alone exhausts what understanding is. As I argue in The Best Argument for God, considerations of this sort license only the following claim: even if we cannot think without our brains, we do not think with our brains. The brain may be a necessary condition for human thought, but it is not a sufficient condition for the functioning of the human mind—at least insofar as formal thought and conceptual content are concerned.



