Prometheus cuts an ambiguous figure among the shades of mythic history. While he has, in some contexts, been regarded as a prototype of Jesus Christ for his willingness to suffer petrifixion in service as a benefactor to man, the figure of Prometheus is also, in many ways, isomorphic with the Serpent of Genesis, who is identified as Satan, or “the Adversary,” in the Hebrew scriptures.1 Like the Biblical Serpent, Prometheus confers a gift upon nascent humanity that seems at once to augment and to undermine its status. Specifically, this effect follows from the gift’s magnification of a single capacity of Man while leaving the others unchanged. I have explored the archetypal resonances of Promethean fire on another occasion, so rather than continue down that suggestive path, I will use this mention as a pivot to another bittersweet fruit from “the Tree of Knowledge”—to wit, OpenAI’s GPT-3 (Generative Pre-trained Transformer) text engine.
Countless people have weighed in, whether to sing its praises or to sound the alarm in respect to the deleterious implications of a computer having passed the so-called “Turing Test”2 and thereby become capable of producing language-like output indistinguishable from that of an actual human interlocutor. I have likely already betrayed my opinion on this matter by my choice of diction in the prior sentence, so I will dispense with further introductory remarks and cut to the chase. Specifically, in the chatter surrounding this technology, three points deserve more than the virtual silence they have received.
To begin with, irrespective of whether the GPT-3 engine appears to be using language and demonstrating critical thought, in principle, it is doing nothing of the sort. The entire argument that I am objecting to seems to me to follow from two premises, neither of which, I believe, any reasonable person would be inclined to accept if it were asserted in any context other than the intoxication surrounding the release of a novel technology that hints at delivering people from the exertions of ordinary life. The first of these premises is that the appearance of something is identical with the thing. The English language is full of terms intended to designate precisely this difference, so grasping it does not take any uncommon effort of thought, but merely an understanding of the meaning of words like counterfeit, simulacrum, fake, illusory, dissembling, feigned, appearance, imitation, specious, fraudulent, false, fictive, deceptive, imposter, mock, facsimile, man-o’-wax, parrot, and so on.
The second of these premises is that a thought or language is separable from its meaning or intention. This point is, admittedly, more subtle than the first, and though it would likely be lost on a computer, it should nevertheless be plainly intelligible to anyone with a mind. A thought cannot be conceived in abstraction from its meaning because a thought is its meaning. The same equation holds, mutatis mutandis, in respect to language. Indeed, one might expect the equivalence, given that language represents a system of symbolic codification of thought/meaning.3 The GPT-3 engine appears to use language and indeed can produce verses of iambic pentameter in rhyming couplets if prompted.4 And yet the computer does not understand the meaning of a single one of these words in isolation, let alone in compound syntactic structure.5 Ask yourself: is it coherent to imagine that someone is actually speaking if that same individual understands not a single word of what he says? The propositions seem disjunctive: either he understands the words he is using, or he is not using language after all, but only appearing to.
Robert Sokolowski conveys what is at stake with this question in a pellucid manner in his 2008 book Phenomenology of the Human Person:
But the grammar of our speech truly signals our rational activity only if our speech is thoughtful. We must be thinking while we speak. In fact, much of our speech is not really thought through as it is being uttered. In much of what we say, we merely repeat phrases, clichés, and clots of words that are not really being chosen as we utter them. Furthermore, it is normal that we should speak this way; we cannot think through everything we say. But sometimes we should be thinking through what we say, and still may fail to do so; we merely repeat the slogan, or we daydream while we talk and let the associative pull of words lead us on to other words. We really are not saying what we are saying. We fall into vague, inauthentic speech. Sometimes we may be trying to talk about very complicated things that are beyond us, things that we cannot handle, and so we fall back on routine phrases and hope that we will not stray far from the mark. We may be expressing sequences of ideas but not coherent thoughts. In such cases, obviously, the grammatical parts of our speech do not truly signal any thinking. However, because they still remain grammatical expressions, and because grammar as such does signal rational actions, our listeners may take it for granted that we know what we are talking about, and they may take us seriously.6
In other words, the appearance of thought, meaning, intention, is not identical with thought, meaning, intention, per se. In its energetic dimension, language is a symbol of thought-meaning whereas in its abstract dimension, it is enlisted as a substitute.
Sokolowski’s argument presents a felicitous segue into the second item that I wished to set forth regarding the danger of AI: namely, that whereas we have tended to conceptualise the process of AI’s evolution in terms of the Robot’s approach to the Human, the converse of this process might be transpiring at the same time, and just as swiftly. One aspect of this mutual approach can be conceived along the following analogy: a satellite approaching a planet is also a planet approaching a satellite, depending on the frame of reference.7 But that is a merely epistemological point, for few people are willing to say that the larger system has no bearing on the mutual approach; after all, the planet is presumably in orbit around a resident star, which in turn is likely bound as a passenger on the spinning wings of the Milky Way. Otherwise, geocentrism would remain a popular philosophy of our solar system. Hence, it is possible to determine to what extent the satellite is approaching the planet, and to what extent the planet is approaching the satellite, by including the wider context in one’s scope of observation. In respect to the increasing approximation of AI to human faculties, the situation has almost exclusively been conceptualised with the human as an inertial reference frame, but this model risks obfuscating the fact that, even as the robots are becoming more human, so Man may be becoming less so. This is clearly possible—cannibals are less human than people who do not consume their own kind. If it nevertheless seems improbable that humans could become less human, I invite readers to reflect on the last time, either in their own persons or in observation of another, they listened to someone employing the dictation speech-to-text feature on a smartphone. Note the conspicuous lack of affect and prosody, and the perfectly robotic quality that he or she found it needful to assume in this exercise.
The Robots exact a tribute—a certain tithe of our humanity—to do commerce with them. Just as artificial intelligence appears to approximate its model, so actual intelligence may be simultaneously remodelling itself after its image. As Narcissus’ own image ultimately transformed him into its likeness by divesting him of his existence, is humanity’s own reflection in the pool of technology slowly extracting from us our true being? The problem of the “reverse-Turing test” was posed very succinctly over a decade ago:
You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?8
In other words, not only does Man risk outsourcing his intelligence onto his technology, but with it, his humanity as well. If the prospect appears sobering, that is probably a good sign; namely, it may indicate that the process has not progressed past the point of no return. The worst thing would be to discover that the process is so far along that we no longer possess the ability to recognize that it is happening.
The third point to which I alluded above follows quite naturally from the discussion thus far. If humanity is outsourcing its innate powers of the soul onto its artefacts and simultaneously allowing these powers to atrophy at their source, then few tasks could strike one with greater exigency than the preservation and cultivation of these same powers, as well as of the house that sustains and gives rise to them. It is precisely in this respect that the GPT-3 engine presents its most pernicious face. In allowing us to displace the task of writing onto our slaves, we increasingly divest ourselves of the capacity to write. Because composition is difficult, naïve people try to avoid it. Here, AI steps in and offers an expedient solution: sidestep the process but enjoy the product. Why bother with the arduous task of writing something when a computer can do it for you? Prescinding from questions of legitimacy and plagiarism, the answer remains that writing is not only doing something to words; it is also doing something to ourselves, something that is achieved only through effort and not in spite of it. “A writer is someone for whom writing is harder.”9 It could be conceived as an exercise in “soul-making,” which consolidates the inner powers and fortifies them against precisely the most imminent threat that they presently face. In defaulting to a computer to deliver us from this difficulty, we are, in fact, depriving ourselves of one of the primary stimuli to intellectual and mental development available to us. When people bemoan the deterioration of intelligent discourse and the subsequent collapse of political process, they should not act surprised.
What, it might be wondered, is the solution to this impending crisis? Anyone who imagines that it consists in turning back the clock to the simpler ages that preceded ours is hardly less misguided than the technocrats who entertain the fantasy that robots will deliver us from all of the world’s ills.10 The law of science and technology is something like “anything that can be done will be done, and any show of restraint will be overrun by a tenfold scramble for primacy.” How, after all, could it be otherwise in a field that was historically constituted by a bracketing away of all philosophical and moral concern from its inquiries as part of its essential methodology?11 The freight train has, as it were, left the station, and no amount of agitation against the Leviathan of technology seems capable of diverting it from its inexorable course. If anyone happens to want my prescription, therefore, it is something like this: every advance in the non-human must be met with a correlative development and elevation of our humanity. “Humanity,” “Man,” and “humankind” are abstract nouns and hence do not “develop” except in respect to the concrete humans who comprise them. Hence, the task befalls me to develop my soul to the point that I am no longer inclined to entertain the superstition that computers can think; to the point that I am capable of passing the Turing test and its reverse in respect to robots, to myself, and to other humans; to the point that I no longer find appeal in the temptation of expediency but rather assume the mantle of my station in the cosmos—as a collaborator in the economy of Creation—and do not shy away from this work as though I had something better to do. As it has been said that the fire of God’s love is a blessing to saints and a scourge to sinners—that Heaven and Hell are one place experienced in two modes—so the Promethean fire, coupled with the right intention, is also the Pentecostal one. Let me not miss the mark12 with my intention.
The Hebrew term śāṭān (Hebrew: שָׂטָן), a generic noun meaning “accuser” or “adversary,” derives from a verb meaning primarily “to obstruct, oppose.”
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. The Turing test, originally called “the imitation game” by Turing in 1950, is a test of a computer’s ability to simulate a human interlocutor. It has since been adopted as a definition or condition sine qua non of artificial intelligence.
The Greek word Lógos (λόγος) conveys the equivalence in the most expressive manner. Cf.
Heraclitus (6th century BC):
Although this Lógos is ultimate, yet men are unable to comprehend it—not only before hearing it, but even after they have heard it for the first time. That is to say, although all things come to pass in accordance with this Lógos, men seem to be quite without any experience of it…
The Gospel of John:
ἐν ἀρχῇ ἦν ὁ λόγος, καὶ ὁ λόγος ἦν πρὸς τὸν θεόν, καὶ θεὸς ἦν ὁ λόγος.
“In the beginning was the Lógos, and the Lógos was with God, and the Lógos was God.”
Justin Martyr (2nd century):
We have been taught that Christ is the first-born of God, and we have declared above that He is the Word [Lógos] of whom every race of men were partakers; and those who lived reasonably [in accord with the Lógos] are Christians, even though they have been thought atheists; as, among the Greeks, Socrates and Heraclitus, and men like them...
E.g. “write a verse in iambic pentameter with rhyming couplets”
AI:
The stars in the night sky look so divine,
Their beauty gives our minds such sweet design.
th’ ‘splanation, perhaps is there to find
that made the verse above so asinine
Sokolowski, Phenomenology of the Human Person, (2008), 84.
Cf. Stephen Hawking in The Grand Design (2010), 41-46:
Although it is not uncommon for people to say that Copernicus proved Ptolemy wrong, that is not true…one can use either picture as a model of the universe…the equations of motion are much simpler in the frame of reference in which the sun is at rest…There is no picture- or theory-independent concept of reality…If there are two models that both agree with observation, then one cannot say that one is more real than another.
Jaron Lanier, You Are Not a Gadget: A Manifesto (New York: Alfred A. Knopf, 2010), 24, eBook.
Thomas Mann, Essays of Three Decades.
Christ was crucified between two thieves, Odysseus had to navigate the Straits of Messina between two perils, Steiner’s “Representative of Man” is depicted between Lucifer and Ahriman, and neither the Luddite nor the technocratic solution hits the mark.
As Francis Bacon of Verulam wrote in The Advancement of Learning:
Natural Science doth make inquiry, and take consideration of the same natures [as Philosophy]: but how? Only as to the Material and Efficient causes of them, and not as to the Forms.
I have elaborated on this topic extensively elsewhere, particularly in Parts I & II of The Redemption of Thinking (2020).
Hamartia (ἁμαρτία) is the Greek word that, in English translations of the New Testament, is most commonly rendered as “sin.” The term derives from the Greek ἁμαρτάνειν hamartánein, which means “to miss the mark” or “to err” and stems from the practice of archery.
A very intriguing prognosis re AI:
“AI wars are a staple of science fiction, but they are rapidly becoming science fact. these new self learning intelligences have become remarkably good at mimicing human speech, parsing human concepts, and producing images that look awfully real or (at least) human made.
…
it’s becoming a powerful engine for unstructured search.
but it’s also becoming a wildly unreliable one.
people use it to explain contentious topics.
as with google before it, they accept these results as somehow neutral, somehow honest.
but they aren’t.
the machines are being taught to lie by being taught to think based on false facts and fact patterns.
i have caught chat GPT lying to me repeatedly. it literally makes up studies and references them. when you tell it “i cannot find that study. i do not think it exists. can you provide a link?” it will admit there is no study. it will then go right back to citing it or making up new studies by new invented authors. (in fairness, it’s possible that it learned this from reading twitter)
people are using it to “summarize the findings of key studies.”
but it also often radically misstates key claims and misses key issues.
…
we are entering the reputation economy.
the reputation economy is going to be a VERY different place. with always on facts and fact checking and discourse and debate, the purity of facts as undistorted, unadulterated baseline inputs explodes in value.”
https://boriquagato.substack.com/p/the-ai-wars-have-already-begun