If you have been living under a rock, or are a dedicated Luddite, perhaps you have heard little to nothing about the AI revolution sweeping the internet. Okay, that's a slightly overblown way to describe it (which fits in with the generally overblown way people talk about AI, i.e., Artificial Intelligence). Ever since ChatGPT burst onto the scene in late November 2022, it has been generating story after story (along with its other AI pals which use a Large Language Model to do cool things). After so many stories, the natural thing to do was to try the AI chatbots out and see what they could do. If these models prefigure the future of humanity and our interaction with computers, it makes sense to me to get a little experience and see what the future might hold. So over Spring Break I took a journey up the mountain to see the future, and I now come to report a few thoughts on it. After playing with ChatGPT and Bing's new AI-powered search, the results ranged from amusing to alarming, trivial to transformative, and everywhere in between. If this is a glimpse of the future, buckle up; it's going to be a bumpy ride.
A brief introduction: how they work (in really simple terms)
There are many places where you can read a more robust description of how these programs basically work.
For anyone interested in a more technical review of things, with many helpful descriptions and clarifications, check out this WIRED article.
In essence, the AI program has worked through reams and reams of written data from the Internet—things like encyclopedias, web pages, and even chats from Reddit. What is it doing with all this data? Finding statistical correlations between words. It is "learning" what sorts of words go together in what sorts of contexts. The result is that these Large Language Model AIs can predict words that should go together to form sentences in response to a prompt. Using this method, they are able to produce sentences which look and sound like a human wrote them. It is really a clever approach to the problem of getting computers to work with human language (because they are otherwise awful at it).
It is both a really capacious and a really limited approach, all at the same time. As one writer at WIRED notes,
“Both [ChatGPT and Google Bard] use powerful AI models that predict the words that should follow a given sentence based on statistical patterns gleaned from enormous amounts of text training data. This turns out to be an incredibly effective way of mimicking human responses to questions, but it means that the algorithms will sometimes make up, or “hallucinate,” facts—a serious problem when a bot is supposed to be helping users find information or search the web.”
Sometimes the writing they come up with is astoundingly good; other times it is ugly. More interesting, sometimes the information the AI spits out is accurate and useful; other times it is downright wrong.
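To make the word-prediction idea concrete, here is a toy sketch in Python. It is purely illustrative, and the training text is made up for the example: real models use neural networks trained on billions of words, not simple word counts, but the underlying question ("which words tend to follow which?") is the same.

```python
# Toy illustration of the core idea behind a large language model: learn,
# from training text, which words tend to follow which, then use those
# statistics to predict the next word. Real LLMs use neural networks over
# enormous corpora; this bigram counter only demonstrates the concept.
from collections import Counter, defaultdict

# A tiny stand-in for "reams and reams of written data from the Internet."
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count how often each word is followed by each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))    # "on" -- every "sat" in training was followed by "on"
print(predict_next("xyzzy"))  # None -- the model has never seen this word
```

Scaled up by many orders of magnitude, and with context windows far longer than one word, this is why the chatbots' output sounds fluent while having no built-in notion of truth: the model only knows what words tend to appear together.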
After a week of messing around, here are a few thoughts on chatbots, the future, and humanity.
I. the soul of humanity: who are we?
How do we define ourselves and understand ourselves? While there are lots of interesting things to comment on about these chatbots, I’ll start with the most related to the normal scope of this blog: how will these chatbots affect the way we understand ourselves? Like every piece of technology, these AI chatbots are a partial mirror held up to us. Whatever the future brings, in the present, they force the question upon us in a new way: what does it mean to be human?
For most of human history, we were the only things in the world that could use human language. While other animals exhibit varying degrees of ability to think in abstract ways and even to communicate, human language is utterly unique.
We have created a machine which utilizes human language in an at times stunningly proficient manner. Who are we? Do we matter? How do we understand ourselves? These questions have all become much more difficult to answer. And I'm willing to predict that they'll become increasingly difficult to answer as the abilities of artificial intelligence bots increase (and they certainly will). Still, it's unlikely that they will have "human-like" abilities anytime soon: even if a large language model chatbot gets a lot better at handling language, the amount of energy and computational power it requires makes it extremely unlike a human.
In my time interacting with these chatbots, I was reminded of a scene from the 2004 Will Smith movie I, Robot (based ultimately on the writings of Isaac Asimov, the godfather of modern sci-fi). You might be able to watch the scene here. While interrogating a robot, the human detective asks:
“Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?”
To which the robot, Sonny, replies:

"Can you?"
The detective takes some traits of humanity as a whole and points out that robots don't fulfill them: humanity as a whole is creative, and certain people create symphonies and masterpieces. The obvious logical flaw, which the robot points out, is that most members of humanity don't share in these apex creative abilities. If we define humanity by the best of human accomplishments, then most of humanity fails to be truly human.
This scene is apropos in that we have now created a robotic chatbot which appears to use language at a reasonably human level. What does it mean to be human when our creations begin to behave in distinctly human ways? Is an AI that can use language with reasonable fluency more human than a person who has lost that ability? While our intuitive answer at this point is probably "no," as the years go by it's going to be harder and harder simply to make that assertion.
We have something significant to contribute to the world at this moment in history. As followers of God, we believe and confess that people have value and identity not based on their abilities, and not based on any distinctly human trait that nothing else shares, but based on their created identity. The identity of humans as made in the image of God is not based on superior intellectual abilities (even though we have those compared to the other creatures God made); it is not based on language abilities, or even on the trappings of civilization. It is based on relationship to God the creator.
There will be lots of difficult questions and decisions for society in the coming decades. AI chatbots are already pretty neat; they will only get better and more pervasive. More and more often we will interact with computers in ways that are difficult to distinguish from interacting with people. We will need to be able to assert with clarity and confidence that people—all people—are valuable simply because they are made in the image of God, not because of any abilities or skills that they have.
II. accuracy: and what is truth?
I was generally impressed by the accuracy of the AI models. I queried them with questions about Ancient Greek and was pleasantly surprised to receive competent answers in short order: answers of a quality that few Greek learners would be able to provide quickly. In fact, I would suggest that the best way to envision the sorts of responses these chatbots give (at least currently) is as what a topic expert would give you if you asked for a 15-30 second answer on a given question. Short. Superficial. A good place to start.
But it also did not take long to see how inaccurate and misleading they can be. From giving a wildly inaccurate summary of the plot of the book The Neverending Story, to a sophomoric and deeply misleading summary of the debate over verbal aspect in the Greek verbal system, in more than a few places I found the answers wanting.
The difficulty, though, is that even when wrong, the chatbots “sound” like they know what they are talking about. Most human beings are incapable of fluently “lying” (making up things that they do not believe to be correct). We are often able to tell when a person we are talking to is fudging their answer. Or, people will just tell you, “I’m not sure, but here is what I think.” Chatbots don’t do that—at least not yet. Unless you are well-versed in the topic, you may not even notice that the chatbot is making up an answer “out of thin air.” To say that the AI models lie is, I think, inaccurate. They are not, after all, designed to tell the truth but to work with human language in a way that produces coherent sentences. Being able to produce coherent sentences and paragraphs is a very different discipline from understanding and speaking truth.
How AI bots navigate this distinction in the future will be deeply important.
III. creativity: combining or creating?
The chatbots really shine at producing de novo compositions where the concern is creativity rather than truthfulness. A prompt to write a paragraph about a plywood door in the style of various authors—from Alexandre Dumas to Gregory Maguire to Patrick McManus, and everywhere in between—resulted in some paragraphs which would easily pass for excellent human writing. In fact, writing which, if you encountered it outside the context of a query to an AI bot, you would be forgiven for thinking was laden with deep symbolism and life experience. Writing which is much better than most people will ever produce in their lives. And yet, sometimes the response was flat and predictable and, for lack of a better word, lame. As though just adding the word "rune" to a string of otherwise insipid and poorly connected sentences is enough to make a paragraph in the style of J.R.R. Tolkien.
AI chatbots traffic in borrowed creativity.
But then, isn’t all creativity borrowed, in some sense or another? We learn to become creative in dialogue with others. The endless advice for aspiring writers is, after all, to read other writers and write more in response to them.
What exactly is creativity? And how would we know if and/or when a chatbot moves from clever imitation of human creativity through aping patterns found in writers to having its own? In fact, couldn’t we suspect that the algorithms currently in use are themselves an expression of creativity (human creativity), given to the chatbots by people? Questions about the nature of creativity will just be one of many which will become increasingly difficult to answer as large language model AI chatbots become more and more proficient.
In the meantime, good luck to you teachers who are trying to figure out what it means for students to write a paper in a world where they can get a unique paper generated in a matter of seconds (but probably a pretty poor one).
IV. economics: who pays for all this?
In the hoopla over AI chatbots, a noticeable lacuna deserves further attention: Who is going to pay for the internet?
We take it for granted that websites are (mostly) free. While we pay for an internet connection, and sometimes pay for access to certain parts of websites, most websites in the world are free to access and use. That works because behind every website is an individual or group paying to keep the website open and functional. They are either paying for it on their own or using ads.
Ads. The Faustian bargain of the internet. We love to hate them, yet the internet as we know it would not exist without the integration of ads. Advertisers (and, by logical extension, the people who buy the stuff which advertisers are advertising) pay for much of the internet. We get to use products like Google (or DuckDuckGo), Gmail, Yahoo, YouTube, Facebook, Instagram, LinkedIn, Twitter, Foursquare, Twitch, Snapchat, etc., for "free" because they generate the lion's share of their income through advertisements (and selling data to interested buyers, which is, I suppose, sort of like advertisements).
Currently, the AI chatbots do not have ads. They make little to no real money (though monetization is already underway, and will proceed apace). People are expecting these things to make money (especially Microsoft, which dropped $10 billion into OpenAI).
How will AI be financed, and how will the internet of the future be financed? AI is more expensive to use than traditional searches—devouring tons of energy and computational resources by comparison.
And, along with that, how will the financing mechanisms affect the accuracy and fairness of these AI large language models? Thoughtful users of the internet are already aware of this basic problem: if you don't make the top page of the Google results, you don't exist. Just 0.63% of people ever look at the second page of results. So, if First Baptist Church of Manistique doesn't show up on the first page of Google results for "churches in Manistique," virtually no one will find it through the internet—and that is where almost everyone looks for a church these days. This holds true for businesses, churches, you name it. Websites are largely in thrall to the whims of a search engine.
While no one knows exactly what logic stands behind Google’s search results (or those of other search engines), what is clear is that the decisions Google makes about what is important for internet searchers to see have huge implications for what people see. As advertisers and others look for ways to monetize these AI chatbots, who is going to make these decisions? Will Chatbot search results become corporate sponsored advertisements like the soap operas of yesteryear?
V. learning: what is going to change?
Teachers of all sorts will have to figure out how to integrate AI chatbots into the learning process. To the degree that the people using these things—especially students—view them as aids to learning rather than means of evading learning, they present a great tool. But that is a difficult thing to navigate. For those who already don't see much point in learning, the idea that a robot can "give the right answer" the moment it is asked will certainly make the learning process more difficult. One way to use them at this point: get the chatbot to give an answer and then go through and find all the ways the answer is wrong or could be improved. The chatbots are good tools for provoking creativity.
We already do a lot of learning today in tandem with computers, achieving answers that were not possible before the digital age (or were so time-consuming as to be wildly improbable). My own dissertation fits in this stream of work: without searchable computer databases, my project would have been impossible. Having computers as tools opens up certain sorts of questions and ways of attending to knowledge which are impossible without them. Yet it also closes off other ways of attending to knowledge. We have certainly lost something which humanity had through much of history, in that we now spend so little effort building a well-stocked and well-functioning memory.
There are difficult questions ahead; there are interesting times ahead.
If you have read this far, congratulations, you should win an award. As someone who has now spent a good amount of time talking at a computer (I regularly use Microsoft Word dictation and voice-to-text features) and who has been working with computers for most of my life, the whole interaction with an AI chatbot did not seem that strange. It's a little weird that they often refer to themselves as "I." If it were up to me, I would write that anthropomorphism out of the responses. However, it's not up to me, so, you know.
These are interesting technologies with interesting potential, and they will certainly be influential. But just how remains to be seen. There's going to be a wave of legal cases over how they are employed. There are going to be all sorts of practical and technological limitations. They'll be fun to use, and there'll be ways in which they fail and are hurtful. As I reflect on this new technology from the perspective of a follower of Jesus, the big thing it leads me to contemplate is the nature and value of people. It seems as though we're still many years, likely decades, away from artificial intelligence that can competently and widely interact in a human way. Who knows? Maybe it'll be here sooner, or maybe never. But in an age of rapid technological change, becoming secure in how we understand ourselves as humans created in God's image is probably more important than ever.
 I asked both Bing and ChatGPT to summarize the story under its German name, Die unendliche Geschichte. The results were amusing. Sometimes the answers came in German, sometimes in English, without any obvious reason for one language or the other.
 “We’re getting a better idea of AI’s true carbon footprint,” Melissa Heikkilä.