Seeing, Believing, and Charity

Seeing is believing, as the old saying goes. Or, as a common one on the internet puts it, “if there aren’t pictures, it didn’t happen.” These sayings neatly capture a simple reality of how we as human beings live in the world: we depend a lot on our eyes. What we see has enormous influence on what we think, feel, and believe. That’s nothing new. But today there is something new. Today, it is easier than ever to make pictures that aren’t what they seem.

AI pictures

For nearly as long as there have been pictures, people have made fake pictures. In days of old, it took a fair deal of creativity and skill to pull off a decent fake picture. But no longer. Thanks to developments in artificial intelligence image generators, it takes shockingly little effort to make a reasonably good picture. But how good?

Surely, you will be able to tell the difference. Why don’t you give it a try?

Check out this quiz.

You might be a little disoriented that it’s on a German website and in German, but you can handle this. All you have to do is choose whether you think the picture is Fake or a Foto (that’s German for “photo”; I bet you figured that out because you’re clever). After each result, you’ll see a percentage pop up. That is the percentage of people who guessed right.

Why don’t you go check it out?

Seeing and believing

Assuming you tried out the quiz (and if you haven’t, go do it—it is pretty fun), consider the following. At the time I took the quiz, the collective accuracy of participants was right around 64%. To put that into perspective, that’s not much better than the result the same number of people would get if they just guessed randomly, without looking at the pictures at all!

Naturally, this image is from an AI image generator. The prompt that elicited it: guessing with a blindfold, cartoon.

The results of this quiz suggest there’s a very high likelihood you will see fake pictures generated by artificial intelligence and not be able to recognize that they are fake!

Charity

No, the sky is not falling

Life will go on. You don’t need to invest all your savings in gold and go live off the grid to try to avoid the computer apocalypse (at least not yet, anyway 😊).

But, it is time for us to start the long, slow process of preparing ourselves for living in a world where fake pictures are indistinguishable from the real, at least much of the time. Since what we see has such a strong influence on what we believe to be true, it’s going to take some work on our parts to live in this brave new world.

One key plank in our preparations for this high-tech world is the decidedly ancient Christian practice of charity. Charity is the effort to believe the best about other people and to assume that they act with good intentions and in ways consistent with their character. Of course, this “believe the best about people” posture can be grossly abused. Often people prove that we should not believe the best about them. But charity is going to become increasingly necessary. After all, it has never been easier to make a compromising picture or clip of someone and weaponize it against them. All it takes is a bit of time on an AI image generator. Anyone can do it.

Practicing charity with pictures we see online will better prepare us to treat the real people in our lives with charity. Jesus modeled a way of treating others that held together justice and mercy—a pathway of charity. A big part of charity is to see people not just for what they are now, but for what they can become through God’s grace. If having to wade through AI generated pictures helps us train in charity, then it won’t be all bad.

Even if it will be unsettling, confusing, amusing, and a little bit scary.

And just for fun

Here is the response ChatGPT spit out when I asked it to summarize this blog post in two sentences:

The article discusses the prevalence of fake pictures created through artificial intelligence and highlights the challenge of distinguishing between real and fake images. It emphasizes the importance of practicing charity in the digital age, as the ability to believe the best about others and treat them with grace becomes increasingly necessary in a world where manipulated images can be used to harm individuals.

Not bad. Although it feels funny to me to cite AI-generated text in a quote.

AI chat: some thoughts after a week of playing with generative AI

God touches robot’s finger

If you have been living under a rock, or are a dedicated Luddite, perhaps you have heard little to nothing about the AI revolution sweeping the internet. Okay, that’s a slightly overblown way to describe it (which fits in with the generally overblown way people talk about AI, that is, Artificial Intelligence). Ever since ChatGPT launched in late 2022, it has been making story after story (along with its other AI pals, which use large language models to do cool things). After so many stories, the natural thing to do was to try the AI chatbots out and see what they could do. If these models prefigure the future of humanity and our interaction with computers, it makes sense to me to get a little experience and see what the future might be. So over Spring Break I took a journey up the mountain to see the future, and now I come to report a few thoughts on it. After playing with ChatGPT and Bing’s new AI-powered search, my results ranged from amusing to alarming, trivial to transformative, and everywhere in between. If this is a glimpse of the future, buckle up; it’s going to be a bumpy ride.

A brief introduction: how they work (in really simple terms)

There are many places where you can read a more robust description of how these programs basically work.

For any interested in a more technical review of things, with many helpful descriptions and clarifications, check out this WIRED article.

In essence, the AI program has worked through reams and reams of written data from the internet—things like encyclopedias, web pages, and even chats from Reddit. What is it doing with all this data? Finding statistical correlations between words. It is “learning” what sorts of words go together in what sorts of contexts. The result is that large language model AIs can predict which words should go together to form sentences in response to a prompt. Using this method, they are able to produce sentences which look and sound like a human wrote them. It is a really clever approach to the problem of getting computers to work with human language (because they are awful at it).
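As a toy illustration of the idea (a deliberately simplistic sketch, nothing like the neural networks real chatbots use), next-word prediction can be mimicked with simple word-pair counts:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most likely next word. Real large language models use
# neural networks trained on vastly more text, but the core task is the
# same: predict the next word.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))   # "on"
print(predict_next("on"))    # "the"
```

Scale the corpus up to a sizable chunk of the internet and replace the counting with a neural network, and you have the rough shape of the approach—along with its core limitation: the model only knows which words tend to follow which, not whether what it says is true.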

It is both a really capacious and a really limited approach, all at the same time. As one writer at WIRED notes,

“Both [ChatGPT and Google Bard] use powerful AI models that predict the words that should follow a given sentence based on statistical patterns gleaned from enormous amounts of text training data. This turns out to be an incredibly effective way of mimicking human responses to questions, but it means that the algorithms will sometimes make up, or “hallucinate,” facts—a serious problem when a bot is supposed to be helping users find information or search the web.”

Sometimes the writing they come up with is astoundingly good; other times it is ugly. More interesting, sometimes the information the AI spits out is accurate and useful; other times it is downright wrong.

After a week of messing around, here are a few thoughts on chatbots, the future, and humanity.

I. the soul of humanity: who are we?

How do we define ourselves and understand ourselves? While there are lots of interesting things to comment on about these chatbots, I’ll start with the most related to the normal scope of this blog: how will these chatbots affect the way we understand ourselves? Like every piece of technology, these AI chatbots are a partial mirror held up to us. Whatever the future brings, in the present, they force the question upon us in a new way: what does it mean to be human?

For most of human history, we were the only things in the world that could use human language. While other animals exhibit varying degrees of ability to think in abstract ways and even to communicate, human language is utterly unique.

Not anymore.

We have created a machine which utilizes human language in an at times stunningly proficient manner. Who are we? Do we matter? How do we understand ourselves? These questions have all become much more difficult to answer. And, I’m willing to predict, they’ll become increasingly difficult to answer as the abilities of artificial intelligence bots increase. Because they certainly will. It’s unlikely, though, that they will have “human-like” abilities anytime soon: even if a large language model chatbot gets a lot better at handling language, the amount of energy and computational power it requires makes it extremely unlike a human.

In my time interacting with these chat bots, I was reminded of a movie scene from the 2004 Will Smith movie I, Robot (based ultimately on the writings of Isaac Asimov, the godfather of modern Sci-Fi). You might be able to watch the scene here. While interrogating a robot, the human detective asks:

“Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?”

To which the robot, Sonny, replies:

“Can you?”

Stunned silence.

The detective takes some traits of humanity as a whole and points out that robots don’t fulfill them. Humanity as a whole is creative and certain people create symphonies and masterpieces. The obvious logical flaw, which the robot points out, is that most members of humanity don’t share in these apex creative abilities. If we define humanity by the best of human accomplishments, then most of humanity fails to be truly human.

This scene is apropos in that we have now created a robotic chatbot which appears to use language at a reasonably human level. What does it mean to be human when our creations begin to behave in distinctly human ways? Is an AI that can use language with reasonable fluency more human than a person who has lost that ability? While our intuitive answer at this point is probably “no,” as the years go by it’s going to be harder and harder to simply make that assertion.

We have something significant to contribute to the world at this moment in history. As followers of God, we believe and confess that people have value and identity not based on their abilities and not based on any distinctly human trait that nothing else shares, but based on their created identity. The identity of humans as made in the image of God is not based on superior intellectual abilities (even though we have those compared to the other creatures God made); it is not based on language abilities, or even on the trappings of civilization. It’s based on relationship to God the creator.

There will be lots of difficult questions and decisions for society in the coming decades. AI chatbots are already pretty neat; they will only get better and more pervasive. More and more often we will interact with computers in ways that are difficult to distinguish from interacting with people. We will need to be able to assert with clarity and confidence that people—all people—are valuable simply because they are made in the image of God, not because of any abilities or skills that they have.

II. accuracy: and what is truth?

I was generally impressed at the accuracy of the AI models. I queried them with questions about Ancient Greek and was pleasantly surprised to receive competent answers in short order: answers of a quality that few Greek learners would be able to quickly provide. In fact, I would suggest that the best way to envision the sorts of responses these chatbots give (at least currently) is as what a topic expert would give you if you asked them for a 15–30 second answer to a given question. Short. Superficial. A good place to start.

But it also did not take long to see how inaccurate and misleading they can be. From a wildly inaccurate summary of the plot of the book The Neverending Story,[1] to a sophomoric and deeply misleading summary of the debate over verbal aspect in the Greek verbal system, in more than a few places I found the answers wanting.

The difficulty, though, is that even when wrong, the chatbots “sound” like they know what they are talking about. Most human beings are incapable of fluently “lying” (making up things that they do not believe to be correct). We are often able to tell when a person we are talking to is fudging their answer. Or, people will just tell you, “I’m not sure, but here is what I think.” Chatbots don’t do that—at least not yet. Unless you are well-versed in the topic, you may not even notice that the chatbot is making up an answer “out of thin air.” To say that the AI models lie is, I think, inaccurate. They are not, after all, designed to tell the truth but to work with human language in a way that produces coherent sentences. Being able to produce coherent sentences and paragraphs is a very different discipline from understanding and speaking truth.

How AI bots navigate this distinction in the future will be deeply important.

III. creativity: combining or creating?

The chatbots really shine at producing de novo compositions where the concern is creativity rather than truthfulness. A prompt to write a paragraph about a plywood door in the style of various authors—from Alexandre Dumas to Gregory Maguire to Patrick McManus, and everywhere in between—resulted in some paragraphs which would easily pass for excellent human writing. In fact, writing which, if you encountered it outside the context of a query to an AI bot, you would be forgiven for thinking was laden with deep symbolism and life experience. Writing which is much better than most people will ever produce in their lives. And yet, sometimes the response was flat and predictable and, for lack of a better word, lame. As though just adding the word “rune” to a string of otherwise insipid and poorly connected sentences is enough to make a paragraph in the style of J.R.R. Tolkien.

AI chatbots traffic in borrowed creativity.

But then, isn’t all creativity borrowed, in some sense or another? We learn to become creative in dialogue with others. The endless advice for aspiring writers is, after all, to read other writers and write more in response to them.

What exactly is creativity? And how would we know if and/or when a chatbot moves from clever imitation of human creativity through aping patterns found in writers to having its own? In fact, couldn’t we suspect that the algorithms currently in use are themselves an expression of creativity (human creativity), given to the chatbots by people? Questions about the nature of creativity will just be one of many which will become increasingly difficult to answer as large language model AI chatbots become more and more proficient.

In the meantime, good luck to you teachers who are trying to figure out what it means for students to write a paper in a world where they can get a unique paper generated in a matter of seconds (but probably a pretty poor one).

IV. economics: who pays for all this?

In the hoopla over AI chatbots, a noticeable lacuna deserves further attention: Who is going to pay for the internet?

We take it for granted that websites are (mostly) free. While we pay for an internet connection, and sometimes pay for access to certain parts of websites, most websites in the world are free to access and use. That works because behind every website is an individual or group paying to keep the website open and functional. They are either paying for it on their own or using ads.

Ads. The Faustian bargain of the internet. We love to hate them, yet the internet as we know it would not exist without the integration of ads. Advertisers (and, by logical extension, the people who buy the stuff advertisers are advertising) pay for much of the internet. We get to use products like Google (or DuckDuckGo), Gmail, Yahoo, YouTube, Facebook, Instagram, LinkedIn, Twitter, Foursquare, Twitch, Snapchat, etc., for “free” because these companies generate the lion’s share of their income through advertisements (and selling data to interested buyers, which is, I suppose, sort of like advertisements).

Currently, the AI chatbots do not have ads. They make little real money (though monetization is already underway and will proceed apace). People are expecting these things to make money (especially Microsoft, which dropped $10 billion into OpenAI).

How will AI be financed, and how will the internet of the future be financed? AI is more expensive to use than traditional searches—devouring tons of energy and computational resources by comparison.[2]

And, along with that, how will the financing mechanisms affect the accuracy and fairness of these AI large language models? Thoughtful users of the internet are already aware of this basic problem: if you don’t make the first page of Google results, you don’t exist. Just 0.63% of people ever look at the second page of results. So if First Baptist Church of Manistique doesn’t show up on the first page of Google results for “churches in Manistique,” virtually no one will find it through the internet—and that is where almost everyone looks for a church these days. This holds true for businesses, churches, you name it. Websites are largely in thrall to the whims of a search engine.

While no one knows exactly what logic stands behind Google’s search results (or those of other search engines), what is clear is that the decisions Google makes about what is important for searchers have huge implications for what people see. As advertisers and others look for ways to monetize these AI chatbots, who is going to make these decisions? Will chatbot search results become corporate-sponsored advertisements, like the soap operas of yesteryear?

V. learning: what is going to change?

Teachers of all sorts have to figure out how to integrate AI chatbots into the learning process. To the degree that the people using these things—especially students—view them as aids to learning rather than means to evade learning, they present a great tool. But that is a difficult thing to navigate. For those who already don’t see much point in learning, the idea that a robot can “give the right answer” on demand will certainly make the learning process more difficult. One way to use them at this point: get the chatbot to give an answer, then go through and find all the ways the answer is wrong or could be improved. The chatbots are good tools for provoking creativity.

We already do a lot of learning today in tandem with computers, reaching answers that were not possible before the digital age (or were so time-consuming as to be improbable in the extreme). My own dissertation fits in this stream of work. Without searchable computer databases, my project would have been impossible. Having computers as tools opens up certain sorts of questions and ways of attending to knowledge which are impossible without them. Yet it also closes off other ways of attending to knowledge. We have certainly lost something which humanity through much of history has had, in that we now spend so little effort building a well-stocked and well-functioning memory.

There are difficult questions ahead; there are interesting times ahead.

Conclusions

If you have read this far, congratulations, you should win an award. As someone who has now spent a good amount of time talking at a computer (I regularly use Microsoft Word dictation and voice-to-text features) and who has been working with computers for most of my life, the whole interaction with an AI chatbot did not seem that strange. It’s a little weird that they often refer to themselves as “I.” If it were up to me, I would write that anthropomorphism out of the responses. However, it’s not up to me, so you know.

These are interesting technologies with interesting potential, and they will certainly be influential. But just how remains to be seen. There is going to be a wave of legal cases over how they are employed. There are going to be all sorts of practical and technological limitations. They’ll be fun to use, and there will be ways in which they fail and are hurtful. As I reflect on this new technology from the perspective of a follower of Jesus, the big thing it leads me to is contemplating the nature and value of people. It seems as though we’re still many years, likely decades, away from artificial intelligence that can competently and widely interact in a human way. Who knows? Maybe it will be here sooner, or maybe never. But in an age of rapid technological change, becoming secure in how we understand ourselves as humans created in God’s image is probably more important than ever.


[1] I asked both Bing and ChatGPT to summarize the story under its German name, Die unendliche Geschichte. The results were amusing. Sometimes the answers came in German, sometimes in English, without any obvious reason for one language or the other.

[2] “We’re getting a better idea of AI’s true carbon footprint,” Melissa Heikkilä.

Down the rabbit hole: redeeming the news, part 2

Alice falling down the rabbit hole into wonderland

In the last post, I noted how I’ve been falling into the rabbit hole of online news. To switch metaphors, it is like a bug-zapper. Something about it keeps drawing me in, even though I know that getting too close can be perilous. The endless stream of novel stories just waiting to be read beckons me on. How shall we be sensible and faithful in engaging with the world of digital news (or TV news, for that matter)?

Fitting the rabbit hole into life

Orienting principle: there is nothing wrong with reading the news. There can be great benefit in it. But if you find yourself in a place like mine, where turning to the news becomes a burden, a time-sucking draw, it may be time to climb a little way out of the rabbit hole. Here are a couple of things to consider for engaging with the news in a faithful way.

The limits of your time

It should go without saying, but needs to be said anyway, that our time is limited. Whether that be my time at work preparing sermons and other things for church, or our time with our families, or your time doing whatever it is you like to do. We all have limits.

An ever-present threat of online news, TV news, or any sort of online activity is that there is no end. There is no natural disengagement point. Movies end. Games end. But you never reach an end with online news, so there is never a natural cue to stop.

Since the news never runs out, we must be careful of what amounts of time we allow ourselves to spend in engaging with it. Ask yourself how much time you can beneficially spend perusing news stories, then set some way to enforce that time limit.

I'm trying out a browser extension (LeechBlock) on my computer at work to limit the amount of time on different news sites to 10 minutes per every two hours. That's enough time to browse through, find some worthwhile stories, and get back to work.
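The mechanism behind such a limit can be sketched in a few lines (this is not LeechBlock's actual implementation, just a hypothetical illustration of a rolling time budget):

```python
# Hypothetical sketch of a rolling time budget like the one described
# above: 10 minutes of news browsing allowed per two-hour window.
WINDOW = 2 * 60 * 60   # window length in seconds
BUDGET = 10 * 60       # allowed browsing seconds per window

class TimeBudget:
    def __init__(self):
        self.visits = []   # list of (start_time, duration) in seconds

    def record(self, start, duration):
        self.visits.append((start, duration))

    def allowed(self, now):
        # Sum up browsing time that started within the current window.
        spent = sum(d for s, d in self.visits if s >= now - WINDOW)
        return spent < BUDGET

budget = TimeBudget()
budget.record(start=0, duration=9 * 60)        # nine minutes of browsing
print(budget.allowed(now=10 * 60))             # True: one minute left
budget.record(start=10 * 60, duration=2 * 60)  # two minutes more
print(budget.allowed(now=13 * 60))             # False: budget exhausted
```

The point of any such tool is the same: move the decision to stop from the moment of temptation to a calmer moment of planning.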

The goal is to make engaging with the news a part of a broader strategy of preparing for ministry each week, rather than a way to escape from preparing.

It’s worthwhile for all of us to consider the limits of our time and how we engage with the news and other digital media.

Consider the limits of your circle of care

The global scope of online news makes it easy to forget that we have limited circles of care. What is a circle of care? There are only so many people, so many groups, and so many institutions that we can be involved with and meaningfully care about. There are only so many topics about which we can be even marginally informed, and a very tiny number on which we can be experts. It should be humbling to consider the vast number of topics about which each of us knows absolutely nothing.

The news provides a smorgasbord of information that far exceeds any individual’s circle of care.

It’s easy to think that my circle of care is far bigger than it really is. But at some point, you have to wonder: how much should I really care about the pink dolphins in the Amazon River and how the global environmental situation affects them? While my heart goes out to people protesting in Iran over the death of Mahsa Amini, how much do I really care? Better yet, how much time and effort should I spend caring about it? How relevant is it to my life and the lives of the people I live with?

There are no easy answers to these questions, but they are questions we need to ask ourselves periodically.

A practical way to deal with the limits of your circle of care is to come up with a few areas of the news you want to be informed on. Then go find websites which address those areas. Avoid news aggregating sites; they will always bring you things outside of your circle of care. You’ll never hit the bottom of the rabbit hole you’re falling down.

Some news sources I frequent: WIRED, Christianity Today, and Der Spiegel (a major German news outlet).

The limits of your mental/emotional care

Relatedly, it is important to consider the limits of our mental and emotional care. We all have a finite amount of love, concern, and care to give. There are more things in the world to care about (and worthy of care) than we can possibly care about in any meaningful way. Where we spend our care impacts every part of life.

If I have 10 units of emotional care that I can give in a day, and I spend five of them reading news stories that have very little direct impact on my life, what effect might that have on the people I go home to after work? I have to wonder: how much of my emotional and mental capacity am I spending trying to find and understand crises here, there, and everywhere around the world, and what sort of state does this leave me in when I go home and one of my children is having an actual crisis that requires my mental and emotional attention?

Too much attention to the news can burn up our capacity to care for the people in our lives.

You are only human

In summing up these considerations, the main point is that I need to take a little time and remember that I am only human. And I mean that in the best possible sense that one can mean it. Being only human is not a bad thing; it is a glorious thing. But if we try to live as demigods while having only the capacities of a human, we end up short-serving everyone we really live with.

One of the fascinating and wonderful things about being only human is that our abilities are well-suited for caring for and helping people who are connected to where we actually live in life. There are good and appropriate ways we can learn about and be concerned about things happening on the other side of the world. But the chief measure of loving your neighbor is not how much care you have for people on the other side of the world, but how much care you have for people on the other side of the street.

Pulling out of the rabbit hole

Over the course of this week, I’ve been doing some thinking and planning about how to engage with online news in a more limited and focused way. I doubt I have the answer, but I want to keep searching for better practices in how to use online news in life and ministry. The rabbit hole is there, it is bottomless, and it has an endless draw for those who peek their heads into it. I’m working on doing better at standing on my own two feet and not falling in.

Down the Rabbit Hole: redeeming the news, part 1

Alice falling down the rabbit hole into wonderland

The news cycle is shorter than ever. That is commonly accepted wisdom (though it is debatable whether there is anything more worthwhile to say in the endlessly shortening news cycle). One easily gets the impression that the news is a track of sad music on endless repeat. Recently, I have been reminded that the news is an endless rabbit hole that you can fall into and never come back out of.

The news

I used to not really watch the news. In addition to occasional online browsing, I would catch a few minutes of NPR here and there while driving the kids somewhere or commuting back and forth between my house and school.

But recently, I’ve felt myself falling down the rabbit hole of online news.

Online news

Since starting into pastoral ministry in January, I have tried to be more aware of what’s going on in the world. The stuff in the world and community weighs upon the lives of the people who live in the community, so it makes sense to engage with the news. But there is also a danger, which is what I have been noticing recently.

I have started to fall down the rabbit hole.

The rabbit hole

In the last couple weeks, I find myself with a nagging urge to go check the news. Have a couple minutes? Pull up the news tab. Need a break from thinking about the sermon? Go scan the news. And so forth. After finishing a chunk of work, rather than taking a couple minutes to stand up, walk around, and stretch, I find myself browsing news headlines.

Note, browsing headlines is not a very good way to engage with the news to begin with.

Since the headlines change by the minute—even when very little of substance changes that quickly—there is always something new to look at, read, or be interested in. Sometimes there are stories that are worth reading. Sometimes there are stories which promise a juicy tidbit. I have not watched an NFL game in I’m not sure how many years, yet so far this season I’ve seen all sorts of headlines about Tom Brady’s life both on and off the field.

How does the rabbit hole draw us in?

The pull

I suspect the draw to go check the news works much like the well-known and well-studied way that social media apps work. In short, social media platforms use algorithms. All that means is that they use complex mathematical rules describing how pieces of data relate to each other.

Check out here for a brief explanation of how algorithms work on various social media platforms.

Combining these rules with the scads of information the social media company has about you, its user, results in a continuous stream of content directed your way that you should “like” to look at.

 “Like” simply means content that the social media company believes you will take the time to look at, not whether you will find it pleasant, happy, or uplifting.

This all works on a pretty simple premise: our brains crave novelty. Said differently, we notice new things and tune out things that aren’t changing. Just think of the last time you walked into your favorite restaurant. When you first step in, the wall of aromas envelops you. Your nose goes wild as you soak up the delicious scents.

Within a couple minutes, you don’t even notice the smells anymore. But if you got up and walked into another restaurant, your nose would go crazy again. Why is this? Our brains prioritize paying attention to things that are new and changing, not to things that are staying the same. New stimuli—smells, sounds, images, touches—get high priority, but if the stimuli don’t change, in a short time they get downgraded and we no longer pay conscious attention.

Back to the digital world of social media and news. Online companies face one simple problem: the main way they make money is by selling ads, not by charging their users. In the online economy, you are the product. More pointedly, your attention is the product, sold by the tech company to an advertiser. It is in the tech company’s financial interest to keep you browsing as much as possible and coming back as often as possible.

The novelty-seeking brain is key. I want novel content. The news sites give endless novel content. Constantly changing headlines. The endless promise of something good.
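The incentive structure described above can be sketched in code (a hypothetical illustration; real platform ranking systems are far more complex and proprietary, and the names and numbers here are invented):

```python
import time

# Toy feed ranker: score each story by predicted engagement, with a
# freshness bonus so that novel items float to the top. This is the
# basic incentive, not any platform's actual algorithm.
def score(story, now):
    hours_old = (now - story["posted_at"]) / 3600
    freshness = 1 / (1 + hours_old)          # newer story => closer to 1
    return story["predicted_clicks"] * (1 + freshness)

now = time.time()
stories = [
    {"title": "Yesterday's analysis", "predicted_clicks": 0.9,
     "posted_at": now - 24 * 3600},
    {"title": "Breaking headline", "predicted_clicks": 0.6,
     "posted_at": now - 600},
]
feed = sorted(stories, key=lambda s: score(s, now), reverse=True)
# The fresher story outranks the one with higher predicted clicks.
print([s["title"] for s in feed])
```

Notice the design choice baked in: freshness multiplies engagement, so something new always has an edge. That is the bug-zapper glow in mathematical form.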

Is falling into the rabbit hole good?

The dilemma

I’ve been noticing that this increased intake of online news is complicated. On the one hand, I know much more about “what is going on in the world” than I have for quite some time. On the other hand, I’m not really sure that is a good thing. And I am not alone in this hunch.

It turns out, many studies note that watching the news can be deleterious to your health. Beyond the very real possibility that my stress (and yours, too) is heightened by watching the news, I wonder how all this casual news consumption relates to my ability to live well with the people I live with.

Staying out of the rabbit hole

In the next post, I reflect on some different things I am trying in order to put better boundaries around the news in my life. After all, as neat as it is to know things about what is going on all over the world, loving my neighbor as myself certainly should begin with my actual neighbors, not my digital ones.