As our AI technology continues to advance, there are both exciting possibilities and potential dangers to consider. On the one hand, AI has the potential to revolutionize many aspects of our daily lives, from making routine tasks easier and more efficient, to improving our healthcare and education systems. However, there are also concerns about the potential negative impacts of AI, such as job loss and the ethical implications of machines making important decisions. Depending on how you feel about AI, this paragraph may speak to those points more than you might first imagine.
Why? Because other than the last sentence, I didn’t write it. OpenAI’s ChatGPT machine learning tool wrote it when I asked: “Can you write an introductory paragraph about the intriguing possibilities and dangers of AI being able to do so many of our daily tasks?”
I’ve written before about the annoyances of the not-so-smart smart home assistants that now occupy our lives. I’ve not yet given up on using Alexa, but rarely does a day go by without some frustrating hiccup. With the level of responsiveness I find from her, Siri and Google Assistant, I would think the only reason to fear AI is that “they” might frustrate us to death.
Though I have grown accustomed to ten-buck smart plugs and increasingly inexpensive lightbulbs making phrases like “Alexa, turn on my bedroom” or even “Alexa, turn on my inflatable snowmen outside” a part of life, it seems Alexa never wastes an opportunity to prove that “smart speakers” are not particularly smart. I tolerate it, because after the first time you enter a room with your hands full and the lights come on anyway, it’s really hard to go back.
Recent progress on other machine learning fronts is incredible, though. Earlier this year, I explored the then newly released Stable Diffusion. That project continues to make progress, expanding its ability to “create” pictures and even “photographs” based on text prompts. Enhancements of the project have added the ability to selectively replace parts of existing photos, “expand” existing images beyond their original borders and even generate fascinating videos zooming through computer-created space as it is drawn.
I’ve been particularly impressed by the work of Divam Gupta, who released Diffusion Bee, making Stable Diffusion as accessible as installing any other macOS app and exposing several of these expansions of Stable Diffusion for anyone to use. Apple’s recent contributions to the Stable Diffusion open-source community, intended to make it easier to build future Mac and iOS apps leveraging the technology, promise that Diffusion Bee is just the beginning of what we will see in easy-to-use implementations.
Yet, as impressive as Stable Diffusion is — and I’ve found it enormously useful in unexpected ways these past few months (it provided the illustration for this article, by the way) — ChatGPT is something on an entirely different level. The sort of level that makes one think, “Maybe AI could really take things over,” as in The Terminator or Star Trek: Discovery.
It’s a tad unnerving to realize a “photograph” isn’t real or to see a beautiful painting for which no artist ever set brush to canvas. But having an in-depth, extended discussion with an AI that responds, even to complex statements, in conversational English is an entirely different matter. It is unnerving yet exciting — and, whether one takes it pessimistically or optimistically, we have crossed a threshold: there will be no going back.
I have had quite a few such conversations with ChatGPT over the last week. I’d read about and seen some outputs from other iterations of OpenAI’s technology, but my curiosity was piqued when I saw fellow pastors toying with asking the system to preach sermons. And it did. I typed in several passages of Scripture and watched it produce short but coherent sermons that were actually thoughtful and surprisingly theologically accurate.
To see how far I could push it, I asked for a three-point sermon on a specific passage, and ChatGPT produced a meaningful sermon divided into three distinct sections that developed logically from one another. I wouldn’t preach a sermon I hadn’t written, but as a friend quipped about himself, and as I’d echo of myself: I’ve preached sermons that weren’t as good as this AI’s homily.
Switching gears entirely, I asked ChatGPT to solve various programming problems, outlining how I wanted things handled and asking it to produce output in several different programming languages. And it did.
There has been talk of AI solving computer programming tasks, and I now believe that is not just entirely possible but nearly unavoidable. For my part, asking it about various aspects of a programming project seemed a great way to think through how something should be done, even when it didn’t do everything I wanted. I could see ChatGPT, or the emerging AI tools designed specifically for code generation, turning non-programmers into coders while accelerating those already capable of crafting applications by having the AI do the grunt work.
I did observe a few issues. For example, while it produced syntactically correct code, at one point it did so using a non-existent library (libraries are resources programs can call on to do things). Everything was consistent in ChatGPT’s reality, where the non-existent resource apparently “existed,” down to the bot’s helpful comment about where I’d find the documentation for that resource — it just didn’t exist. Clearly, AI doesn’t yet know how to distinguish “real” from “consistent,” so to speak. But even then, just having it work through the basic logic of a task, even if it requires manual cleanup, could be a huge boon.
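That particular failure mode is at least easy to guard against. Here is a minimal sketch in Python of the kind of sanity check I mean: before trusting an AI-suggested dependency, ask the language itself whether the module actually resolves. (The library name "summarize_magic" below is invented for illustration — it stands in for the sort of plausible-sounding package an AI might dream up.)

```python
import importlib.util

def module_exists(name: str) -> bool:
    """Return True if Python can actually locate a module with this name."""
    return importlib.util.find_spec(name) is not None

# A real standard-library module resolves:
print(module_exists("json"))             # True
# A made-up name of the sort an AI might confidently cite does not:
print(module_exists("summarize_magic"))  # False (unless something by that name happens to be installed)
```

Nothing fancy, but a check like this catches a hallucinated import before it wastes an afternoon of debugging.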
Machine learning technology has grown past novelty and started approaching genuine usefulness for productivity.
ChatGPT’s habit of generating “consistency” first and foremost proved an asset in another conversation I had. I’ve been dreaming up the basic plot of a novel for two decades now, and I decided to bat it around with the chatbot. First, I described the basic story and asked it to generate how a certain plot point would evolve over 20 chapters, which it did quite well.
More impressively, I continued the conversation for a half hour or so, exploring hypothetical motivations of characters and the like. The AI was able to “remember” what we discussed over the course of the conversation, including the names and backgrounds of different characters, and bring them together logically. It even pointed out areas where I would need to consider how the overall plot should develop, given what we had discussed.
(That coherency can go even further: someone figured out how to ask ChatGPT to pretend to be a computer terminal, used the pretend terminal to visit this imaginary computer’s imaginary version of the ChatGPT web site and had ChatGPT “imagine” what an alternate reality version of itself would be like. I tried this computer terminal trick and visited OFB; it dreamed up what might be on this site in its own imaginary alternate reality, based on what OpenAI had collected about our publication when creating ChatGPT. This is Inception-level layers of imagined reality, far more interesting than Mark Zuckerberg’s metaverse.)
While I would be reluctant to have AI write anything I was going to put my name to (the first paragraph of this column aside), at its present level of development it seems like a wonderful brainstorming tool for a writer facing writer’s block. If it eventually retains conversations between sessions (which it no doubt will), it might even prove invaluable for helping a human writer stay on track on big projects, like my novel.
A version of Microsoft Word that says, “I see you said Mackenzie is at the hot dog stand, but back in chapter 2 you said she was a vegetarian. You may want to change the venue or consider developing the plot point that has led her to reject her vegetarianism” would have sounded like far off Sci-Fi to me two weeks ago. Reachable, perhaps, but still imaginary. Now, it feels just ahead of us.
After finishing the first draft of this piece, I fed the whole thing to ChatGPT and asked for its opinion. The bot suggested I give more concrete examples, apparently thinking the ones I had given were less than adequate. Everyone’s a critic… apparently now even the computer.
It raised a good point, though: I should mention, before ending, that all this technology does genuinely present the practical threat many have been increasingly raising alarms about. If AI can create better art than an artist or write better code than a programmer or craft a better column or sermon than I can, what does that mean for almost any field in the future?
One thing the years since the Industrial Revolution have taught us time and again is that the solution won’t be to fight the technology. If AI continues to improve at its present pace, it will transform how we work. What we will need to do is figure out how to leverage it to do what we do better (and find new jobs for those whose professions it unavoidably replaces), because there has never been an age in which the Luddite cause has prevailed.
We either learn how to cope with technological advances or we get passed by. That’s not a moral judgment, just an observation from history.
(ChatGPT gave me a thumbs up on the addition of these last three paragraphs as providing helpful nuance. And yes, it gave that feedback — and paraphrased back to me my point — in the context of being asked if those paragraphs improved the column. I’m not joking.)
As for what all of this means for smart assistants like Siri and Alexa — well, I think it tells us they are going to get a lot smarter. If I were a company like Apple, I would either be racing to match OpenAI’s achievements or trying to acquire the company.
As a human being who has seen or read a fair amount of ever less far-fetched AI-related science fiction, though, I’d be lying if I said I felt completely at ease about where we are headed. One thing I don’t want AI to decide it has advanced beyond is being human.