The Polarization of AI
It’s dizzying to think that my first post about artificial intelligence, when I had just learned about a large language model that seemed to easily pass the Turing test, appeared just three years ago. I was amazed by this new technology, but also scared by it. I had used the program to write a Heady Topper review, and after it impressed me with its humanness, I wrote: “What’s mind-boggling to me is that someone could offer this review without having gotten anywhere near Vermont. If I were planning to get into high school or college education, this would make me really nervous.”
A year later, I did a three-part series exploring artificial intelligence (part one, two, and three), which was expanding beyond text and into images. Even then—just two years ago—the technology had a gee-whiz quality about it that still amazed and frightened. A man had published a children’s book that was entirely written and illustrated by AI, which seemed almost unbelievable. At the same time, many people were assessing P(doom), the probability of AI-caused societal collapse.
Finally, a year ago I revisited AI once again, at a moment when the technology had started to become ubiquitous. By that time it had improved enough that companies were incorporating it into myriad programs and systems. Then I followed up with one more post over the summer discussing breweries that use it in their label art. My 2024 pieces generated almost no interest. That seemed to mirror the public’s reaction to AI: it had become completely normalized and therefore boring, just part of the wallpaper of daily life.
And then yesterday Eoghan Walsh made a comment on Bluesky about AI and I realized things were boring no more.
Eoghan was put off by the AI image on my Beer 2050 post. I’ve used AI images periodically over the past couple years, and it seemed almost mandatory to use that tech to illustrate a post about the future. Most of the time I used these images (in roughly a dozen of the 250 posts over the past two years), no one commented on them. A couple of times—the image at the top of the post and this one of a disgruntled old man—people commented positively about them on social media. I thought that my use was judicious and appropriate to the content, and because we had come into the boring, ubiquitous phase of AI, it seemed like readers agreed.
Eoghan’s post and the comments that followed suggest otherwise. A few of the representative comments:
“Jeff, article was great and you know I love your work, but the AI slop - you’re better than that!” (Eoghan’s original comment.)
“Indeed, it's unfortunate to see it here. Any use of generative AI is ethically dubious, from the IP concerns to the massive environmental and energy impacts in exchange for something so trivial and valueless, even moreso in any commercial context.” (Nate in VA)
“You might respect the dissenters but if you use generative AI you do not respect artists, or other creative people. It’s not even a discussion, just something you need to come to terms with.” (Matthew Curtis)
“As someone who understands the grey areas of the AI discussion, I will say on a personal level that using generative AI slop in the final product brings one's work down.” (Robin LeBlanc) Boak and Bailey and Jordan St. John weighed in with very similar comments.
Many more people commented, but these capture the general categories of criticism. As someone who has been thinking and writing about this for years, I had come to clarity over many of these issues. The question of using the creative work of humans was one of the first I wrestled with.
The AI programs were definitely trained by scraping the work of writers, artists, and photographers. I am very much aware of this, and indeed, based on a tool the Washington Post offered readers, this site was scraped by Google for their AI. (In the attached screenshot, you can see that they used 380,000 “tokens,” which are words or phrases.) I totally agree that we deserve to be compensated for this work—it is plainly theft, and I expect the New York Times to win their lawsuit against OpenAI. So, I get the issue. To address my concerns about this theft, I thought through the ethics and came up with an approach I could live with. Others will draw the line differently, and I understand the argument that all AI is the fruit of a poisonous tree. People have different ethics.
I was more surprised and a bit mystified by the argument that it’s ugly and therefore shouldn’t be used. The art we choose for our work isn’t always attractive—sometimes by design, sometimes unintentionally. I’m never much persuaded by arguments that amount to “you should have made the decision I made,” and having different standards of aesthetics, whether visual or written, falls into this category. In the few cases I used AI, it was very much intentional, often because AI is ugly or bizarre or funny. Perhaps other people wouldn’t have made the same editorial choice. That’s fine. We have different websites because we have different ideas and approaches.
However, that blanket revulsion against anything AI hinted at a growing blowback I’d missed. (It was notable that while 90% of my readers are American, the substantial majority of critics on Bluesky were European—that may be part of my blind spot.) And here we come to the mononymous James, who offered a clarifying comment I hadn’t considered: “Using AI comes across as ‘picking sides’ in a way it might not have a year or two ago.”
Those who call it “AI slop” or other epithets are rejecting the whole project, at least where it applies to creative work. I wasn’t aware this sentiment was bubbling out there. Honestly, I don’t agree that it is all slop, nor that it never has a place in creative work. But I was shaken by how many people do believe that. And certainly, as someone who wants to connect with readers, it doesn’t actually matter what I think. If I’m alienating the very people I want to speak to, I’m doing it wrong. As a writer, I learned a long time ago that the reader always has the final say. If some portion of you are put off by AI art, even a small group, that’s a good reason for me to consider stopping.
Finally, the question of energy usage is another huge factor here that I only learned about relatively recently—since my last round of AI inquiry. Training AI programs takes up incredible amounts of energy—and of course our grid is built on fossil fuels, so we’re worsening our environment to make silly pictures. Even generating one image drains a decent amount of power. We don’t know precisely how much—partly because AI companies are secretive and deceptive. (No one mentioned the malignancy of AI companies, but that’s a relevant concern as well.) But it ain’t nuthin’. Given that my use of AI is silly and disposable, it hardly justifies the climate cost.
You can probably see where I’m headed here. I appreciated the discussion on Bluesky. It was one of those lively exchanges we used to have on Twitter, the kind I thought might be gone for good. Thank you all for the good-faith engagement. I learned a few things, and the discussion shifted my views. On balance, I am persuaded that it makes sense to stop using Midjourney to generate images for the site. The AI juggernaut changes so quickly that who knows what will happen in the future. For now, though, no more AI for me.