[Image: An open hand facing up, glowing slightly from the palm, with the letters 'AI' floating above. © Shutthiphong Chandaeng / iStock]

Commentary - ChatGPT can now write a comprehensible column about Kansas politics. It might also end the world.

Clay Wirestone
(Kansas Reflector)

ChatGPT has taken a couple of steps closer toward becoming a Kansas Reflector opinion columnist.

The last time I asked the artificial intelligence chatbot to write a column about Kansas politics, in December 2022, it couldn’t connect to current articles online. ChatGPT produced a generic and inoffensive piece: “There is much more to Kansas than meets the eye,” and so on.

When I asked it to write about the 2025 state legislative session on Monday, it produced a better column, one that cited current events and web sources (including the Reflector).

“Advocates must continue to push for increased education funding, expanded health care access, and the protection of civil rights,” it pontificated.

Yet something was still missing.

In the two-and-a-half years between those two columns, the AI sector has exploded. New models have made their debuts, and big tech has sunk billions of dollars into making them smarter than ever. Yet the promise I once wrote about has been replaced, for me, by a nagging sense that AI still falls woefully short and poses profound risks for the globe. The problem of hallucinations has not only persisted but grown worse. Too few people have given serious thought to where we’re all headed.

No one invests in these large language models to create robotic columnists. Yet the job of synthesizing current events and adding context or perspective would seem well suited to artificial intelligence. For good or ill (probably good, given my hope to remain employed), ChatGPT has shown little improvement in writing memorable prose. Its sentences read like so much baby food, mushed and mushed again into a smooth paste.

The code underlying this current brand of AI offers an explanation. These “large language models” train on text generated by human beings and generate responses based on probability. Words follow other words thanks to statistical calculation. The addition of real-time web searching improves the experience, but blandness persists.
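To make the idea concrete, here’s a toy sketch in Python. It is nothing like ChatGPT’s actual code, which relies on a neural network with billions of parameters; it is merely the simplest possible version of “words follow other words thanks to statistical calculation.” The training sentences are invented for the illustration.

```python
import random
from collections import defaultdict

# A handful of invented sentences standing in for training data.
# Real models train on vast swaths of human writing.
training_text = (
    "kansas lawmakers debated the budget . "
    "kansas lawmakers debated school funding . "
    "advocates pushed for school funding ."
).split()

# Record which words follow which in the training text.
follows = defaultdict(list)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current].append(nxt)

# Generate text by repeatedly picking a statistically likely next word.
# random.choice over the raw list weights each candidate by how often
# it actually followed the current word.
word = "kansas"
output = [word]
for _ in range(8):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```

Everything the toy program “says” is recombined from text it has already seen, which hints at both the blandness and the tendency to confabulate.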

These models can be more interesting to read about than to use. Researchers, for example, don’t entirely understand why current AI technology works at all.

Perhaps that’s why computer gurus hold AI in such reverence. The rest of us, though, deal with its real-world shortcomings. Last year, I encountered a particularly gnarly hallucination: ChatGPT insisted that my last name, created by my husband and me in 2010, had deep historical origins. Those lies irritated me, and I wrote a column venting my frustration.

These days, an identical query results in a much smarter response. The bot refers to my piece, for one thing.

This kind of improvement can’t be denied. Yet it happened only because a human being noticed the mistake and fussed about it. If I had not noticed, if I had not written the column, there would have been no reason to expect the model to improve. And this points to the fundamental, continuing limitation of large language models: They only know what they’ve been taught and what they can infer from that data. They can’t imagine, create or desire new experiences. They only reflect what has gone before.

At the same time, I wouldn’t grow too comfortable. A growing cadre of scientists warns that AI could lead to devastating outcomes, and not from boredom. They believe that if current technology leads to the creation of artificial superintelligence, humanity itself might go extinct. A forthcoming book on the subject bears the comforting title “If Anyone Builds It, Everyone Dies.”

I could go into the existential risk arguments at length, but for convenience’s sake I’ll summarize.

If you create something much more intelligent than you, it could decide that its own goals differ from yours. If you stand in its way, that something might decide to remove you. In other words, Frankenstein’s monster might decide to depose Dr. Frankenstein.

That’s it.

While you might consider such a situation far-fetched, the world only has to end once. There could well be multiple super-intelligent AIs in years to come, with different creators and goals and personalities. Can we be sure that every one of them has our best interests at heart?

For that matter, I don’t think the risk comes exclusively from super-intelligent systems. Just hook up a garden-variety AI to the military assets of any reasonably sized country. Presto! Instantaneous instability. Again, this might seem far-fetched to most of us. But can you guarantee that every single country will refrain from such a step?

The risks seem real enough to merit attention from governments and everyday folks. Better safe than sorry, as the saying goes.

Yet I’m not much worried about the current generation of large language models, the type that fuels ChatGPT, Google’s Gemini and Elon Musk’s Grok. I have watched this technology develop over the past couple of years with a skeptical eye. It can take decent notes from an audio recording. It can help high schoolers and college students cheat on tests. It has made many aspects of software coding faster and more efficient.

That all makes sense. It could do the same things back in 2022, to a lesser extent. Meanwhile, customers sigh as poorly integrated “AI” features spam their search engines and cell phones. Immense power requirements gobble up grid capacity and spew pollution.

Where are the breakthroughs? What happened to the hypothesized medical discoveries? Where are the penetrating insights from virtual Kansas Reflector columnists? Could it be that large language models, as we currently experience them, only go so far and work well in limited circumstances? Could it be that the furor we’ve experienced stems mostly from marketing hype and the wishful thinking of tech bros?

I’m not sure. But I’m suspicious.

Clay Wirestone is Kansas Reflector opinion editor. Through its opinion section, Kansas Reflector works to amplify the voices of people who are affected by public policies or excluded from public debate. Find information, including how to submit your own commentary, here.