AI Is A Flat Circle

Submitted by JKHoffman on

(Yes, that’s a reference to True Detective.)

AI, or more properly LLMs, is the technology that will eat its own tail.  
First of all, I have no issue with large language model tools like ChatGPT.  Like any other digital tool, they are neither inherently good nor bad.  Fire can cook my food or burn my house down, depending on how I use it.  Second of all, these tools are far from actually intelligent, no matter how miraculous they seem.  They are not thinking in any sense of the word.  They are merely extrapolating more quickly than we can comprehend, using an enormously complicated rule set that, at this point, I don’t think anyone fully understands.  All of this is only possible due to relatively recent advancements in computing, which now calculates at speeds that even computer professionals find hard to grasp.  For me, the truly interesting development is the natural language processing of the user queries.  In my mind, that ability to take casual, everyday language and produce any kind of usable output is incredible.  Of course, just like with search engines, knowing what you’re talking about and being able to formulate a good query goes a long way toward getting good outputs.  Or, as someone once put it, being able to formulate a good question is halfway to solving your problem.

Here’s the thing, though: with so many people using LLMs to generate content, many fear that eventually the only “inputs” or training data available will be older output from those same LLMs!  The biggest complaint about AI- or LLM-generated content is that it’s generic and all starts to sound the same.  If that’s true now, imagine what it will be like in just ten or fifteen years, as original training data becomes harder and harder for developers to come by.  And maybe I’m being generous with that timeline, given how quickly we’ve seen tools like ChatGPT and Claude.ai take off.  Just two or three years ago, almost no one had heard of these tools, or they didn’t even exist in a form available to the public.  I know I was one of the early adopters among my peer group when I used ChatGPT to summarize and rewrite some of my notes into a short report at work.  Though, it’s possible that I should be worried about how dry my writing had become, considering that no one noticed it was generated text and not strictly my own work.

I recently read two books, both of which had significant things to say about these tools: More Than Words and Against Platforms.  More Than Words: How to Think About Writing in the Age of AI is written by John Warner, who is a writing teacher.  Naturally, his argument is a little biased, but not entirely without merit.  Essentially, he tells us that writing is more than just generating content.  Many of us write to learn, not just about better communication, but also about the subject of our writing.  I cannot disagree with his assertion that being a competent communicator is a better-than-fair goal and, like it or not, words, in particular written words, are the tools we use to communicate.  It follows, then, that practicing writing, and practicing clear thinking through our writing, is not only a laudable pursuit but a skill that can easily atrophy through the overuse of LLM-based tools.  If we’re doing nothing more than generating content for other robots to ingest, without ever concerning ourselves with a human audience, then what we’re doing can hardly be considered “writing” by his definition.
Against Platforms: Surviving Digital Utopia by Mike Pepi takes a slightly different tack.  He argues, much as I do, that LLMs, which he prefers not to call “AI” for the same reason I dislike the term, namely that they are far from intelligent, are just tools whose quality varies with the choices made by their designers and programmers, and whose results likewise vary with the skill of their users.  The results I get with a hammer differ from the results a master carpenter gets with the same hammer.  He does, however, warn us against trying to solve social or political problems with technology.  Naturally, his arguments extend far beyond LLMs, but I agree that using LLMs as a surrogate for things like actual human interaction is unlikely to produce positive results.  Loneliness is not cured through better programming.

So, what does it all mean?  Honestly, I don’t know.  I do know that I will continue to use tools like ChatGPT for certain aspects of my writing and research.  I will use them to brainstorm as long as I find that useful.  And I will definitely use them to write code faster than if I were piecing it together myself from individual searches.  I still plan to do my most creative writing myself, because my real intelligence still does it better than the artificial kind can, and I enjoy it.  Perhaps, one day, I’ll be one of the few remaining real people on the internet.