Artificial Isn't Natural
It’s a bit crazy how suddenly artificial intelligence has become mainstream. The technology has been around for years, but now it’s all the rage, with app after app announcing the addition of “AI” to the feature list. I suspect most of this is due to the rise of ChatGPT and other large language models, which have pushed the technology past a tipping point of general adoption.
Until a few months ago, I hadn’t ever really tried ChatGPT, and the first time I did, I was impressed. After the first few questions, the model’s ability to rephrase knowledge into common English really blew me away. However, after an evening of asking the chatbot questions (many of them prompted by my kids), I started to notice a kind of sameness or “flavor” in all of the responses. My intuition told me that this robot was still a robot. I also noticed that it is programmed to keep its responses politically correct and blandly neutral, likely because of the relativistic world we live in, and also because that’s sort of how we expect talking robots to respond.
Suggestion Tool Takeover
But it isn’t just ChatGPT that is using this technology. Most of the software applications I use on a daily basis have quietly added features that suggest the next few words or the rest of my sentence while typing (e.g. “inline suggestions” in Microsoft Outlook) or a quick response to a message from a coworker (e.g. in Microsoft Teams). A host of programmer tools like GitHub Copilot and Codeium are cropping up as well. These tools join the suggestion game by recommending the next few lines of code a programmer should write, usually by looking at the rest of the programming project and drawing on code gathered from across the web. These tools in particular created a bit of a stir when folks started using them: software developers worried they’d lose their jobs, since the tools were so quick at determining what came next and pretty good at writing code, too. However, most folks quickly stopped worrying about their jobs and turned to claiming that the tools save them so much time they can’t live without them. As for me, I pretty rapidly joined the bandwagon and adopted all the new features I could. It was only later that I started thinking about how regular usage impacted how I think, work, and write.
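If you haven’t watched one of these tools at work, the experience looks roughly like this. The snippet below is a hypothetical illustration of my own, not actual output from Copilot or Codeium: you type a signature and a docstring, and the assistant ghost-types a plausible body for you to accept with a keystroke.

    # You type the first two lines...
    def fahrenheit_to_celsius(temp_f: float) -> float:
        """Convert a temperature in Fahrenheit to Celsius."""
        # ...and the tool suggests the rest, inferred from the name and docstring:
        return (temp_f - 32) * 5 / 9

For boilerplate like this, the suggestion is usually exactly what you would have typed anyway, which is precisely what makes it so easy to accept.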
Going Faster, For What?
Like the advent of the vacuum cleaner, the washing machine, and so many other gadgets, the promise here is generally the same: you can work faster and more efficiently. Lately, I’ve been thinking about something I saw on Alan Jacobs’s blog, where he asks why we are trying to go faster:
…You’re zipping through all these experiences in order to do what, exactly? Listen to another song at double-speed? Produce a bullet-point outline of another post that AI can finish for you?
The whole attitude seems to be: Let me get through this thing I don’t especially enjoy so I can do another thing just like it, which I won’t enjoy either.
Yes, in my line of work, tools that help you go faster are usually appreciated and extolled. I spend my free time on slower days automating repetitive tasks, usually with tools like Alfred and Keyboard Maestro. But I wonder whether the push toward quicker content generation is really a good thing. Is the “quick responses” feature in Microsoft Teams actually useful? Or does it take control over something that should remain human in my interactions, even the virtual ones? Does the suggestion of responses, or the completion of my sentences, allow me to be truly creative, thoughtful, and genuine… or is it simply a robot putting words in my mouth? Part of my discomfort with these AI-based “auto-suggestion” tools is something I can’t quite put into words, and for the last few months I honestly didn’t feel it was that big of a deal.
The Blurry JPEG Analogy
But then I read an article from The New Yorker titled “ChatGPT Is a Blurry JPEG of the Web.” In it, Ted Chiang compares large language models to compression algorithms and explains how it all works in plain English, which is super helpful for the non-technical layman. A few quotes in particular stood out:
The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
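That last line is the crux of the analogy, and it clicked for me once I played with it in code. Here’s a deliberately crude sketch of the contrast; the “lossy” step below is only a toy, since real language models interpolate far more cleverly than this:

    import zlib

    text = b"The hours spent choosing the right word teach you how meaning is conveyed."

    # Lossless compression: the roundtrip is byte-for-byte identical,
    # like quoting a source word for word.
    assert zlib.decompress(zlib.compress(text)) == text

    # A toy "lossy" scheme: keep every other word to save space. The original
    # can never be recovered exactly, only approximated -- a crude paraphrase.
    approximation = b" ".join(text.split()[::2])
    print(approximation)  # b'The spent the word you meaning conveyed.'

The lossless version can quote; the lossy one can only gesture at the original. And with a far cleverer interpolation scheme, that gesture starts to look like understanding.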
This explanation helped me see past my initial amazement at ChatGPT’s abilities and realize that the robot doesn’t really understand the content it has ingested; it simply knows how to interpolate between words and phrases to rephrase things in a way that simulates what we recognize as understanding in humans. Mr. Chiang also had this to say about our hopes of using ChatGPT as a mechanism for generating a “first draft” of content:
If you’re a writer, you will write a lot of unoriginal work before you write something original. And the time and effort expended on that unoriginal work isn’t wasted; on the contrary, I would suggest that it is precisely what enables you to eventually create something original. The hours spent choosing the right word and rearranging sentences to better follow one another are what teach you how meaning is conveyed by prose.
Artificial Ingredients Included
My takeaway from this article is that the auto-suggestion mechanisms of AI tools likely aren’t as helpful as they look, especially when considering their impact on my ability to think, write, and articulate my thoughts. Instead, these tools simply foster my dependence on a piece of software that wants me to go faster. The more I thought about this, the more I saw these newly added features as “training wheels”; or perhaps they are what they claim to be: artificial. In the world of food and drink, a label that includes the words “artificially flavored” or “artificial ingredients” usually implies that the contents of the package aren’t healthy, at least not compared to what is “natural.” Perhaps applying this analogy to the tools we adopt might be helpful, even if the analogy has its flaws. Natural (human) intelligence is still eons ahead of artificial intelligence, and there’s something to be said for finding a healthy balance between leveraging the tech and using our brains.
Are there uses for all this AI stuff? I expect so – especially the “machine learning” based improvements that let me quickly retouch a photo on my phone before sending it to someone, or that identify anomalies and patterns in aerial imagery for GIS industry applications. Even large language models will probably find their use… but I suspect the technology still has a long way to go. For now, I’ve turned off most of the suggestion features in my software, and my work and communication feel a lot more natural.