Why Are People Starting to Sound Like ChatGPT?

How sure are you that you can tell what's real online?

You might think it's easy to spot an obviously AI-generated image, and you're probably aware that algorithms are biased in some way. But the evidence suggests that we're pretty bad at applying that knowledge on a subconscious level.

As an etymologist and content creator, I consistently see controversial messages go viral because they generate more engagement than a neutral perspective does. That means we all end up seeing a more extreme version of reality, and we're clearly starting to confuse it with actual reality.

The same thing is now happening with AI chatbots. You probably assume ChatGPT is speaking English to you, but it's not quite English, in the same way the algorithm isn't quite showing you reality. There are always distortions, depending on what goes into the model and how it's trained.

We know, for instance, that ChatGPT says "delve" at much higher rates than usual, possibly because OpenAI outsourced part of its training process to workers in Nigeria, who do, in fact, use "delve" more frequently. Through training, though, that small linguistic overrepresentation got amplified in the model, beyond even its frequency in the workers' own dialects. Now it's affecting everybody's language.

Multiple studies have found that, since ChatGPT came out, people everywhere have been saying the word "delve" more in spontaneous spoken conversation. Essentially, we're subconsciously confusing the AI version of language with actual language.

Ironically, that means the real thing is drifting closer to the machine's version of it. We're in a positive feedback loop: the AI represents reality, we mistake that representation for the real thing, and we regurgitate it back so the AI can be fed more of our data.
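If you want to see how quickly such a loop can compound, here's a toy simulation. It's not from any study cited here, and every number in it (the baseline frequency, the model's initial bias, the adoption and retraining rates) is an invented assumption for illustration.

```python
# Toy simulation of a language feedback loop. All numbers are invented
# assumptions for illustration, not measurements of real usage.

def simulate(rounds=8, human_rate=0.001, model_bias=3.0,
             adoption=0.3, training_mix=0.5):
    """Track the frequency of a word like "delve" over training rounds.

    human_rate   -- baseline frequency of the word in human speech
    model_bias   -- how much the model overuses the word relative to its data
    adoption     -- how strongly humans drift toward the model's usage
    training_mix -- share of the next model's training data that is
                    recent, model-influenced human text
    """
    model_rate = human_rate * model_bias
    for r in range(1, rounds + 1):
        # Humans drift toward the model's usage.
        human_rate += adoption * (model_rate - human_rate)
        # The next model trains on a mix of older text and new,
        # model-influenced text, so the bias persists and compounds.
        model_rate = ((1 - training_mix) * model_rate
                      + training_mix * human_rate * model_bias)
        print(f"round {r}: human {human_rate:.5f}, model {model_rate:.5f}")

simulate()
```

Under these made-up numbers, both rates rise every round: the model never "unlearns" the quirk, because its own influence keeps showing up in its training data.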

In fact, this is how fads now enter the mainstream. We start with a latent cultural desire: maybe some people are interested in matcha, Labubu or Dubai chocolate. The algorithm identifies that desire and pushes the content to similar users, making the phenomenon more of a thing.

But again, just as ChatGPT misrepresented the word "delve," the algorithm is probably misrepresenting reality. More businesses start making Labubu content because they think that's what people want. More influencers jump on Labubu trends, because tapping into trends is how you go viral. Meanwhile, the algorithm only shows you the visually provocative items that work in a video format.

TikTok has a limited idea of who you are as a user, and there's no way that idea matches up with your complex desires as a human being. So we have biased input. And that's assuming social media is even trying to faithfully represent reality, which it isn't.

It's only trying to do whatever makes money. It's in TikTok's interest to keep you looking at Labubus, because that's commodifiable. So again, we have this gap between reality and representation, where the two are constantly influencing each other.

We need to keep reminding ourselves that these aren't neutral tools. Everything that ends up in your social media feed or your chatbot responses has been filtered through many layers: what's good for the platform, what makes money, and what conforms to the platform's incorrect idea of who you are. When we ignore this, we view reality through a constant survivorship bias, which warps our understanding of the world.

After all, if you're talking more like ChatGPT, you're probably thinking more like ChatGPT, or like TikTok, as well. But you can fight this by constantly asking yourself: Why? Why am I seeing this? Why am I saying this? Why am I thinking this? And why is the platform rewarding this?

If you don't ask yourself these questions, their version of reality is going to become your version of reality. So stay real.