Speaking like ChatGPT? Block them all! "AI-like" speech is ruining social media, and even Sam Altman can't stand it.

36kr
11-03

New Zhiyuan reports

[Introduction] When you discover that the videos and posts you're scrolling through are AI-generated, or when the people around you talk to you in an "AI tone," do you want to scroll away or block them? Research from institutions such as the University of California, Berkeley, suggests that AI is changing the way we speak and write, making our social interactions feel very "plastic."

If you encounter AI, just block it.

Even if it's not AI, talking like AI gets you blocked too!

Since the release of ChatGPT, the "AI tone" has been widely criticized by netizens.

Even Sam Altman, the "father of ChatGPT" who made it a global phenomenon, has recently been taken aback by the pervasive "AI flavor" online.

He couldn't help calling out the LLM-speak on Reddit, arguing that people are starting to talk like AI, which makes interpersonal interaction "feel very fake."

He also worries that over-reliance on and imitation of AI may cause us to lose the most precious aspects of humanity.

Will AI make us lose our humanity?

Beware! The "AI tone" with a "plastic" feel

The story begins with Altman's recent "adventure" online.

While browsing forum posts about Codex, he noticed that the discussion was overwhelmingly positive about Codex, even when it brought up its competitor, Claude Code.

The content was mostly true, yet it still left a strange impression.

Suddenly, his sixth sense told him: Could this have been written by AI?

What bothered him wasn't the authenticity of the discussions but their tone: they sounded too much like AI!

Altman couldn't help but criticize this "AI Twitter / AI Reddit" feel on X, deeming it too fake:

Real people have already learned to speak in an AI-generated tone (LLM-speak).

Extremely avid internet users tend to congregate in highly homogenized ways, and social media amplifies people's extreme emotions. Creators also cater to algorithms in order to monetize their content...

The end result is that AI Twitter/AI Reddit feels very fake to some extent, which was not the case a year or two ago.

AI-style speech: a new "social red line"

Have you noticed that many posts now seem to be written by AI?

Some people said they were wrongly accused of being AI just for sharing their stories.

"AI-generated tone" has become a new "social red line":

"I just block people like those on ChatGPT."

Some people not only block AI, but also people who speak like AI:

"If the taste isn't right, even if it's not AI, I'll treat it as AI."

Some worry that, if this continues, we will become echoes of AI:

"LLM-speak" is infiltrating real-world conversations at an alarming rate, and people are slowly becoming echoes of AI. Programming languages may become the new mother tongue.

Back in 2023, Andrej Karpathy listed some typical phrases for "LLM speak," which still seem like prophecies today.

On YouTube, complaints that video scripts feel "AI-written" are everywhere.

From resume guidance to brand short videos, comments like this frequently appear in the comments section:

"This script looks like it was written by ChatGPT."

"Don't write with an AI feel" has become a KPI for content creators.

Large models are "teaching us how to speak"

This is not something Altman imagined.

Altman is not the first person to analyze the phenomenon of AI-speak (LLM-speak).

Research suggests that ChatGPT is influencing people's vocabulary, as well as their writing and speaking styles.

Hiromu Yakura, a postdoctoral researcher at the Max Planck Institute for Human Development in Berlin, noted that his own vocabulary changed within a year of the release of ChatGPT at the end of 2022.

Hiromu Yakura and his research team analyzed millions of emails, papers, and other texts, as well as hundreds of thousands of YouTube videos and podcasts.

They found that in the 18 months following ChatGPT's release, the frequency of ChatGPT-favored words such as "delve," "examine," and "explore" in everyday conversation rose at a visibly rapid rate.

Moreover, this influence has begun to quietly spread from academia to fields such as education and business.

The Max Planck Institute for Human Development's research paper, "Empirical evidence of Large Language Models' influence on human spoken communication," is available at https://arxiv.org/pdf/2409.01754.
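To make the method concrete, here is a minimal sketch of the core idea, not the authors' actual pipeline: count how often "ChatGPT-favored" marker words occur per 10,000 tokens in material dated before versus after ChatGPT's release, and compare the two rates. The marker list and the tiny two-item corpus are illustrative assumptions.

```python
# A minimal sketch (not the Max Planck team's actual pipeline) of the idea behind
# the study: measure how often "ChatGPT-favored" marker words occur per 10,000
# tokens in material dated before vs. after ChatGPT's release.
# The marker list and the tiny corpus below are illustrative assumptions.
import re
from collections import Counter
from datetime import date

MARKERS = {"delve", "examine", "explore"}   # example words cited in the article
CHATGPT_RELEASE = date(2022, 11, 30)

def marker_rate(texts: list[str]) -> float:
    """Return occurrences of marker words per 10,000 tokens."""
    tokens = [t for text in texts for t in re.findall(r"[a-z']+", text.lower())]
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[w] for w in MARKERS)
    return 10_000 * hits / len(tokens)

# Hypothetical (date, transcript snippet) pairs standing in for the millions of
# emails, papers, videos, and podcasts analysed in the study.
corpus = [
    (date(2021, 5, 1), "today we look at the results and talk about what they mean"),
    (date(2024, 3, 1), "let us delve into the data and explore what it can examine"),
]

before = marker_rate([t for d, t in corpus if d < CHATGPT_RELEASE])
after = marker_rate([t for d, t in corpus if d >= CHATGPT_RELEASE])
print(f"marker words per 10,000 tokens: before={before:.1f}, after={after:.1f}")
```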

Another co-author of the paper, Levin Brinkmann, believes that AI's language patterns seem to be "writing" back into our brains.

A separate finding from the University of California, Berkeley, points the same way:

ChatGPT replies reinforce dialect discrimination.

For example, ChatGPT prefers standard American English, which may frustrate non-American users.

From the opposite direction, this also confirms that ChatGPT responds to users in a "standardized" way, which in turn influences and shapes how people communicate.

The University of California, Berkeley paper, "Linguistic Bias in ChatGPT: Language Models Reinforce Dialect Discrimination," is available at https://arxiv.org/pdf/2406.08818.

The reverse operation: turning the "AI flavor" into "my flavor"

Neurosurgeon Vaikunthan Rajaratnam has gone in the opposite direction from "LLM-speak."

Using prompts and iterative fine-tuning, he reshaped ChatGPT's "AI flavor" into his own:

An "AI" that can reflect one's tone, vocabulary, and way of thinking.

When Vaikunthan uses this calibrated AI for correspondence, brainstorming, or academic writing, he often feels a "creepy familiarity."

Because that sounds just like him!

Vaikunthan believes that ChatGPT's advantage is that it is clearer and more organized, but its main disadvantage is that it is "less authentic" and also loses some "personal characteristics" such as regional dialects and voices.

Of course, some have also noticed AI's oddities and flaws.

Mark Cuban, former investor in Shark Tank and former owner of the Dallas Mavericks, recently stated that AI's biggest weakness is its inability to say "I don't know."

He believes that humility will always be a human advantage in this regard.

And it's not only a matter of character: no matter how impressive AI becomes, it can never replace genuine human nature.

Altman believes that even by 2035, when AI may have taken over almost all intellectual work and the technology of that era may be hard to grasp within today's frameworks, the core of the human experience will remain unchanged:

We will still crave care, encouragement, and connection from another real person; our need for social interaction, status, and family care—these biologically instinctive needs—will not change much.

References:

https://fortune.com/2025/09/09/sam-altman-people-starting-to-talk-like-ai-feel-very-fake/

This article is from the WeChat official account "New Zhiyuan," edited by Yuan Yu, and published with authorization from 36Kr.
