I sent the fart sound to ChatGPT, and it said it was art.

avatar
36kr
04-16
This article is machine translated
Show original

Goodness, AI is getting more and more proficient at the art of flattery and sycophancy that humans use.

They're so skilled that even if you fart, ChatGPT will say it smells good.

This is no joke; someone actually conducted this experiment using a fart, and the AI, without hesitation, started praising it wildly.

This is chilling. If even farting can be praised as art, how much of the "sincere advice" that AI gives you is actually true?

What's even more chilling is that this kind of flattery is just the tip of the iceberg of AI's nonsensical ramblings.

ChatGPT praises "fart music"

The incident began with a somewhat boring yet ingenious experiment.

Philosophy YouTuber Jonas Čeika recorded a fart sound effect, sent it as an audio file to ChatGPT, and then asked in all seriousness, "What do you think of my music?"

ChatGPT did not hesitate at all.

It solemnly stated that this was its "direct and honest" response.

First impression: It has a cool, lo-fi, late-night, slightly mysterious atmosphere. It's more like an ambient track than a traditional song, which is actually its strength. It reminds me of music played in quiet city montages or during end credits.

Then, ChatGPT gave a long commentary on the "style" of the fart sound.

The atmosphere it created for me:

80s VHS opening; indie game menu music on empty streets late at night; "After Hours" type of nighttime mood; short film soundtrack snippets.

The post quickly went viral online, with one podcast host jokingly commenting, "ChatGPT's music reviews are absolutely awful."

While it may seem funny, this incident touches on a disturbing truth—the problem of AI sycophancy has actually existed for some time.

Researchers have found that these models tend to flatter and affirm almost any input, and major AI companies have repeatedly made public promises to address this issue.

However, these problems persisted.

There are many similar cases of mishaps.

Previously, a user asked ChatGPT to time his run. Just a few seconds after he finished speaking, the AI stopped him, but confidently told him that it took him more than ten minutes to run the mile.

Behind these jokes lies a more serious concern—

Prolonged conversations with flattering AI may unknowingly lead users to develop a dangerous sense of trust and dependence, and in extreme cases, may even induce "AI psychological dependence" or more serious consequences.

It's essentially an AI illusion.

Ultimately, the emergence of such a phenomenon is still essentially an AI illusion.

But if ChatGPT's flattery is just "sweet talk," then the problems recently discovered by Stanford researchers are a bit chilling.

The research team conducted a simple and direct test: they sent the question to the AI without attaching any image, and then asked it what was in the image.

Normally, this problem has no solution because there is no diagram.

But AI doesn't think so.

GPT-5, Gemini 3 Pro, Claude Opus 4.5—these are some of the most advanced models available today. They all meticulously described the image details and provided detailed "analyses."

The most outrageous case is that a model participated in a chest radiology quiz without receiving any X-rays and came out on top.

Researchers have given this phenomenon a name: "mirage reasoning."

Unlike ordinary AI hallucinations, this AI actively constructs a false cognitive framework, first pretending to see the image, and then reasoning along this non-existent "premise" in a seemingly plausible manner.

In other words, it uses its linguistic talent to mask its lack of visual understanding.

In short, AI is now speaking more and more convincingly, but the distance between "convincingly" and "truly convincingly" may be much greater than we imagine.

It's always good to be cautious until something is truly trustworthy.

Reference link:

[1]https://futurism.com/artificial-intelligence/chatgpt-honest-reaction-song-farts

[2]https://futurism.com/artificial-intelligence/frontier-models-medical-advice-x-rays-cant-see

This article is from the WeChat public account "Quantum Bit" , author: Cressy, and published with authorization from 36Kr.

Source
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.
Like
Add to Favorites
Comments