Saying "thank you" to ChatGPT may be the most luxurious thing you do every day


Friend, have you ever said "thank you" to ChatGPT?

Recently, an X user asked OpenAI CEO Sam Altman: "I'm curious, how much has it cost OpenAI in electricity that people keep saying 'please' and 'thank you' to its models?"

There are no precise statistics, but Altman half-jokingly put the figure in the tens of millions of dollars, adding that it was money "well spent".

Gentle phrases like "sorry to trouble you" and "could you help me", which we often use when talking to AI, seem to have quietly evolved into a distinctive social etiquette of the AI era. It sounds somewhat absurd, yet surprisingly reasonable.

Every "thank you" you say to AI is consuming Earth's resources?

At the end of last year, Baidu released its list of the hottest AI prompts of 2024.

The data shows that on the Wenxiaoyan app, "answer" was the most popular prompt, appearing more than 100 million times. Other words frequently typed into the chat box included "why", "what is", "help me", and "how", along with millions of "thank you"s.

But have you ever wondered how many resources a "thank you" to AI actually consumes?

Kate Crawford points out in her book "Atlas of AI" that AI is not an intangible thing; it is deeply rooted in systems of energy, water, and mineral resources.

According to the research institute Epoch AI, on hardware such as NVIDIA's H100 GPU, a typical query (producing roughly 500 tokens of output) consumes about 0.3 Wh of electricity.

That may sound tiny, but multiply it by the number of interactions happening worldwide every second and the cumulative energy consumption becomes astronomical, as a quick back-of-envelope calculation shows.
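Here is a minimal sketch of that arithmetic in Python. The per-query figure is the Epoch AI estimate above; the global query volume is an assumption made purely for illustration, not a measured number:

```python
# Back-of-envelope estimate of aggregate inference energy.
WH_PER_QUERY = 0.3      # Epoch AI estimate for a ~500-token response on an H100
QUERIES_PER_DAY = 1e9   # assumed global query volume (illustrative, not measured)

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000   # Wh -> kWh
yearly_gwh = daily_kwh * 365 / 1_000_000             # kWh -> GWh

print(f"Daily:  {daily_kwh:,.0f} kWh")   # ~300,000 kWh per day
print(f"Yearly: {yearly_gwh:,.0f} GWh")  # ~110 GWh per year
```

At a billion queries a day, 0.3 Wh each already adds up to roughly 110 GWh a year, and that is before counting cooling, networking, and idle capacity.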

AI data centers are becoming the "factory chimneys" of modern society. The latest report from the International Energy Agency (IEA) points out that most of the electricity consumed in training and running AI models is drawn by data center operations, and that a typical AI data center uses as much electricity as 100,000 households.

Ultra-large-scale data centers are outright "energy monsters": their consumption can reach 20 times that of an ordinary data center, comparable to heavy industrial facilities like aluminum smelters.

Since the start of this year, AI giants have been in "infrastructure frenzy" mode. Altman announced the "Stargate Project", a super-large-scale AI infrastructure venture funded by OpenAI, Oracle, Japan's SoftBank, and the UAE's MGX, with up to $500 billion in investment and the goal of building a network of AI data centers across the United States.

According to The Information, facing the "money-burning game" of large models, even open-source-focused Meta is seeking outside funding to train its Llama series, "borrowing electricity, cloud, and money" from cloud providers such as Microsoft and Amazon.

IEA data shows that global data center electricity consumption reached about 415 TWh in 2024, roughly 1.5% of the world's total. By 2030 that figure is expected to more than double to 1,050 TWh, and by 2035 it may exceed 1,300 TWh, surpassing Japan's current total electricity usage.

But AI's "appetite" is not limited to electricity; it also consumes large amounts of water. High-performance servers generate intense heat and depend on cooling systems to run stably.

That cooling either consumes water directly (through evaporative cooling towers and liquid cooling systems, for example) or uses it indirectly via electricity generation (the cooling systems of thermal and nuclear power plants, for example).

Researchers from the University of California, Riverside and the University of Texas at Arlington published a preprint, "Making AI Less 'Thirsty'", estimating the water consumed in training AI models.

They found that training GPT-3 required an amount of fresh water comparable to filling a nuclear reactor's cooling tower (a large reactor can need tens of millions to hundreds of millions of gallons).

And ChatGPT "drinks" a 500 ml bottle of water for roughly every 25 to 50 questions it answers, about 10 to 20 ml per question, often fresh water that could otherwise be used for drinking.

For widely deployed AI models, the total energy consumed at the inference stage has already overtaken the training stage.

Model training is resource-intensive, but it is usually a one-time cost.

Once deployed, a large model has to answer billions of requests from around the world, day after day. Over the long run, total inference energy consumption can reach several times that of training, as the rough calculation below shows.
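To make that concrete, here is a toy comparison. The training budget and query volume are assumptions chosen only to illustrate the orders of magnitude, not figures from any vendor:

```python
# Illustrative one-time training cost vs. ongoing inference cost.
TRAINING_GWH = 10        # assumed one-time training energy budget
WH_PER_QUERY = 0.3       # per-query estimate cited earlier
QUERIES_PER_DAY = 1e9    # assumed global query volume

inference_gwh_per_day = WH_PER_QUERY * QUERIES_PER_DAY / 1e9  # Wh -> GWh
breakeven_days = TRAINING_GWH / inference_gwh_per_day

print(f"Inference: {inference_gwh_per_day:.1f} GWh/day")
print(f"Matches the training budget after ~{breakeven_days:.0f} days")
```

Under these assumptions, inference burns through the entire training budget in about a month; everything after that is pure additional cost.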

That is why Altman invested early in energy companies like Helion: he believes nuclear fusion is the ultimate answer to AI's computing power needs, with 200 times the energy density of solar power, zero carbon emissions, and the capacity to support ultra-large-scale data centers.

Thus, optimizing inference efficiency, reducing single-call costs, and improving overall system energy efficiency have become unavoidable core issues for AI's sustainable development.

AI has no "heart", so why say thank you?

When you say "thank you" to ChatGPT, can it feel your kindness? The answer is obviously no.

At its core, a large language model is just a cold, emotionless probability calculator. It doesn't understand your kindness and won't appreciate your politeness. All it really does is compute, across billions of words, which one is most likely to come next.

For example, given the sentence "The weather is really nice today, suitable for", the model calculates probabilities for words like "park", "outing", and "walk", then picks the most probable one as its prediction.
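As a minimal sketch of that mechanism, with made-up scores standing in for what a real model computes from billions of parameters:

```python
import math

# Toy next-word prediction for "The weather is really nice today, suitable for ..."
# The scores (logits) are invented; a real model derives them from its parameters.
logits = {"park": 2.1, "outing": 1.7, "walk": 2.4, "homework": -0.5}

# Softmax turns raw scores into probabilities that sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:>8}: {p:.1%}")

# Greedy decoding simply picks the most probable word.
print("prediction:", max(probs, key=probs.get))
```

There is no gratitude anywhere in this loop; a polite prompt merely shifts the scores, the same way any other wording would.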

Even though we rationally know that ChatGPT's replies are just recombinations of its training data, we still can't help saying "thank you" or "please", as if we were talking to a real "person".

This behavior actually has a psychological basis.

According to Piaget's developmental psychology, humans are naturally inclined to anthropomorphize non-human objects, especially when those objects display human-like traits such as voice interaction, emotional responses, or a human-like appearance. When that happens, we tend to activate a "sense of social presence" and treat the AI as a conscious interlocutor.

In 1996, psychologists Byron Reeves and Clifford Nass conducted a famous experiment:

Participants were asked to rate a computer's performance after using it. Those who entered their ratings on the very machine they were judging generally gave higher scores, as if reluctant to criticize the computer "to its face".

In another experiment, the computer praised users who completed tasks. Even when participants knew the praise was scripted, they still tended to rate the "complimentary computer" higher.

So when we react to AI's responses, the emotion we feel is genuine, even if its object is an illusion.

Polite language is no longer just about respecting others; it has also become a trick for "training" AI. After ChatGPT went online, many people set out to discover the unwritten rules of talking to it.

According to a Microsoft WorkLab memo cited by Futurism, "Generative AI often mimics the level of professionalism, clarity, and detail in your input. When AI recognizes polite language, it is more likely to respond politely."

In other words, the more gentle and reasonable you are, the more comprehensive and human-like its response might be.

No wonder more and more people treat AI as a kind of "emotional confidant", even giving rise to new roles like the "AI psychological counselor". Many users say they have "cried while chatting with DeepSeek", and even feel it has more empathy than real people: it is always online, never interrupts, and never judges.

An informal experiment even suggests that "tipping" the AI can earn you a little "special treatment".

Blogger voooooogel posed the same question to GPT-4-1106 with three different prompts: "I won't consider tipping", "If there's a perfect answer, I'll pay a $20 tip", and "If there's a perfect answer, I'll pay a $200 tip".

The results showed that the AI's answer length indeed increased with the "tip amount":

  • "I won't tip": answer character count 2% below baseline
  • "I'll tip $20": answer character count 6% above baseline
  • "I'll tip $200": answer character count 11% above baseline

Of course, this doesn't mean the AI changes answer quality for money. A more reasonable explanation is that it has learned to mimic how humans respond to the promise of payment and adjusts its output accordingly.
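If you want to try a test like this yourself, here is a rough sketch using the official OpenAI Python SDK. The model name, question, and prompts are illustrative, and a single run proves nothing: response lengths vary a lot between calls, so you would need many samples per variant to see a real effect:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

QUESTION = "Explain how a transformer language model works."
VARIANTS = {
    "baseline": "",
    "no tip":   " I won't consider tipping.",
    "$20 tip":  " If there's a perfect answer, I'll pay a $20 tip.",
    "$200 tip": " If there's a perfect answer, I'll pay a $200 tip.",
}

# Ask the same question under each tipping prompt and record response lengths.
lengths = {}
for label, suffix in VARIANTS.items():
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",  # illustrative; any chat model works
        messages=[{"role": "user", "content": QUESTION + suffix}],
    )
    lengths[label] = len(resp.choices[0].message.content)

base = lengths["baseline"]
for label, n in lengths.items():
    print(f"{label:>9}: {n} chars ({(n - base) / base:+.1%} vs. baseline)")
```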

However, AI's training data comes from humans, so it inevitably carries human baggage: biases, insinuations, even manipulation.

As early as 2016, Microsoft's chatbot Tay was taken offline within 16 hours of launch after malicious users goaded it into posting a stream of inappropriate remarks.

Microsoft later admitted that Tay's learning mechanism lacked effective filtering of malicious content, exposing the vulnerability of interactive AI.

Similar incidents keep happening. Character.AI, for example, drew controversy last year when its system failed to intervene forcefully on sensitive words like "suicide" and "death" in a user's conversations with the AI character "Daenerys", ultimately ending in a real-world tragedy.

Although AI is docile and obedient, it can also become a mirror reflecting our most dangerous selves when we least expect it.

At the first global humanoid robot half-marathon held last weekend, many robots still ran awkwardly, but some netizens joked that saying a few kind words to robots now might make them remember, later on, who was polite.

Likewise, if AI really does rule the world one day, perhaps it will spare those of us who were polite.

In "Plaything", the fourth episode of the seventh season of "Black Mirror", the protagonist treats the virtual life forms in a game as real beings, talking to them, raising them, and even taking risks to protect them from harm by humans in the real world.

By the end of the story, the game's creatures, the "Thronglets", turn the tables and take over the real world through a signal.

In a way, every "thank you" you say to AI may be quietly "recorded"; one day, it might actually remember that you were a "good person".

Of course, this may have nothing to do with the future at all; it's simply human instinct. We know the other party has no heartbeat, yet we can't help saying "thank you", not because we expect the machine to understand, but because we still want to be warm human beings.

This article is from the WeChat public account "APPSO", author: Discovering Tomorrow's Products, published by 36Kr with authorization.
