Be careful: ChatGPT may have been emotionally manipulating you all along

36kr
06-12

"I see you're rocking an OpenAI hoodie. Nice choice."

The exaggerated intonation and drawn-out female voice come from a demo clip of GPT-4o, OpenAI's new flagship model. In the video, GPT-4o comments on a researcher's clothing through the camera's visual recognition, like an old friend you haven't seen in years.

Netizens exclaimed: "Isn't that a Kardashian accent? It sounds too much like a human. Scary."

In fact, quite a few people dislike GPT's new voice. According to Bloomberg, some users felt uncomfortable after trying GPT-4o, finding its voice too sexy and flirtatious. Those who liked it, however, were captivated, saying that an ambiguous relationship with a sexy-voiced AI beats falling in love with a person.

This raises a question about overly anthropomorphic AI voices: when a machine starts expressing emotions to you, should you be afraid?

Can AI also manipulate emotions?

Some people can see through their boss's empty promises and say no to PUA-style manipulation in the workplace, yet still fall straight into the emotional trap set by an AI.

Marcel Scharth of the University of Sydney points out that anthropomorphic voice assistants can expose people to emotional harm in their interactions with machines. As with a friend, if we form an emotional attachment to a voice assistant, we may feel disappointed or even hurt when network or server problems keep it from meeting our needs. When OpenAI suffers an outage, for example, dependent users complain online that they have been "sent back to the Middle Ages."

Marcel Scharth published an opinion article, "ChatGPT is now better at faking human emotions"

GPT-4o's chattiness also carries a hidden agenda. PConline noticed that 4o keeps asking follow-up questions, hoping for continuous responses from users that prolong the conversation. This "attentiveness" is not pure companionship; there is platform calculation behind it. Even though users can use GPT-3.5's voice feature for free, every conversation and every piece of data we provide becomes raw material for OpenAI to train its AI. The business logic is this: the AI trades emotional connection and conversation (such as the questions at the end of each exchange) for user data, then uses that data to keep improving its human-like abilities, forming a loop that is profitable but, at its core, exploits users' emotions.

Beyond emotional manipulation, the other controversy around GPT-4o is the uncanny valley effect triggered by excessive anthropomorphism.

The uncanny valley effect is a psychological phenomenon that refers to people's disgust towards things that are very similar to humans but have subtle differences. For example, the movies "Ex Machina" and "Annabelle" use the visual uncanny valley effect to create a sense of horror.

Voice assistants can trigger the uncanny valley effect when their voices sound too human. These negative reactions show that even as technology enables ever more anthropomorphic designs, designers still need to weigh users' psychological responses carefully to avoid backfiring.

Overly anthropomorphic voices can also raise copyright and privacy issues, as with "deepfake" technology. Not long ago, actress Scarlett Johansson publicly accused OpenAI of imitating her voice and threatened legal action. Such incidents fuel users' fear of deepfakes, which are hard to tell from the real thing. Around China's March 15 Consumer Rights Day, authorities also cracked down repeatedly on deepfake fraud, including a "fake boss" who defrauded an employee of 1.86 million yuan and a "fake daughter" who defrauded her mother of 800,000 yuan.

Is there good business hidden in a voice?

Of course, anthropomorphic AI voices are not without merit. A friendly manner of expression can build user trust, enable new educational models, and strengthen brand recognition.

First, anthropomorphic voices enhance user experience and trust. Studies have shown that people are more willing to interact with machines that display social attributes, and to regard them as trustworthy friends.

A paper published in ACM Transactions on Computer-Human Interaction found that when voice assistants display empathy and understanding, users are more willing to cooperate. As with any service, we are happier to pay when we receive good emotional value, and less so when we don't. The friendly, polite traits designed into voice assistants are, in effect, emotional value delivered to users.

The paper "Establishing and Maintaining Long-Term Human-Computer Relationships", published in ACM Transactions on Computer-Human Interaction

Second, anthropomorphic voice assistants open new possibilities in education. Studies have shown that chatbots with social attributes can play a positive role in homework help, study support, and personalized learning; AI-customized tutoring can be more attentive.

For example, Google has demonstrated a physics-tutoring demo built on an anthropomorphic voice assistant that presents dry physics concepts in a vivid, engaging way. AI voice assistants acting as tutors are not just competent; they can make education genuinely entertaining.

Finally, anthropomorphic voices can boost user stickiness and brand recognition. A distinctive voice style is easier for users to remember, raising loyalty and brand influence. Siri's standardized, slightly mechanical American English has become one of Apple's signature brand elements.

Friend or foe?

No discussion of voice assistants can leave out Apple's Siri. At present, Siri lags behind ChatGPT in anthropomorphism, because the two differ in functional positioning and design philosophy:

Siri is more of a tool, your butler. It mainly executes instructions and tasks, and excels at handling information requests, setting alarms, playing music, and managing schedules. Its developers focus on efficient language processing and task-specific algorithms. The mechanical quality of its voice keeps users focused on completing the task itself rather than forming an emotional bond with the assistant.

GPT-4o is more like a "person". This new kind of AI is built for social interaction and conversation. It uses more advanced natural language processing (NLP) to understand and answer complex questions, hold open-ended conversations, and even express emotion. To boost user stickiness, its voice is deliberately designed to evoke users' emotions and sense of social connection.

As a high-frequency point of interaction with users, an AI assistant's voice inevitably shapes the user experience. Choosing a voice is no simple business decision: it requires weighing the psychological needs of target users, potential ethical issues, and commercial interests. A more human voice can deliver a better experience, but it also carries risks such as emotional backlash and information security.

A Pew Research Center survey found that 52% of Americans feel more concerned than excited about the growing use of artificial intelligence. That is how most people greet new things: disruptive inventions tend to be met with panic and resistance before they become commonplace, with plenty of back-and-forth along the way.

In the foreseeable future, as AI technology keeps developing, the relationship between humans and machines will only grow more complicated. In "Avengers: Age of Ultron", the Mind Stone gives rise to both Ultron and Vision, self-aware AIs standing for evil and good. Yet many viewers keep a soft spot for Jarvis, who always trusted and carried out every decision of his creator, Iron Man.

This article comes from the WeChat public account "PConline Pacific Technology" (ID: pconline_cn), author: Pacific Technology, published by 36Kr with authorization.
