GPT-4 passes the Turing test with a 54% pass rate. UCSD's new study: humans cannot recognize GPT-4


GPT-4 has passed the Turing test! A UCSD research team ran an empirical study and found that humans cannot reliably distinguish GPT-4 from humans: in 54% of games, GPT-4 was judged to be human.

Can GPT-4 pass the Turing test?

Whenever a sufficiently powerful model appears, people reach for the Turing test to measure the LLM's intelligence.

Recently, researchers from the Department of Cognitive Science at UCSD found that:

In the Turing test, people can't tell the difference between GPT-4 and humans at all!

Paper address: https://arxiv.org/pdf/2405.08007

In the Turing test, GPT-4 was judged to be human 54% of the time.

According to the results, this is the first empirical demonstration of a system passing an "interactive" two-person Turing test.

Researcher Cameron R. Jones recruited 500 volunteers and assigned them to one of five roles: four groups of interrogators, who questioned GPT-4, GPT-3.5, ELIZA, or a human, respectively, and a fifth group who played the human witness, hidden on the other side of the screen and waiting to be identified.

Below is an excerpt from the game. Can you tell which dialog box is a human?

Figure 1: Excerpts from conversations between a human interrogator (green) and witnesses (grey).

In fact, only one of the four conversations was with a human witness; the rest were with AIs.

The first controlled "Turing test"

There have been many attempts at the Turing test over the past 74 years, but few controlled experiments.

The prestigious Loebner Prize ran as an annual competition from its inception in 1990 until 2020, and no system ever passed the test.

A recent large-scale study found that humans could identify a range of LLMs with 60% accuracy in 2-minute online conversations.

To date, there has been no "controlled" experimental demonstration of a machine passing the test.

To test whether people can see through current AI systems, the researchers conducted a randomized, controlled two-person Turing test using GPT-4.

The setup is simple: human participants hold a five-minute conversation with either a human or an AI, then judge whether their interlocutor is human.

As mentioned at the beginning, the researchers evaluated a total of three models - GPT-4, GPT-3.5, and ELIZA.

For the two LLMs, the researchers adapted the best-performing prompt from earlier exploratory studies with GPT-4 and GPT-3.5, instructing the models how to respond to messages.

A central challenge in interpreting the Turing test, the researchers say, is that a system can perform well for two reasons:

- First, it imitates humans very well

- Second, the judges are too credulous

The latter, often referred to as the ELIZA effect, occurs when participants anthropomorphize even simple systems.

ELIZA is a familiar name: a simple, rule-based chatbot from the 1960s. Even so, its users found themselves attributing human traits to it.

Therefore, the researchers used ELIZA as a baseline, both to empirically quantify the ELIZA effect and to serve as a manipulation check in the experiment.

So, what are the specific results of this Turing test?

GPT-4 pass rate 54%

First, let’s look at an important indicator: the pass rate.

The researchers defined the pass rate as the proportion of games in which the interrogator judged the witness to be human.

GPT-4 achieved a pass rate of 54%, exceeding GPT-3.5 (50%) and the ELIZA baseline (22%), but lower than human subjects (67%).
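
For concreteness, here is a minimal sketch of how this metric is computed (with made-up game records, not the study's data):

```python
from collections import defaultdict

# Hypothetical game records: (witness type, was the witness judged human?).
# Values are illustrative only, not the study's data.
games = [
    ("GPT-4", True), ("Human", True), ("ELIZA", False),
    ("GPT-3.5", True), ("GPT-4", False), ("Human", True),
]

# Pass rate = share of a witness type's games in which it was judged human.
totals, passes = defaultdict(int), defaultdict(int)
for witness, judged_human in games:
    totals[witness] += 1
    passes[witness] += judged_human

for witness, n in totals.items():
    print(f"{witness}: pass rate = {passes[witness] / n:.0%}")
```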

Figure: pass rate of each witness type (left) and interrogator confidence (right). Each point in the right panel represents one game.

This result is not surprising, and is broadly consistent with the hypotheses the researchers laid out at the start of the study.

Before the experiment, the researchers registered several hypotheses. First, that interrogators would detect GPT-4 no better than chance (50%).

To test this, the researchers fitted an intercept-only logistic regression model to the pass rate.

The intercept was not significantly different from 0 (z = 0.14, p = 0.482), indicating that GPT-4's pass rate cannot be distinguished from 50%. A Bayesian model put the 95% credible interval for the GPT-4 intercept at [-0.24, 0.55], equivalent to a pass rate of 44%-63%.
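
As a rough illustration of this analysis (a minimal sketch with made-up outcomes, not the authors' code), such an intercept-only model can be fitted with statsmodels, and the intercept mapped back to a pass rate with the logistic function:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-game outcomes: 1 = interrogator judged GPT-4 to be human,
# 0 = judged it to be an AI. Illustrative values, not the study's data.
outcomes = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])

# Intercept-only logistic regression: model the log-odds of "passing"
# with no predictors, so the intercept is the only parameter.
X = np.ones((len(outcomes), 1))
fit = sm.Logit(outcomes, X).fit(disp=0)
intercept = fit.params[0]

# A 50% pass rate corresponds to log-odds of 0, so testing intercept == 0
# is the same as testing pass rate == chance.
pass_rate = 1.0 / (1.0 + np.exp(-intercept))
print(f"intercept = {intercept:.3f} -> implied pass rate = {pass_rate:.1%}")

# The same sigmoid transform maps the paper's credible interval for the
# intercept, [-0.24, 0.55], to pass rates of roughly 44% and 63%.
for b in (-0.24, 0.55):
    print(f"log-odds {b:+.2f} -> pass rate {1 / (1 + np.exp(-b)):.0%}")
```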

In other words, when facing GPT-4, a human's chance of correctly identifying it is about the same as a coin flip!

The results also confirmed the researchers' second and third hypotheses: the human pass rate was significantly higher than GPT-4's (z = 2.42, p = 0.017), and GPT-4's pass rate was higher than ELIZA's (z = 4.11, p < 0.001).

However, contrary to the researchers' hypothesis, GPT-4 and GPT-3.5 performed almost identically (z = 0.497, p = 0.619).

When it came to passing as human, the two were tied.

Also unexpectedly, participants who reported a better understanding of how LLMs work were no more successful at identifying them (z = -0.023, p = 0.869).

In exploratory analyses, the researchers found that age had a negative impact on interrogator accuracy (z = -2.24, p = 0.025), but other demographic variables (including gender, education level, and frequency of interaction with the chatbot) had no significant effect.

Why is this the case? Let's analyze the reasons behind it.

To understand what actually influences interrogators' decisions, the researchers categorized the strategies interrogators used and the reasons they gave for their verdicts.

36% of interrogators engaged in small talk with the witness, asking about personal details or daily activities.

The second and third most common strategy categories were social and emotional (25%), for example asking about the witness's opinions and experiences and probing for a sense of humor, and knowledge and reasoning (13%), including questions about mathematics, logic, and common sense.

Figure: interrogator strategies (left) and their average accuracy (right).

To compare the effectiveness of each strategy, the researchers calculated the average accuracy of the games in which it was used.

Interrogators who asked about human experience (75% accuracy), logic and mathematics (78%), or directly accused the other party of being an AI (80%) were the most likely to judge correctly.
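
As an illustration of this computation (with a hypothetical game log and invented column names, not the authors' code), a per-strategy accuracy table could be built like this:

```python
import pandas as pd

# Hypothetical game log: one row per game, with the main strategy the
# interrogator used and whether their final human/AI verdict was correct.
# Column names and values are illustrative, not the study's data.
games = pd.DataFrame({
    "strategy": ["small_talk", "logic_math", "accusation",
                 "human_experience", "small_talk", "logic_math"],
    "correct":  [0, 1, 1, 1, 1, 0],
})

# Average accuracy per strategy: the fraction of games using that strategy
# in which the interrogator judged correctly.
accuracy = games.groupby("strategy")["correct"].mean().sort_values(ascending=False)
print(accuracy)
```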

In the games, AIs were most often caught out because interrogators felt they were forcing themselves into a role, were too casual, or lacked personality.

ELIZA was easily exposed, mainly because of its unnatural responses and lack of contextual awareness.

Notably, both the strategies volunteers used and the reasons they gave for their verdicts focused more on linguistic style and socio-emotional factors than on traditional categories of intelligence such as knowledge and reasoning.

This seems to reflect an underlying assumption among the volunteers: that social intelligence is the human trait AI is least able to imitate.

Turing test passed

How to tell when a system has passed the Turing test?

In his 1950 paper, Turing predicted that by the end of the century, AI systems would play the imitation game so well that an interrogator would have no more than a 70% chance of making the correct identification after a five-minute exchange.

This prediction has made 30% a much-discussed benchmark: if the interrogator is correct at most 70% of the time, the machine is judged to be human at least 30% of the time.

However, this benchmark seems arbitrary, and it is unclear whether Turing meant it as a definition of success.

In fact, a 50% baseline is more reasonable, because passing it shows that interrogators are no better than chance at identifying the AI.

This definition is particularly relevant to the question of whether users can reliably identify other humans in online interactions.

In their preregistered analyses, the researchers addressed this issue by using the ELIZA baseline as a manipulation check.

Only if the analysis shows that ELIZA's pass rate falls below chance while GPT-4's does not can the model be judged to have passed the test.
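
As a sketch of this decision rule (using illustrative counts and simple binomial tests in place of the paper's regression models), the two conditions could be checked like this:

```python
from scipy.stats import binomtest

# Illustrative pass counts out of hypothetical game totals, not the study's data.
eliza_passes, eliza_games = 22, 100   # ~22% pass rate
gpt4_passes, gpt4_games = 54, 100     # ~54% pass rate

# Manipulation check: ELIZA must pass significantly LESS often than chance (50%).
eliza_below_chance = binomtest(
    eliza_passes, eliza_games, p=0.5, alternative="less"
).pvalue < 0.05

# GPT-4, by contrast, must NOT pass significantly less often than chance.
gpt4_below_chance = binomtest(
    gpt4_passes, gpt4_games, p=0.5, alternative="less"
).pvalue < 0.05

passed = eliza_below_chance and not gpt4_below_chance
print(f"Passed this version of the Turing test: {passed}")
```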

And by this definition, GPT-4 clearly passes this version of the Turing test.

In addition, participants' confidence scores and stated reasons indicate that they were not guessing at random: when interrogators judged GPT-4 to be human, their average confidence was 73% (see Figure 2).

Netizen discussion

Some commenters said that if the Turing test is equated with intelligence, it is not a good test; even so, the fact that AI can so consistently "fool" humans is a little worrying.

Others said they were skeptical of the test because GPT-4 would outperform most people, which should make it easy to tell human from AI.

The researchers acknowledged that this is indeed a problem they encountered: for example, GPT-4's knowledge is "too rich" and it has mastered too many languages. They explicitly prompted the model to avoid giving this away, which worked to a certain extent.

References

https://x.com/camrobjones/status/1790766472458903926

https://x.com/emollick/status/1790877242525942156

This article comes from the WeChat public account "New Intelligence" (ID: AI_era), authors: Taozi and Yongyong, and is published by 36Kr with authorization.
