AI cheating storm sweeps US universities: nearly 50% of college students use ChatGPT to boost their grades, while OpenAI quietly shelves its detection tool

36kr
03-19

ChatGPT or CheatGPT?

ChatGPT has become American students' secret weapon for cheating, while OpenAI, wary of losing market share, has quietly shelved a tool that can detect ChatGPT-generated text.

In the US, the WSJ reports, nearly 40% of middle and high school students and nearly 50% of college students use AI to complete assignments and chase higher grades; in some schools, cheating is rampant and undisguised.

More and more American students are quietly using AI tools like ChatGPT to complete assignments and earn good grades, while parents and teachers remain largely unaware.

How prevalent is AI "cheating"?

A 17-year-old high school graduate from New Jersey openly admitted to using AI to cheat in English, math and history classes last year.

Her experience shows how deeply generative AI has penetrated the American education system, allowing this generation of students to easily "outsource" their schoolwork to AI tools loaded with knowledge.

High-tech cheat sheet | Image source: Alexandra Citrin-Safadi/WSJ

Education professionals acknowledge that AI has its value in the classroom.

But when it comes to stopping students from abusing the technology to avoid learning, teachers and parents have been left to work out countermeasures on their own.

The tech companies that provide these AI tools, whether giants like Google or startups like OpenAI, have offered virtually no substantive help on this front.

How did AI become a secret weapon for students?

The 17-year-old explained to the reporter why she turned to AI for dozens of assignments last year:

Sometimes the assignment was too boring or too difficult

Sometimes she wanted a better grade

Sometimes she had procrastinated until the last minute and realized she couldn't finish on her own

She used OpenAI's ChatGPT and Google's Gemini to brainstorm ideas and review material, uses that many teachers allow.

But more often, she simply had the AI do the work: Gemini not only solved her math homework but also earned her a high score on a take-home test.

ChatGPT handled the data calculations for her science experiments and wrote the harder parts of a history paper, which she later rewrote in her own words to evade plagiarism detection.

Google's AI homework helper introduction

In all of this cheating, she was caught only once.

40% of high school students, 50% of college students use AI to do homework

AI is reshaping every corner of white-collar work, from drafting emails and building presentations to generating images.

Some professionals have already lost their jobs to it, and many CEOs are adjusting their future hiring plans.

According to OpenAI, ChatGPT has reached 400 million weekly active users, roughly 5% of the world's population.
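
(A rough check, assuming a world population of about 8 billion: 400,000,000 ÷ 8,000,000,000 = 0.05, or 5%.)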

Students are among its heaviest users.

Growth in ChatGPT's monthly active users

OpenAI's goal is for students to build a lifelong "habit" of turning to ChatGPT for answers, displacing Google from the role it has held for nearly 30 years as the primary gateway for information queries.

A survey by Impact Research last year found that about 40% of middle and high school students admitted to using AI on assignments without their teachers' approval.

Among college students, the figure is close to 50%.

OpenAI's internal analysis also confirms that college students frequently use ChatGPT to assist in writing papers.

In a digital world with little adult supervision, students must decide for themselves whether to use AI tools that can quietly boost their grades; the age restrictions set by AI companies are easily circumvented and effectively meaningless.

The situation recalls the previous generation's first encounter with social media: research on how AI helps or harms student learning, including its potential to encourage cheating, is still thin.

Is AI good or bad for education?

Marc Watkins, Assistant Director of Academic Innovation at the University of Mississippi, said: "This is a huge social experiment that has already begun without anyone's consent."

Although the New Jersey student passed all her courses last year, she admits she learned far less than she should have.

In her senior year of high school, she stopped using AI against the rules. "I decided to take a step back and start thinking with my own brain again," she said.

Paul Graham predicted in a blog post last October that in the age of AI, people will divide into those who write and those who don't.

As for whether AI is to blame for academic misconduct, the AI companies themselves seem largely indifferent.

Siya Raj Purohit, a member of OpenAI's education team, said: "Cheating is not something OpenAI invented, and people who want to cheat will always find a way."

Siya Raj Purohit

But many educators are concerned that the convenience of AI chatbots will tempt more students to avoid difficult learning tasks.

Education sector: AI is not helping learning

As AI technology advances rapidly, a student who uses it cleverly leaves almost no detectable trace in an assignment.

At an education technology conference last October, John B. King Jr., Chancellor of the State University of New York System and former U.S. Secretary of Education, said: "It's likely that a large number of students, from elementary school to college, used ChatGPT to complete their homework last night, and they didn't learn anything. This situation is worrying."

When King voiced this concern at the conference, Purohit, who shared the stage with him, offered a contrarian view.

She suggested that the ability to use AI effectively might itself become a yardstick for critical thinking and communication skills.

She cited a recent conversation with a Wharton School professor who asked, "What is the value of writing in the AI era?"

In response, Daniel Willingham, a cognitive psychologist at the University of Virginia, offered his answer:

Writing cultivates modes of thinking that no other practice can replace.

When you explain something in writing, it forces you to explain more thoroughly; when you argue in writing, it pushes you to argue more completely.

In his course, Jody Stallings, an 8th grade English teacher in South Carolina, has students read Harper Lee's "To Kill a Mockingbird".

At the beginning of each class, he has the students answer questions based on the content they have read.

Stallings says the exercise not only pushes students to think deeply about the book but also sharpens their thoughts through the act of writing.

First edition cover of "To Kill a Mockingbird"

Tech sector: AI will reform education

Tech advocates, however, remain convinced that AI can fundamentally transform and improve the quality of education.

Last year, OpenAI CEO Sam Altman sketched a bright future for education: "In the future, our children will have virtual intelligent tutors that can provide specialized guidance in any subject, in any language, and at each child's own pace."

Leah Belsky, the Vice President of Education at OpenAI, suggests that schools should not resist but actively embrace AI in the classroom to address the issue of cheating.

Leah Belsky: Embrace AI, change the mindset

She states: "If educators can use AI responsibly in teaching and assignments, AI can be transformed from a tool that students secretly use to an important aid in the learning process."

Several institutions and companies have already introduced AI-powered tutoring systems that give students learning support when no teacher is present.

At the same time, some teachers have also started using AI tools to help with lesson planning, assignment design, and drafting parent notifications.

Sandy Mangarella, a high school English teacher in New Jersey, says that AI chatbots have helped her improve her teaching content and design new classroom activities.

She says: "It feels like having an extra colleague to discuss things with at any time."

The Department of Education, state governments, non-profit organizations, and tech companies, including OpenAI, have issued guidance for teachers on how to use AI responsibly, noting that AI-generated information is not always accurate.

Rampant AI Cheating

Most of these guidance documents, however, mention cheating only in passing or ignore it entirely.

Jacob Moon, a high school English teacher in Coosa County, Alabama, says that in the past he rarely caught cheating in his classroom.

But in the current school year alone, he has already found about 20 students using AI in their assignments, including essays.

Moon is worried: "As a teacher, what concerns me most is what will happen to these students when they get to college and the workplace."

Chris Prowell, a sophomore at the school, says that although his classmates often use AI to finish assignments, he never does, worried it would leave him unprepared for college.

He says the rampant AI cheating "is very unfair to those who truly put in the effort."

Some educators doubt that students' unsupervised use of AI at home can be policed at all.

Joshua Allard-Howells, a high school English teacher in Sonoma County, California, says that last year, AI cheating spread like wildfire among his students.

In response, he introduced new measures: students must handwrite drafts in class, and electronic devices are banned.

He says this change has had an unexpected effect: students have started to take writing more seriously, and their work has become more authentic and personal.

The downside is that he has had to stop assigning homework altogether.

He laments: "Whenever I assign homework, the students will use AI to cheat."

AI Writing Proliferates

Dozens of companies now promote apps that claim to complete essays and assignments with AI, "without anyone knowing."

In July, for example, a Facebook ad featured a marketing student wearing a backpack, headphones, and braces, with the caption:

Using the research assistant from You.com, I finished my paper in just a few minutes and even had the references sorted out.

The company behind the research tool has received an investment valuation of nearly $1 billion.

At the start of the current school year, the Estonian company Aithor heavily promoted its writing assistant on Facebook and Instagram.

The ads featured two graduation cap emojis and promised "one-click generation of a perfect paper."

In response, Aithor's Chief Marketing Officer Anatoly Terentyev said in an email: "In reality, we only provide a basic framework, and students still need to refine and personalize the content."

He said the company is re-examining the advertising language.

The ad copy of another AI company, Caktus, is more direct: "Teachers hate us."

Caktus CEO Harrison Leonard explains that this statement refers to "those teachers who resist change."

He argues that college students already know how to write; Caktus AI simply helps them learn to use AI, preparing them for future work. It is not, he stresses, a cheating tool.

Caktus AI's social media presence, however, tells a different story. A post on Reddit claimed:

For the past three years I've been playing football at a top school, and I hate doing assignments and going to practice, all that annoying stuff. So I built software that can instantly generate any paper and instantly solve any assignment.

Leonard, himself a former University of Notre Dame football player, did not respond to the post; he has previously said: "I cannot control how students use this platform."

Patricia Webb, an English professor at Arizona State University, explicitly prohibits AI in her courses, yet she estimates that 20% to 40% of her students use it covertly on their writing assignments.

Without clear evidence, however, she finds it hard to confront these students directly.

As Webb says: "Without evidence, you can't give out punishment."

That means giving passing grades to assignments she is almost certain were AI-generated.

To address this, she has devised a workaround: assigning tasks that require personal experiences or interviews, which are more difficult to outsource to AI.

OpenAI Shelves Detection Tool

The WSJ's reporting found that OpenAI has developed a tool capable of accurately identifying text generated by ChatGPT, but the company has chosen not to release it.

Internal research showed that if the detection feature were launched, nearly 30% of users would use ChatGPT less.

Some teachers have turned to third-party AI detection tools instead.

These tools, however, are often unreliable: sometimes they let AI-generated content pass as original student work, and other times they flag students' genuine writing as AI-generated.

The most widely used detection tool is Turnitin.

Turnitin claims to inspire students to produce original work

The company's Chief Product Officer, Annie Chechitelli, claims the tool identifies AI-generated text with 85% accuracy and rarely flags students' original work as AI-generated.

However, the company has refused to provide the product for accuracy testing.

In 2023, Max Spero founded Pangram Labs, a company initially aimed at helping businesses identify fake product reviews generated by AI.

Pangram Labs claims an accuracy rate of over 99.98%

To his surprise, many of Pangram Labs' clients turned out to be teachers.

In a test, a reporter had ChatGPT write an essay analyzing the themes of "Lord of the Flies" at a 9th-grade level.

Pangram Labs' detection software analyzed the essay and determined it was highly likely to be AI-generated.

Researchers then ran the essay through HumanizeAI.pro, an app that claims to "transform AI-generated content into a natural, authentic human writing style."

Interestingly, on the processed version, Pangram Labs' results turned inconsistent:

The first time the text was tested, the system said it "may contain AI-written content";

When the exact same text was tested a second time, the verdict was "completely human-written."

Spero stated that Pangram Labs is working to develop new technologies to "defeat these masking tools."

Carter Wright, a high school English teacher in the suburbs of Houston, Texas, shared his frustration.

He says he has spent countless hours trying to track down AI cheating: testing free versions of various detection tools and combing through the revision history of his students' Google Docs.

However, the students seem to always be one step ahead, finding new ways to circumvent the system.

Wright laments: "Unless these technologies are banned outright, it is almost impossible to eliminate cheating completely."

Reference:

https://www.wsj.com/tech/ai/chatgpt-ai-cheating-students-97075d3c?mod=tech_lead_pos2

This article is from the WeChat public account "New Intelligence" (author: New Intelligence; editor: KingHZ Dingwei) and is published by 36Kr with authorization.
