Chinese PhD student in the US expelled from school for allegedly using ChatGPT to write papers: sued school, demanded public apology + compensation of $575,000

36kr
03-07

Nowadays, AI-generated images, novels, and videos are everywhere, and in universities a growing number of students are using AI tools to help write their papers. The third-party research institution "Meikesu" surveyed more than 3,000 university teachers and students and found that nearly 60% of respondents use generative AI daily or several times a week; in nearly 30% of use cases, AI is used mainly for writing papers or assignments; and some students admit that they often copy AI-generated content directly.

For now, however, the academic community remains strict about the use of AI, and some universities have outright banned using AI to write papers.

In June 2024, East China Normal University and Beijing Normal University jointly released the "Guide for Students on the Use of Generative Artificial Intelligence", requiring students who use AI for academic tasks to clearly distinguish AI-generated content from their own contributions, and to ensure that AI-generated content does not exceed 20%;

In November 2024, Fudan University issued the "Regulations of Fudan University on the Use of AI Tools in Undergraduate Graduation Theses (Designs) (Trial)", which set out "six prohibitions" and detailed the stages in which AI tools are banned, including thesis writing, the defense, and other steps;

On the 5th of this month, Shanghai Jiao Tong University issued the "Regulations of Shanghai Jiao Tong University on the Use of AI in Education and Teaching", requiring students to treat AI as an aid to learning and to abide by each course's AI rules; violations will be investigated and the students involved held accountable.

Some university professors have even warned students directly: if an essay turns out to be AI-generated, it will receive a score of 0.

To be sure, "reasonable" use of AI is the trend. Students who use AI tools to improve efficiency still need to uphold academic integrity; for teachers, detecting AI-generated content is one approach, but it is far from 100% accurate.

According to a report by the American broadcaster KARE 11, the University of Minnesota expelled Haishan Yang, a third-year Chinese international student, last November on the grounds that he had violated the rules by using AI tools to complete an exam. Haishan Yang firmly denied the accusation and sued the school in January this year, demanding $575,000 in compensation, the restoration of his student status, and a public apology.

In a media interview, Haishan Yang said that the expulsion not only caused him mental distress but also seriously disrupted his career plans and invalidated his visa: "For me, this is practically a 'death sentence'."

An Exam That Explicitly Prohibited the Use of AI Tools

According to reports, 33-year-old Haishan Yang was a doctoral student in the Health Services Research, Policy and Management program at the University of Minnesota, pursuing his second doctorate. He had previously earned a bachelor's degree in English Language and Literature from Nanjing Normal University and a doctorate in Economics from Utah State University. His main reason for pursuing a second PhD was to continue his academic career and become a professor.

The incident began in August 2024, when Haishan Yang, while traveling in Morocco, remotely took an important exam: the comprehensive exam required before the doctoral dissertation, in which he had to answer three essay questions within 8 hours. The exam was open-book, allowing the use of class notes, books, and research reports, but it explicitly prohibited the use of AI tools.

After finishing, Haishan Yang felt he had done well and was satisfied with his performance. A few weeks later, however, he received an email notification: he had failed the exam, and the professors believed he had used ChatGPT.

Haishan Yang admitted that in daily life he uses ChatGPT every day, for example to find travel inspiration or to correct grammatical errors in assignments, but he insisted that he did not use AI in that exam, nor even while preparing for it.

According to reports, the professors' main reasons for "strongly suspecting" that Haishan Yang had used AI were: the content of his exam answers went beyond the scope of the course, and they believed he "could not have" produced those answers independently; two professors generated answers to the same questions with ChatGPT and claimed the output was "highly similar" to his; and his answers used a rare abbreviation, "PCO" (Primary Care Organization), which also appeared in the ChatGPT-generated answers, which the professors took as further "evidence".

According to documents provided by Haishan Yang, it was University of Minnesota Associate Professor Ezra Golberstein who first filed a complaint with the school, stating that four reviewing professors unanimously believed that Haishan Yang's exam answers did not match his personal style, and recommending that he be expelled.

Subsequently, the University of Minnesota held a student conduct review hearing, something like an "academic trial". The Office of Community Standards (OCS) handled the investigation and presented several pieces of so-called evidence against Haishan Yang:

Four professors read Haishan Yang's exam answers and found their "style and content" highly similar to answers generated by ChatGPT, and therefore suspected him of cheating.

When the exam's essay questions were fed into ChatGPT, the answers it generated showed certain similarities to Haishan Yang's in structure, terminology, and writing style.

They also found an assignment Haishan Yang had submitted a year earlier, in which he had written a prompt to an AI: "rewrite it, make it more casual, like a foreign student write but no ai". The professors argued that this showed Haishan Yang had a habit of relying on AI.

They also ran Haishan Yang's exam answers through the AI-detection tool GPTZero, which reported that the text might contain AI-generated content.

The Doctoral Student's Counterattack: The Evidence Is Not Credible!

Haishan Yang firmly denied these accusations, arguing that the professors' evidence had serious holes, and raised two key rebuttals:

(1) The ChatGPT answers were modified by the professors "at least ten times": Haishan Yang claimed that the ChatGPT-generated answers the professors submitted at the hearing differed from the initial versions discussed internally. He suggested that the professors may have repeatedly adjusted their ChatGPT prompts until the output more closely resembled his answers, and then used that output to accuse him.

(2) AI-detection tools are inaccurate: GPTZero has repeatedly been shown to misjudge, and many studies have found that it often mislabels human writing as AI-generated. Haishan Yang therefore argued that using GPTZero as the sole evidence is unreasonable. "Suppose an AI-detection tool claims 99% accuracy: what does that mean?" In his view, even a 1% error rate can seriously harm students who never used AI on their assignments.
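The point about error rates is simple base-rate arithmetic, sketched below with hypothetical numbers (the student counts and the 1% false-positive rate are illustrative assumptions, not figures from the case):

```python
# Illustrative sketch: why a "99% accurate" AI detector still harms many
# innocent students at scale. All numbers here are hypothetical.

def expected_false_positives(num_honest_students: int, false_positive_rate: float) -> float:
    """Expected number of honest students wrongly flagged as AI users."""
    return num_honest_students * false_positive_rate

# Assume 10,000 honest submissions are screened with a 1% false-positive rate.
flagged = expected_false_positives(10_000, 0.01)
print(flagged)  # prints 100.0 -- one hundred honest students wrongly accused
```

Because the stakes of a single false accusation are expulsion-level, even a seemingly small error rate translates into many wrongly accused students once a detector is applied campus-wide.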

In addition, Haishan Yang pointed out that he had studied the relevant papers and course material at length before the exam, so his use of certain uncommon technical terms was hardly surprising, and the professors' claim that his answers had an "AI style" was unconvincing.

In fact, the University of Minnesota, like many other universities, has not banned AI outright. Instead, it advises teachers to integrate AI into course design and rethink assessment goals, and not to rely solely on the output of AI-detection software but to treat it only as a "last resort", since many studies have shown that such software is simply not accurate enough.

For example, at the hearing, Joscelyn Sturm, a fourth-year English major at the University of Minnesota, told the review panel that when she ran her own papers through an AI-detection tool, it judged her writing to be AI-generated. She said that she and many other students are afraid of AI-detection software: "The unreliability of AI detection tools will determine whether a student gets a degree or is forced to drop out and go home."

Nevertheless, for the University of Minnesota the professors' evidence was sufficient. In the hearing's final ruling, the review panel, made up of several professors and graduate students from other departments, said it trusted the professors' ability to identify AI-generated writing, and the school ultimately decided to expel Haishan Yang.

Demanding That the School Revoke the Expulsion, Publicly Apologize, and Pay $575,000 in Compensation

Faced with the expulsion decision, Haishan Yang tried to appeal but was rejected, and ultimately chose to sue the University of Minnesota.

In January of this year, he filed suit in Minnesota state and federal courts, accusing University of Minnesota professors of abusing their power, fabricating evidence, and violating his due-process rights. (Haishan Yang admitted that he did use ChatGPT while drafting the legal documents for this lawsuit.) Reportedly, his main demands include:

that the school re-examine his exam results, revoke the expulsion decision, and publicly apologize;

that the University of Minnesota pay $575,000 in compensation for the academic and mental harm caused by his expulsion;

and, in a separate defamation suit against Associate Professor Hannah Neprash, who allegedly produced the ChatGPT-generated "evidence", $760,000 in compensation.

In this affair, Haishan Yang's advisor, Brian Dowd, a senior professor in the University of Minnesota's Public Health and Management program with over 40 years of teaching experience, is one of the few professors who support him. In his own teaching, Brian Dowd allows students to use generative AI, because he believes it is impossible to fully prevent or reliably detect its use.

"I would be surprised if Haishan Yang, or any of us faculty members, had never used these AI tools." In a letter submitted to the hearing, Brian Dowd told the school that Haishan Yang is normally an "extremely knowledgeable and talented student" and that the school's allegations "lack evidence": "In my more than forty years of experience, I have never seen a college show such hostility toward a student. I cannot explain where this hostility comes from."

On this point, Haishan Yang mentioned that he had clashed with some professors in the year before the exam. Records show that the school revoked his financial aid, citing "poor performance" and "inappropriate remarks" during his time as a research assistant. Haishan Yang added: "The graduate program director told me directly at the time that I should consider dropping out."

The case has not yet been resolved, and the University of Minnesota has made no official response. The school's public-relations office said in a statement: "The University of Minnesota always strictly adheres to academic integrity regulations and ensures that all disciplinary actions comply with procedural requirements."

Cases like Haishan Yang's are in fact not isolated. In recent years, universities around the world have run into similar disputes over identifying AI cheating: the inaccuracy of AI-detection tools and the similarity between AI-generated and human answers have made the standards for judging "AI cheating" increasingly blurry. Some scholars argue that accusing a student of cheating solely because their answers resemble AI output risks misjudgment; others point out that the spread of AI has indeed posed new challenges to academic integrity, and that schools should quickly establish clearer standards for identifying AI cheating rather than adopting a "guilty until proven innocent" attitude.

Reference links:

https://www.mprnews.org/story/2025/01/17/phd-student-says-university-of-minnesota-expelled-him-over-ai-allegation

https://www.kare11.com/article/news/local/kare11-extras/student-expelled-university-of-minnesota-allegedly-using-ai/89-b14225e2-6f29-49fe-9dee-1feaf3e9c068

This article is from the WeChat public account "CSDN Programmer's Life", was compiled by Zheng Liyuan, and is published by 36Kr with authorization.
