“In Part” Usage of AI and Academic Integrity

Political Science Educator: volume 27, issue 1

Reflections


Cristina Juverdeanu

In academic integrity and assessment and feedback circles, we had barely started talking about the challenges artificial intelligence (AI) poses for academic integrity when we were introduced to ChatGPT, the most advanced such model to date, which launched in November 2022.

At this initial stage, the bulk of our discussions centres on the scenario in which students ask ChatGPT or other AI technologies to generate an essay from scratch. However, Harte and Khaleel (2023) present a compelling argument that this capability is not a new risk, but rather a technological advancement. They contend that, in the past, students could hire ghost-writers via essay mills, and now they can [simply] do so in a more convenient, accessible, and cost-free manner.

As an Academic Integrity Lead at a UK university, however, I came to see that most academic misconduct cases do not involve outright bought papers. Arguably, with this technological advancement, such clear-cut cases will multiply. For the time being, though, the majority consists of partially plagiarised essays. Indeed, our institution defines plagiarism as “presenting someone else’s work, in whole or in part, as your own” (University of Leeds, n.d.).

Currently, a plethora of text classifiers is being developed, capable of identifying, with a certain degree of confidence, whether a text has been generated by an AI. However, we face a significant academic integrity problem: there are no established policies to guide us on how to handle this issue. Most students know better than to copy and paste an answer provided by ChatGPT wholesale, yet many still use it. Hence my question: are we ready to deal with the “in part” use of ChatGPT? Its “in part” usages are countless, and they challenge the boundaries of academic integrity.

ChatGPT can be used “in part” to summarise the literature. It can quickly find and summarise information, provide literature reviews, and perform various related tasks within seconds. I asked the AI to summarise an article I wrote and published a few years ago. It did so, and it correctly identified my main argument. However, while it applied that argument to three case studies, these differed from the three case studies I actually discussed. The answer, well written and convincing, ticked all the boxes of a correct one, as I did make that argument on the basis of three case studies. Yet ChatGPT invented case studies that were not present in my article; one of them even employed a concept I never touched upon, though one still relevant to the discussion.

In essence, this highlights the fact that ChatGPT can generate a response that seems plausible but [might] ultimately [be] untrue. OpenAI’s own educator webpage acknowledges this, stating that “while tools like ChatGPT can often generate answers that sound reasonable, they cannot be relied upon to be accurate consistently or across every domain” (OpenAI, n.d.). This is where the misrepresentation of authors’ contributions will blur the boundaries between marking criteria (such as lack of research or poor understanding) and academic integrity infractions (such as fabricated references).

Some students might, as advised by the OpenAI guidelines (OpenAI, n.d.), choose to disclose their use of AI and reference ChatGPT in their assignments, thus creating another illustration of “in part” reliance on ChatGPT. Even if we are to accept ChatGPT as a reference (and we probably will in the near future), the difficulty of ascertaining the boundary between what the student read, understood, and wrote and what they did not still stands. Even when students want to credit the AI, what for and how remain two key questions. I thus asked ChatGPT how to reference its contribution:

“As an AI language model, I do not require a specific type of reference. However, you may reference this conversation as a personal communication with an AI language model or simply cite the information provided as general knowledge about the impact of AI on academic integrity and plagiarism. If you need to cite specific information provided in the response, you may include a direct quote and attribute it to ‘ChatGPT, an AI language model’ or similar phrasing” (ChatGPT 2023).

This is also expected to challenge what is accepted as common knowledge. If we take the definition that common knowledge “is widely accessible, is likely to be known by a lot of people, and can be found in a general reference resource, such as a dictionary or encyclopaedia” (University of Cambridge, n.d.), we can easily discover that not all information provided by ChatGPT, and put forward to be referenced as general knowledge, will meet these criteria. On the contrary, ChatGPT can provide highly specific and specialised information originating with particular authors that may not be widely known.

Last but not least, ChatGPT can be used “in part” to paraphrase. I wrote the content above and asked ChatGPT to improve my writing in certain paragraphs. All sections in italics have been improved, or should I say paraphrased, by it. So are they mine, or are they ChatGPT’s? I copied them back into ChatGPT and asked who wrote them. The AI said it did.

References

ChatGPT. 2023. “How does AI impact academic integrity and plagiarism?” May 3, 2023 version. Personal communication, May 7, 2023.

Harte, Patrick and Fawad Khaleel. 2023. “Keep calm and carry on: ChatGPT doesn’t change a thing for academic integrity.” Times Higher Education. Retrieved June 5, 2023. (https://www.timeshighereducation.com/campus/keep-calm-and-carry-chatgpt-doesnt-change-thing-academic-integrity).

OpenAI. n.d. “Educator Considerations for ChatGPT.” Retrieved June 5, 2023. (https://platform.openai.com/docs/chatgpt-education).

University of Cambridge. n.d. “Plagiarism and Academic Misconduct.” Retrieved June 5, 2023. (https://www.plagiarism.admin.cam.ac.uk/resources-and-support/referencing/when-cite).

University of Leeds. n.d. “Cheating, Plagiarism, Fraudulent or Fabricated Coursework and Malpractice in University Examinations and Assessments.” Retrieved October 5, 2023. (https://www.leeds.ac.uk/secretariat/documents/cpffm_procedure.pdf).


Cristina Juverdeanu is a Lecturer in Politics and International Studies in the School of Politics and International Studies at the University of Leeds. She is the Academic Integrity Lead and explores the ways in which artificial intelligence challenges academic integrity standards and practices.


Published since 2005, The Political Science Educator is the newsletter of the Political Science Education Section of the American Political Science Association. All issues of The Political Science Educator can be viewed on APSA Connect’s Civic Education page.

Editors: Colin Brown (Northeastern University), Matt Evans (Northwest Arkansas Community College)

Submissions: editor.PSE.newsletter@gmail.com


APSA Educate has republished The Political Science Educator since 2021. Any questions or corrections to how the newsletter appears on Educate should be addressed to educate@apsanet.org.

