Skip to main content

Trio of Turner College Researchers Examine Ethics of Using Hidden AI Prompts to Detect AI-Assisted Cheating

New research by Turner College accountant Charles Boster and Turner College management professors Mark James and Laurence Marsh investigates the ethics of using hidden prompts to detect AI generated writing in student submissions in asynchronous online university classes. The study, which appears in the latest issue of the Journal of Higher Education Theory and Practice, points out that in higher education, writing assignments are currently viewed as a high impact teaching practice that assesses students’ understanding of course materials, develop critical reasoning skills, foster communication skills, and create deeper engagement with assigned materials. The authors add that difficulties in maintaining the integrity of written assessments associated with students' use of AI has led some instructors to start using the controversial method of embedding hidden AI prompts, which are covert commands or phrases placed in the instructions of writing assignments, in writing assignment questions as a way of detecting AI generated content. However, some argue that the focus on AI detection conflicts with the purpose of education in general.
The authors explain that consequentialism is a way of evaluating the morality of actions based on weighing the costs versus the benefits of the outcomes. Using a consequentialist perspective, embedding hidden AI prompts in writing assignments is acceptable if the outcomes are a net benefit. Those benefits might include detecting and deterring AI generated plagiarism, maintaining academic integrity, and helping maintain assessment fairness. However, and as the authors point out, this approach to the ethics of the detection of AI assistance assumes the costs and benefits are clearly and easily measurable and that the benefits clearly outweigh the costs. If students detect hidden AI prompts, they may feel deceived, anxiety, and mistrustful towards instructors. In other words, hidden AI prompts may foster a culture of suspicion, where surveillance displaces dialogue. This is particularly problematic in learning environments that strive for openness, experimentation, and mutual respect. In this scenario, the emotional and relational costs may be greater than the benefits.
According to deontological ethics, actions should not be judged by their outcomes but whether or not the actions are inherently right or wrong based on universal moral principles such as respect for human dignity. A key idea in deontological ethics is, as the Turner College researchers explain, that we should not treat other human beings as tools or methods for our own purposes or end goals. Every human being has an inherent internal value, and our actions must be respectful of that human value. When using deontological ethics to examine the question of hidden AI prompts in academic settings, the ethical issue is not whether the action leads to a positive outcome, but whether the action itself is consistent with a universal ethical principle. As the authors conclude, even if hidden AI prompts are effective in detecting academic dishonesty, using them is ethically suspect as they potentially undermine transparency and treat students as surveillance objects. In this view, the act of hiding information from students, even with positive outcomes and good intentions such as preserving academic integrity, can be morally wrong if it disrespects the students’ autonomy and dignity.
A third ethics framework considered by Boster, James and Marsh is the evaluation of actions according to the principles of justice and equity. As the authors explain, "In the context of online teaching this means evaluating if using hidden AI prompts in course writing assessments creates different benefits and burdens between students. For example, students may have differing language abilities and using hidden AI prompts may differentially impact students based on linguistic ability. Non-native English speakers (i.e., international students) may rely on stock textbook phrases, formal academic examples, or grammar editing software (i.e., Grammarly) to edit their writing submissions. Those actions may produce writing that is similar to AI created writing, resulting in a student’s writing submission being flagged as AI generated, creating a situation where they are unfairly suspected of submitting AI generated content. Equity of resources is also a concern. Students with high digital literacy, access to advanced technology tools, and or are institutionally adept may be more likely to avoid detection or to create a strong counter narrative if accused of AI use." Here, the researchers conclude that justice and equity demand that the implementation of hidden AI prompts ensures detection mechanisms are both accurate and do not disproportionately impact different students. Therefore, from a justice and equity perspective, hidden AI prompts should not be used as they may reinforce existing inequities rather than promoting justice and equity in academic assessments.
Boster, James and Marsh close their study by considering alternatives to the use of hidden AI prompts. One of these is the requirement that students include an AI usage statement with each submitted writing assignment, stating if and or how they used AI programs in their writing. This approach is a framing strategy signaling to students that use of AI must be acknowledged and properly credited for their writing inputs. This approach relies on students honestly reporting their AI usage. Of course, this may be a naive approach as students who intend to use AI to write assignments may be unlikely to admit to or accurately describe their AI program usage. An alternative is the requirement that students whose writing projects are flagged for high AI content contact their instructor and explain their writing or respond to follow-up questions. This approach forces students to demonstrate understanding and ownership of their writing, albeit in a time and labor-intensive fashion. Thirdly, instructors could require students to use technology that creates version-controlled histories of drafts of and revisions to a submitted writing assignment in order to identify AI generated content by comparing different versions of a writing assignment over time. However, this assessment approach greatly increases an instructor’s workload and there is potential resistance from students about using such programs. Lastly, another option is changing the writing assignments, such as asking students to use personal examples to demonstrate their understanding of course concepts or materials or using group projects that rely on student group feedback and interaction logs maintained by group members, so that employing an AI program is less feasible.

Comments

Popular posts from this blog

Seven Turner College Management and Marketing Faculty Have Combined to Produce Eight A-Level Journal Publications Between 2021 and the Present

A number of faculty in the Turner College's Department of Management and Marketing, which includes faculty in management information systems, have produced A-level journal publications in the last few years. This report covers that activity, starting with John Finley , the chairperson of the department. Professor Finley published a paper in the Journal of Computer Information Systems in 2022.      Finley is joined by Kirk Heriot , the Crowley Distinguished Chair in Entrepreneurship. Heriot, who earned a PhD in management from Clemson University, published in a 2021 issue of Small Business Economics . One of the study's co-authors, Andres Jauregui of Fresno State University, was previously a member of the Turner College's economics faculty.      Next is Johnny Ho , a professor of management, who has a 2022 publication in the Journal of Computer Information Systems . Ho has won CSU's Excellence in Research Award on multiple occasions, while he has compiled 2...

TSYS School, Jianhua Yang, Lixin Wang Each among Top Five in the World

New research by computer scientists in the School of Information Technology at Universiti Utara Malaysia that ranks institutions and individuals on the basis of scholarship in the area of stepping-stone attacks heaps praise on the Turner College’s TSYS School of Computer Science and two of its faculty – Jianhua Yang and Lixin Wang .   The article, published in the April 2023 issue of the International Journal of Research in Engineering and Science , provides a bibliometric analysis of both publication and citation data from 2000 to September of 2022 related to research on stepping-stone intrusion.   Among several results, it reports that Columbus State University ranks second worldwide, trailing only the University of Houston, using total publications on the subject as the basis of comparison.   A number of other U.S. institutions appear in the top 10, including third-ranked North Carolina State University, fourth-ranked University of Illinois, sixth-ranked Iowa State U...

New Butler Center Report Identifies Employment Gaps in the Columbus Area

Officials in the Turner College's Butler Center for Research and Economic Development recently put the finishing touches on an extensive report on trends in educational programs and occupations in the Columbus area. The report also includes data on business and technology trends.  According to Fady Mansour , Director of the Butler Center, there are several key takeaways from the report regarding 10 occupational gaps that currently exist in the Columbus area. First,  software development occupation exhibits the biggest labor shortage, with the report adding that the TSYS School has a bachelor's degree program in information technology along with a new AI track for the bachelor's degree in computer science, both of which can qualify students for this occupation. Other educational programs are in demand, such as computer programming and cloud computing. Second, there is a gap of 30 employees per year in general and operations management. This gap could be addressed by the Turn...