
Professors have long battled the usual suspects of academic dishonesty – Wikipedia, fraternity test banks, and outright copy-and-paste plagiarism. But chatbots present a whole new challenge when it comes to discerning what is artificial intelligence (AI) and what is actually derived from a student’s critical thinking skills and intellectual acumen.
Colleges and universities across the country are developing a wide range of policies governing the use of AI by students to assist in coursework. Schools are primarily concerned that if students continue to outsource their critical thinking skills, they will be unprepared to handle tasks in their careers.
Educators say the goal of a degree-certified student should be to handle complex tasks, challenge assumptions, and be the source of original thought – abilities that are strengthened through traditional problem-solving exercises.
Institutions of higher education handle the use of AI in varying ways. Even within a college or university, different programs have different guidelines and policies, and instructors have widely been given authority to determine the appropriate use of chatbots and other AI technology in their courses. At Harvard, one of the country’s most esteemed universities, the guidelines spell that out: “We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student’s responsibility to conform to expectations for each course.”
Harvard University Provost Alan M. Garber recently wrote a letter to the school’s faculty advising them to “Adhere to current policies on academic integrity: Review your School’s student and faculty handbooks and policies. We expect that Schools will be developing and updating their policies as we better understand the implications of using generative AI tools.”
Harvard encourages instructors to include a policy regarding AI in their course syllabi, and offers sample language for instructors to use in developing their own. There are three draft policies, ranging from a “maximally restrictive policy” that prohibits the use of AI outright, to a “fully-encouraging draft policy” that encourages students to explore the use of generative AI (GenAI) tools for all assignments, to a “mixed draft policy” that allows the use of AI for certain assignments.
Researchers at the University of Arizona conducted a study of what level of usage is permitted at the top 100 U.S. universities, as ranked by U.S. News & World Report in its 2024 Best National University Rankings. The information on policies and guidelines was collected through April 2024. The study found no outright bans of generative AI among those institutions. Most institutions (54.8%) left the decision to the instructor, while 35.6% were undecided or unclear and 9.6% allowed use with conditions.
“Among the ‘instructor decides’ universities, 47.4% adopt a stance of Prohibition by Default, only allowing the use of such tools when an instructor explicitly permits it. If the instructor has not presented any policy statements on the use of GenAI, using ChatGPT in homework and essays is generally not allowed and may be under the circumstance of plagiarism. If the instructor allows it, students must cite appropriately and take responsibility for their responses. This option reveals the universities’ more cautious perceptions and evident concerns about the impact of GenAI on academic integrity.”
Oregon State University’s Code of Student Conduct includes a list of what constitutes “academic misconduct,” and although it does not mention chatbots explicitly, some uses of chatbots could be considered prohibited behavior under those guidelines. The school uses visual aids in the form of comics in an attempt to eliminate any ambiguity. Under the heading of “Plagiarism,” the use of chatbots could be deemed a violation.
OSU’s College of Engineering does specifically refer to AI with an “AI Chatbot Policy Example.”
“You ARE allowed to use ChatGPT, Google Bard, Bing AI, or similar AI chatbots as you would a library resource. For example, you can use ChatGPT to find solutions for errors the same way you would use Stack Overflow or other Internet resources, or to understand and improve software you are developing.” The policy example states that AI chatbots are allowed for small snippets of code and to verify algorithms.
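The “small snippets” use the policy describes might look like the following – a hypothetical Python sketch (not from OSU’s materials) of a common bug a student could bring to a chatbot the same way they would search Stack Overflow, alongside the fix such a tool would typically suggest:

```python
# Hypothetical example: the "mutable default argument" pitfall, a classic
# error a student might ask a chatbot to explain.

def add_item_buggy(item, items=[]):
    # Bug: the default list is created once and shared across all calls.
    items.append(item)
    return items

def add_item_fixed(item, items=None):
    # Fix a chatbot would typically suggest: create a fresh list per call.
    if items is None:
        items = []
    items.append(item)
    return items

print(add_item_buggy("a"))   # ['a']
print(add_item_buggy("b"))   # ['a', 'b']  <- surprising shared state
print(add_item_fixed("a"))   # ['a']
print(add_item_fixed("b"))   # ['b']
```

Under the policy example, using a chatbot to understand an error like this would be treated as legitimate library-style research; submitting chatbot-generated solutions wholesale would not.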
“If you want to start to use AI chatbots for the described allowed purposes so you understand the capabilities and limitations of these tools, that’s good preparation for being in industry with a broad toolkit at your disposal, and behaviors that mimic industry best practices both technically and ethically. If you want to use AI chatbots to do your work for you so you can skate by at OSU with minimal thought and effort, you will limit your career opportunities to those that do not require the level of diligence, thoughtfulness, professionalism, integrity, and ethics that are the hallmarks of high-performing software engineers.”