University leaders and faculty are navigating the use of AI and ChatGPT. For now, professors decide if students may use it.
By Alicia Gomez
Sept. 29, 2023
Newsletter Course – UConn Journalism Department

The University of Connecticut has caught one student plagiarizing using artificial intelligence, and another student’s actions are under review, according to a UConn spokesperson.
But UConn’s policy can be as confusing as AI. Two students with the same major or professor may have different rules on whether they can use ChatGPT for their classes.
The rules around using ChatGPT for students often depend on the professor and the class itself.
Department of Accounting head George Plesko said in a recent interview that professors can choose how to approach AI use among their students, and those approaches may vary by course and instructor. In his courses, students can use AI tools such as ChatGPT, provided they properly cite and fact-check the tool’s output.
Plesko said it’s likely accounting students will use AI in the workplace, as many accounting firms use the tools in limited ways. Plesko wants his students to have this experience.

“Well, given the fact that they are going to be working with it or using it in the real world … there’s no reason not to allow them to, with some restrictions, in the classroom,” he said.
However, Plesko said, students also must ensure they’ve gotten accurate information. Professional accounting firms use specialized AI tools that draw on carefully curated databases, unlike ChatGPT, which pulls information from across the web.
“To the extent that ChatGPT goes out and tries to find information or tries to put together a coherent argument on something else, in many ways, it’s really no different than somebody trying to do other kinds of web searches, let’s say even going to Wikipedia and trying to find information,” Plesko said. “The difference, of course, is, as it says in the syllabus, layering on to that the responsibility for the student to know that whatever they’re citing or dealing with is accurate.”

ChatGPT is a generative AI system built on a “large language model.” In large language models, randomness is involved when generating results, according to Dongjin Song, an assistant professor in the computer science and engineering department who studies machine learning and data mining.
“This randomness probably sometimes can give you a correct answer. Sometimes it can give you some incorrect answers,” he said.
For now, ChatGPT is trained on common questions that it can answer reliably, but some questions fall outside its scope of knowledge. With limited data, those questions may produce inaccurate results that look authentic and reliable. It may take a careful eye to determine whether the information is true or false.
“Whether you can leverage ChatGPT’s output depends on whether you’re asking the proper questions,” Song said. “Also, you need to be an expert in that domain to judge whether the output is reliable. If you do not know anything, you’d probably think, ‘Oh, this is true.’ That can create a misconception, and that’s not good.”
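The randomness Song describes can be illustrated with a toy sketch (not ChatGPT’s actual code): a language model assigns a probability to each candidate next word, and the output is drawn at random according to those weights, so repeated runs can produce different answers.

```python
import random

def sample_next_word(probabilities, rng):
    """Pick one candidate word at random, weighted by the model's probabilities."""
    words = list(probabilities)
    weights = list(probabilities.values())
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical next-word distribution for illustration only.
probs = {"accurate": 0.6, "inaccurate": 0.3, "uncertain": 0.1}
rng = random.Random()  # unseeded: different runs can yield different words
samples = [sample_next_word(probs, rng) for _ in range(5)]
```

Because the draw is weighted rather than deterministic, even a lower-probability word is sometimes chosen, which is one reason the same question can get different, and occasionally wrong, answers.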
Tom Deans, the director of UConn’s Writing Center, is part of a group representing UConn in a 20-school consortium conducting a two-year research project organized by Ithaka S+R, a nonprofit organization. The project aims to “assess the immediate and emerging AI applications most likely to impact teaching, learning, and research and explore the long-term needs of institutions, instructors, and scholars as they navigate this environment,” according to a press release.
Deans has also researched generative large language models such as ChatGPT in higher education and how tutors may use them in the Writing Center.

Deans co-wrote an article on the best uses of AI with two students: Noah Praver, a UConn Writing Center tutor, and Alexander Solod, the president of UConn’s AI Club, which trains Writing Center tutors. Together they found that the best way to make use of ChatGPT in the Writing Center is to assign it a “role,” the article says.
“That’s a smart use of the tool. Otherwise, you are using the tool in a kind of a dumb way. Or not in as smart a way as you could. If you’re going to use it, know something about how these models work, do a little bit of prompt engineering even if just to say, ‘Play the role of an anthropology graduate student or professor,’” Deans told The Husky Report in an interview this week.
Although Deans encourages his tutors to use ChatGPT in “small strategic ways,” the majority of Writing Center sessions do not involve the use of ChatGPT.
“It sort of comes out if there’s a problem to solve that two human beings are struggling with or a problem of speed like they just need to do something more quickly because someone has a really short appointment. Or the two of them are stumped, and they’re kind of like, ‘How do we rephrase this sentence? We could fiddle with it for the next twenty minutes, or we can ask ChatGPT to come up with three responses, and then maybe those will spark us,’” Deans said.
Although professors may be worried about students using the tool inappropriately, Deans said, detection tools such as ZeroGPT may not be the right solution. He recommends UConn not pay for detection tools, which he says are often inaccurate.
“If a company is basically trying to profit off of the paranoia of faculty thinking students are cheating, it’s a game where I just think there are better ways to deal with it. … Student dishonesty is a real thing, and you’ve got to address it, but we already have academic dishonesty policies,” Deans said. “Every minute or dollar spent on enforcement is a minute or a dollar not spent on teaching or prevention.”
Song agreed such tools may be inaccurate. “So far, I haven’t seen a well-accepted tool that can be used to detect whether something is written by a machine.”
In January, the Office of the Provost at UConn provided guidance to faculty regarding ChatGPT’s impact on teaching and learning. University leaders and faculty are navigating the use of AI as a tool, working to ensure students have practical and responsible experiences with AI while upholding their commitment to academic integrity.
“Based on conversations to date, our faculty are simultaneously interested in learning how ChatGPT3 and similar chatbots might transform teaching, learning, and assessment in innovative ways and concerned about students’ use of ChatGPT3 to answer test and exam questions and generate content for written papers and assignments,” the message on UConn’s Center for Excellence in Teaching and Learning website says.
The solutions CETL provides to professors include experimenting with ChatGPT to discover its “capabilities and limitations,” having discussions with faculty and students about ChatGPT, amending syllabi to mention ChatGPT, and amending assignments to be incompatible with the use of ChatGPT.
Department approaches: Psychology
The psychology department has not made a uniform decision on student use of AI. However, Etan Markus, the department’s associate head of graduate studies, is wary of the technology.
“Everything is baby steps. No one is trusting it at this point. But everyone’s exploring it. We’re all very excited about the options,” Markus said, adding that he believes it will get better. “There’s still kinks and problems.”
Markus uses AI in his own research, including software that identifies the body parts of lab rats during tests. But he draws the line at classroom assignments. Following the increase in online learning, he moved the exams for his graduate classes back to in-person, written in testing booklets, to combat cheating.
Markus does not like reverting to older technology, but is relieved to have an option that levels the playing field between students. He also combats cheating by asking questions specific to in-class material.
Although there is no department-wide policy on ChatGPT, he requires that students using the tool for help with their homework be transparent.
Next semester there will be no need for transparency. This spring, graduate students will be able to take a class called “Using ChatGPT and AI as a tool in psychology.” The recently approved course will be experimental and geared toward student interests.
“I’m trying to persuade one of my undergraduates to give one of the lectures. It’s cool that you have an undergraduate that’s going to be able to teach the faculty and grad students stuff,” Markus said.
The course will offer students the opportunity to see how AI can assist with research and writing. ChatGPT can be particularly helpful in “diagnosis writing”: during clinicals, students can enter observed behaviors, and the tool drafts them into a personality profile. ChatGPT can also help students write code for analyzing research data. The trial-and-error class will also include discussions for students to share their experiences using the software.
Markus is looking forward to the AI class next semester, though he is still planning its structure, and hopes to create an undergraduate version as well.
Department approaches: Economics

Kathleen Segerson, board of trustees distinguished economics professor at the University of Connecticut. / Courtesy Kathleen Segerson
The economics department faces the same issue: it has no department-wide policy, instead allowing professors to create their own guidelines. However, all professors must follow the same university procedure if they suspect misconduct, according to Kathleen Segerson, the board of trustees distinguished professor of economics.
Segerson said ChatGPT can be used as a starting point for students. For example, if they want to ask an economics question, the tool can provide references, summaries and even sources. Students are strongly encouraged to read the original source to check its validity.
ChatGPT can also help students working with “production functions,” an economic tool showing the relationship between the physical inputs and outputs of goods. In this case, ChatGPT shows students what to enter into the equation, rather than providing an answer, according to Segerson.
“One sign of cheating is when something a student hands in differs from what I would have expected,” Segerson said. “This is not evidence of cheating, but it raises a question.”
ChatGPT’s rising popularity has complicated the department’s position on what is considered academic misconduct. The department continues to hold discussions and hopes to offer greater assistance to faculty on this matter.
Colleen Lucey contributed to this report.