UConn’s Artificial Intelligence policy evolves as technology advances

University leaders and faculty are navigating the use of AI and ChatGPT. For now, professors decide if students may use it.

By Alicia Gomez
Sept. 29, 2023
Newsletter Course – UConn Journalism Department

A student logs in to ChatGPT. / Photo by Colleen Lucey


The University of Connecticut has caught one student plagiarizing using artificial intelligence, and another student’s actions are under review, according to a UConn spokesperson.

But UConn’s policy can be as confusing as AI itself. Two students with the same major, or even the same professor, may face different rules on whether they can use ChatGPT in their classes.

The rules often depend on the individual professor and the class itself.

Department of Accounting head George Plesko said in a recent interview that professors can choose how to approach AI use among their students, an approach that may vary by course and instructor. In his courses, students can use AI tools such as ChatGPT, provided they properly cite and fact-check the tool’s output.

Plesko said it’s likely accounting students will use AI in the workplace, as many accounting firms use the tools in limited ways. He wants his students to have this experience.


George Plesko, head of the Department of Accounting at UConn. / Photo courtesy of George Plesko


“Well, given the fact that they are going to be working with it or using it in the real world … there’s no reason not to allow them to, with some restrictions, in the classroom,” he said.

However, Plesko said, students also must ensure they’ve gotten accurate information. Professional accounting firms use specialized AI tools that draw on carefully curated databases, unlike ChatGPT, which draws on data scraped from across the web.

“To the extent that ChatGPT goes out and tries to find information or tries to put together a coherent argument on something else, in many ways, it’s really no different than somebody trying to do other kinds of web searches, let’s say even going to Wikipedia and trying to find information,” Plesko said. “The difference, of course, is, as it says in the syllabus, layering on to that the responsibility for the student to know that whatever they’re citing or dealing with is accurate.”


Dongjin Song, assistant professor in the computer science and engineering department, working on machine learning and data mining. / Photograph by Alicia Gomez


ChatGPT is a generative AI model and a type of “large language model.” Large language models involve randomness when generating results, according to Dongjin Song, an assistant professor in the computer science and engineering department who works on machine learning and data mining.

“This randomness probably sometimes can give you a correct answer. Sometimes it can give you some incorrect answers,” he said.

For now, ChatGPT is trained on common questions that it can answer reliably, but some questions fall outside its scope of knowledge. Because of limited data, those questions may produce inaccurate results that look authentic and reliable. It may take a careful eye to determine whether the information is true or false.
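What that randomness looks like under the hood can be sketched in a few lines of Python. This is a toy illustration of the sampling idea Song describes, not how ChatGPT is actually implemented; the words and scores below are invented.

```python
import numpy as np

rng = np.random.default_rng()

# Toy illustration: a language model scores candidate next words, turns
# the scores into probabilities (softmax), then SAMPLES one -- which is
# why the same prompt can produce different answers on different runs.
words = ["reliable", "plausible", "wrong", "unclear"]
logits = np.array([2.0, 1.5, 0.8, 0.3])  # invented model scores

def sample_next(logits, temperature=1.0):
    """Softmax with temperature, then draw one word at random."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Higher temperature flattens the distribution, so lower-probability
# (and possibly incorrect) words surface more often.
for temp in (0.2, 1.0, 2.0):
    picks = [words[sample_next(logits, temp)] for _ in range(8)]
    print(f"temperature={temp}:", picks)
```

At a low temperature the model almost always picks its top-scoring word; at higher temperatures, less likely words come up more often, which is one reason the same question can yield both correct and incorrect answers.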

“Whether you can leverage ChatGPT’s output depends on whether you’re asking the proper questions,” Song said. “Also, you need to be an expert in that domain to judge whether the output is reliable. If you do not know anything, you’d probably think, ‘Oh, this is true.’ That can create a misconception, and that’s not good.”

Tom Deans, the director of UConn’s Writing Center, is part of the group representing UConn in a 20-school consortium conducting a two-year research project put together by Ithaka S+R, a nonprofit organization. The project aims to “assess the immediate and emerging AI applications most likely to impact teaching, learning, and research and explore the long-term needs of institutions, instructors, and scholars as they navigate this environment,” according to a press release.

Deans has also researched generative language models such as ChatGPT in higher education and how tutors might use them in the Writing Center.


Tom Deans, director of UConn’s Writing Center. / Courtesy of Tom Deans


To train Writing Center tutors on the best uses of AI, Deans uses an article he wrote with two students: Noah Praver, a UConn Writing Center tutor, and Alexander Solod, the president of UConn’s AI Club. Together they found that the best way to use ChatGPT in the Writing Center is to assign it a “role,” the article says.

“That’s a smart use of the tool. Otherwise, you are using the tool in a kind of a dumb way. Or not in as smart a way as you could. If you’re going to use it, know something about how these models work, do a little bit of prompt engineering even if just to say, ‘Play the role of an anthropology graduate student or professor,’” Deans told The Husky Report in an interview this week.
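For readers who want to try the role technique Deans describes, here is a minimal sketch using the OpenAI Python library. The model name and prompt wording are illustrative assumptions, not details from the article or from Writing Center practice; in the chat interface, simply opening with the role instruction achieves the same effect.

```python
from openai import OpenAI  # assumes the OpenAI Python library, v1.x

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "role" Deans describes maps onto the API's system message, which
# frames how the model responds before it sees the user's request.
# Model choice and wording here are hypothetical examples.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Play the role of an anthropology professor "
                    "giving feedback on a student draft."},
        {"role": "user",
         "content": "Suggest three ways to rephrase this sentence: ..."},
    ],
)
print(response.choices[0].message.content)
```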

Although Deans encourages his tutors to use ChatGPT in “small strategic ways,” the majority of Writing Center sessions do not involve the use of ChatGPT.

“It sort of comes out if there’s a problem to solve that two human beings are struggling with or a problem of speed like they just need to do something more quickly because someone has a really short appointment. Or the two of them are stumped, and they’re kind of like, ‘How do we rephrase this sentence? We could fiddle with it for the next twenty minutes, or we can ask ChatGPT to come up with three responses, and then maybe those will spark us,’” Deans said.

Although professors may be worried about students using the tool inappropriately, Deans said, detection tools such as ZeroGPT may not be the right solution. He recommends UConn not pay for detection tools, which he says are often inaccurate.

“If a company is basically trying to profit off of the paranoia of faculty thinking students are cheating, it’s a game where I just think there are better ways to deal with it. … Student dishonesty is a real thing, and you’ve got to address it, but we already have academic dishonesty policies,” Deans said. “Every minute or dollar spent on enforcement is a minute or a dollar not spent on teaching or prevention.”

Song agreed that detection tools may be inaccurate. “So far, I haven’t seen a well-accepted tool that can be used to detect whether something is written by a machine.”

In January, the Office of the Provost at UConn provided guidance to faculty regarding ChatGPT’s impact on teaching and learning. University leaders and faculty are navigating the use of AI as a tool, working to ensure students have practical and responsible experiences with AI while upholding their commitment to academic integrity.

“Based on conversations to date, our faculty are simultaneously interested in learning how ChatGPT3 and similar chatbots might transform teaching, learning, and assessment in innovative ways and concerned about students’ use of ChatGPT3 to answer test and exam questions and generate content for written papers and assignments,” the message on UConn’s Center for Excellence in Teaching and Learning website says.

The solutions CETL provides to professors include experimenting with ChatGPT to discover its “capabilities and limitations,” having discussions with faculty and students about ChatGPT, amending syllabi to mention ChatGPT, and amending assignments to be incompatible with the use of ChatGPT.

Department approaches: Psychology

The psychology department has not made a uniform decision on student use of AI. However, Etan Markus, the department’s associate head of graduate studies, feels wary about using it.

“Everything is baby steps. No one is trusting it at this point. But everyone’s exploring it. We’re all very excited about the options,” Markus said, adding that he believes it will get better. “There’s still kinks and problems.”

Markus uses AI in his own research, including software that identifies the body parts of lab rats during tests. But he draws the line at classroom assignments. Following the rise of online learning, he altered the exams for his graduate classes to be taken in person and written in testing booklets to combat cheating.

Markus does not like reverting to older technology but is relieved to have an option that levels the playing field between students. He also combats cheating by asking questions specific to in-class material.

Although there is no department-wide policy on ChatGPT, he requires that students who use the tool for help with their homework be transparent about it.

Next semester, there will be no need for such transparency. This spring, graduate students will be able to take a class called “Using ChatGPT and AI as a tool in psychology.” The recently approved course will be experimental and geared toward student interests.

“I’m trying to persuade one of my undergraduates to give one of the lectures. It’s cool that you have an undergraduate that’s going to be able to teach the faculty and grad students stuff,” Markus said.

The course will offer students the opportunity to see how AI can assist with research and writing. ChatGPT can be particularly helpful in “diagnosis writing”: during clinicals, students can enter observed behaviors, which the tool turns into a personality profile. ChatGPT can also help students write code to analyze research data. The trial-and-error class will also include discussions where students share their experiences using the software.
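As a concrete, hypothetical illustration of that last use, below is the kind of short analysis script a student might ask ChatGPT to draft. The file name and column names are invented for this sketch.

```python
import pandas as pd

# Hypothetical example: summarize behavioral trial data by condition.
# "maze_trials.csv" and its columns are invented for illustration.
df = pd.read_csv("maze_trials.csv")  # columns: rat_id, condition, time_sec

summary = df.groupby("condition")["time_sec"].agg(["mean", "std", "count"])
print(summary)
```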

Markus is looking forward to the AI class next semester, whose structure he is still planning, and hopes to create an undergraduate version as well.

Department approaches: Economics


Kathleen Segerson, Board of Trustees Distinguished Professor of economics at the University of Connecticut. / Courtesy Kathleen Segerson


The economics department faces the same issue: it has no department-wide policy, instead allowing professors to create their own guidelines. However, all professors must follow the same university procedure if they suspect misconduct, according to Kathleen Segerson, a Board of Trustees Distinguished Professor of economics.

Segerson said ChatGPT can be used as a starting point for students. For example, if they want to ask an economics question, the tool can provide references, summaries and even sources. Students are strongly encouraged to read the original source to check its validity.

ChatGPT can also help students working with “production functions,” an economic tool showing the relationship between the physical inputs and outputs of goods. In this case, ChatGPT shows students what to enter into the equation, rather than providing an answer, according to Segerson.
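For context, a standard textbook example is the Cobb-Douglas production function, Q = A·K^α·L^β, which relates capital K and labor L to output Q. The sketch below plugs in invented numbers; it is a generic illustration, not an example drawn from any UConn course.

```python
# Cobb-Douglas production function: Q = A * K^alpha * L^beta
# All values below are invented for illustration.
A, alpha, beta = 1.0, 0.3, 0.7  # productivity and output elasticities
K, L = 100.0, 50.0              # capital and labor inputs

Q = A * (K ** alpha) * (L ** beta)
print(f"Output Q = {Q:.1f}")    # about 61.6 units of output
```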

“One sign of cheating is when something a student hands in differs from what I would have expected,” Segerson said. “This is not evidence of cheating, but it raises a question.”

ChatGPT’s rising popularity has complicated the department’s position on what is considered academic misconduct. The department continues to have discussions and hopes to offer greater assistance to faculty on this matter.


Colleen Lucey contributed to this report.
