This spring, as debates were raging on college campuses about the proper role of generative AI in higher education, Diablo Valley College adjunct professor Frako Loden created an assignment to see how students in her American Cinema class interacted with ChatGPT.
For their final opinion piece of the semester, they were to pick a discussion question about the 1950s movie “A Place in the Sun,” insert it into ChatGPT as a prompt, and then grade the response themselves. The AI got key details of the plot wrong in some cases, Loden said.
In the film, for example, protagonist George takes his girlfriend to a lake and she falls in and accidentally drowns, but ChatGPT says that he purposely killed her there. “That may be a subtle point, but it really does figure at the end when you evaluate his character,” said Loden. “ChatGPT kind of runs rough over that and suggests that he was planning it from the start and that he’s an evil dude.”
Loden’s assignment illustrates not only the limitations of ChatGPT (Loden said she found in her own research that many of the movie-plot details it gives are not only false but “ideologically loaded” and “maybe even racist”) but also how professors are increasingly experimenting with its use in the classroom. California’s public higher education systems have not yet created a formal policy regarding the use of generative AI, which can create images and text that are nearly indistinguishable from those made by humans. That leaves professors in the role of watchdog, preventing breaches of academic integrity. While some focus on cracking down on cheaters, a growing number have decided that the technology is here to stay, and are assigning work that seeks to convey to students the benefits of AI as a research tool while acknowledging its limitations and propensity for error.
“Faculty have to come to a decision, whether it’s in California or nationwide. And the decision is, do you want to adopt?” said Tony Kashani, a professor of education at Antioch University who is writing a book about the use of AI in the classroom. “On campus there’s a lot of contention about this.”
When it comes to AI, technology has moved more quickly than ethics and policy, said Kashani. He said bots like ChatGPT show great promise as a “writing consultant” for students. “It’s not often that students have a chance to sit down with a professor and have long discussions about how to go about this paper, that paper, how to approach research on this topic and that topic. But ChatGPT can do that for them, provided … they know how to use the right ethics, to use it as a tool and not a replacement for their work.”
That’s the approach taken by Stanford sociology professor David Grusky, whose syllabus for a recent public policy class allowed the use of AI-generated text in assignments under the stipulation that it be cited in the same way a conversation with a human would be.
“It’s a conversation that can be evoked at will. But it’s not different in the content,” said Grusky. “You still have to evaluate what someone says and whether or not it’s sensible.”
He believes that AI can help teach students to evaluate the quality of sources, serving academia well in the long term. “I believe our job typically in kind of the world of undergraduate instruction is to try to help people become more thoughtful, more rigorous, more analytic.”
Stanford, after a push from professors, created a policy forbidding the use of AI to aid in the completion of assignments unless otherwise allowed in a class syllabus. And some California college professors remain skeptical.
“I see it as more of a problem than a benefit,” said Santa Rosa Junior College history and political science instructor Johannes Van Gorp.
The advent of generative AI has increased the workload of instructors who seek to stop cheating, he said.
Van Gorp has adopted a policy forbidding the use of artificial intelligence in his classes, running nearly every assignment that gets turned in through three different AI checkers to build confidence in the results he gets.
“At first I was reporting (AI use) through the system, but it was so ubiquitous that I just started, as bad as it sounds, giving zeros on the assignments with a note: ‘This is AI generated.’”
Still, Van Gorp said he has to acknowledge that “the world is shifting.”
“Things like (the grammar-checking tool) Grammarly or whatnot, those are AI programs as well. And so where do you draw the line? And I’m not quite sure I’ve figured that one out. And certainly the institutions haven’t.”
California State University’s Academic Senate, which represents faculty, passed a resolution in March calling for a working group on artificial intelligence in higher education, to be formed by the end of August. The working group would examine AI’s limitations, opportunities for professional development of faculty, and how to ensure academic integrity, coordinating the university’s response across campuses.
To make their point, faculty used ChatGPT to draft part of the resolution itself. “What level of academic dishonesty would this constitute on a CSU campus?” the writers asked, adding, “This resolution calls upon the CSU to consider how best to leverage this technology, understanding that AI will inevitably change the nature of education independent of any action the system takes.”
Generative AI is out there and will be here in the future, said Academic Senate Chair Beth Steffel in an interview. “If we ignore it or try to ban it, it is probably to everyone’s detriment.”
Faculty at the California Community Colleges have also pledged to develop a framework that colleges can use to create policies on AI by spring 2024. The University of California has had an AI working group since 2020, which has in the past recommended the technology’s use in counseling, student retention, admissions and even test proctoring, as well as calling for individual UC campuses to set up councils to oversee their use of AI.
A March survey by the college-ranking website BestColleges found that 43% of college students say they have experience using AI tools such as ChatGPT, with 22% saying they’ve used them to complete exams or assignments.
“I imagine that number is going to grow,” said Camille Crittenden, executive director at UC Berkeley’s Center for Information Technology Research in the Interest of Society and a member of the UC working group. “So the teachers might as well be involved in helping them to use it responsibly, figuring out how to actually double check citations and make sure that they’re real.”
As universities grapple with setting policy, professors are flocking to social media to vent and ask questions. Many of the conversations show a split between professors who want to integrate the use of AI and those who fear allowing it into the classroom.
“I just caught a student using ChatGPT to answer questions on online quizzes,” one professor posted to Pandemic Pedagogy, a Facebook group made to assist faculty in navigating online teaching. “On my syllabus, I say that students’ work must be their own and plagiarism will result in a failing grade, but I don’t mention using these kinds of platforms … What should I do?”
(The Facebook group is invitation-only, but some posters gave CalMatters permission to cite their comments.)
Some wrote about the seeming futility of trying to catch cheaters, given the unreliability of software designed to flag AI-generated content.
“We should avoid assignments that try to ‘harness’ ChatGPT or other AIs,” another commenter argued, adding that the services might not remain free of charge and could start returning answers that are shaped to benefit advertisers.
Elizabeth Blakey, an associate professor of journalism at Cal State Northridge, allowed master’s students in her mass communications class to use ChatGPT to help draft research proposals. “It’ll give you information, it’ll give you names, maybe some ideas or vocabulary words that you didn’t think of,” she said in an interview. “And then you can take it from there and use your own creativity and your own further research to build on that.”
She believes it helped reduce her students鈥 anxiety about the tool and taught them a new skill they can take into the workforce.
Beatrice Barros, one of Blakey’s students, said ChatGPT came in handy when she changed her project topic halfway through the semester but was nervous about not having enough time to complete it. Using the AI, she said, “helped me with the head start, like a motivation.”
But she learned how to navigate what the AI gave her with skepticism. “Sometimes it was very, very wrong,” she said. “It made me more aware that ChatGPT can sometimes trick you, maybe get you in trouble if you don’t read content.”
Her overall takeaway? “Sometimes it’s better to do your homework.”
Blakey’s colleague David Blumenkrantz gave students in his visual communications class a choice about whether to use AI to design a magazine. They could write their magazine’s proposal and premise themselves, or have ChatGPT write it for them. AI-generated images could grace the magazine’s cover, with students adding in the typeface and titles over it. The only stipulation: that students explain which parts were AI-generated and why.
About a third of the class chose to use AI for the assignment, he said.
Blumenkrantz said he is currently partnering with a university in Nairobi, Kenya, to build up its photojournalism program and that his 63-page curriculum was mostly compiled from AI-generated content. He gave ChatGPT prompts, edited the responses to go deeper into each topic, and fact-checked them, he said. He spent weeks making the curriculum, he said, when it would have taken months without the AI-generated research, a result he called “astonishing.”
Jenae Cohn, the executive director of the UC Berkeley Center for Teaching & Learning, which helps professors design effective instruction, said she and her staff often hear from faculty like Blumenkrantz, who “want to understand better how to use AI in creative ways in their teaching.”
“On the other end of the spectrum, we have a lot of questions about how students are using AI to cheat. There’s a lot of concerns about academic integrity.”
As for her own take, she said, “I don’t think that AI is going to necessarily destroy education. I don’t think it’s going to revolutionize education, either. I think it’s just going to sort of expand the toolbox of what’s possible in our classrooms.”
Walker is a fellow with a collaboration between CalMatters and student journalists from across California. This story and other higher education coverage are supported by the College Futures Foundation.