College students ‘cautiously curious’ about AI, despite mixed messages from schools, employers
For 21-year-old Rebeca Damico, ChatGPT’s public release in 2022 during her sophomore year of college at the University of Utah felt like navigating a minefield.
The public relations student, now readying to graduate in the spring, said her professors immediately added policies to their syllabuses banning use of the chatbot, calling use of the generative artificial intelligence tool a form of plagiarism.
“For me, as someone who follows the rules, I was very scared,” Damico said. “I was like, oh, I can’t, you know, even think about using it, because they’ll know.”
Damico, who is based in Salt Lake City, studied journalism before switching her major to public relations, and saw ChatGPT and tools like it as a real threat to the writing industry. She also felt very aware of the “temptation” she and her classmates now faced: suddenly, a term paper that might have taken all night to write could be done in a few minutes with the help of AI.
“I know people that started using it and would use it to … write their entire essays. I know people that got caught. I know people that didn’t,” Damico said. “Especially in these last couple weeks of the semester, it’s so easy to be like, ‘Oh, put it into ChatGPT,’ but then we’re like, if we do it once, it’s kind of like, this slippery slope.”
But students say they’re getting mixed messages: stern warnings from professors against using AI on one side, and growing pressure from the job market to learn how to master it on the other.
The technological developments of generative AI over the last few years have cracked open a new industry, and a wealth of job opportunities. In California, Governor Gavin Newsom recently announced the first statewide partnership with a tech firm to bring AI curriculum, resources and opportunities to the state’s public colleges.
And even students who aren’t going into IT roles will likely be asked to use AI in some way in their industries. Recent research from Microsoft and LinkedIn’s 2024 Work Trend Index Annual Report found that 75 percent of knowledge workers are using AI at work, and that some hiring managers weigh AI skills as heavily as real-world job experience.
Higher ed’s view of AI
Over the last few years, the University of Utah, like most academic institutions, has had to take a position on AI. As Damico experienced, the university added AI guidelines to its student handbook that take a fairly hard stance against the tools.
It urges professors to use additional AI detection tools on top of education platform Canvas’ Turnitin feature, which scans assignments for plagiarism. The guidelines also now define the use of AI tools without citation, documentation or authorization as a form of cheating.
Though Damico said some professors continue to hold a hard line against AI, others have started to embrace it. The case-by-case approach Damico describes is in line with how many academic institutions are handling the technology.
Some universities spell out college-wide rules, while others leave it up to individual professors to set AI standards in their classrooms. Others, like Stanford University, acknowledge in their policies that students are likely to interact with the technology.
Stanford bans AI from being used to “substantially complete an assignment or exam,” and says students must disclose its use, but says “absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person.”
Virginia Byrne is an associate professor of higher education and student affairs at Morgan State University in Baltimore, and she studies technology in the lives of learners and educators, with a focus on how it impacts college students. She said the university allows professors to figure out what works best for them when it comes to AI. She herself often assigns projects that prompt students to investigate the strengths and weaknesses of popular AI tools.
She’s also a researcher with the TRAILS Institute, a multi-institution organization aiming to understand what trust in AI looks like and how to create ethical, sustainable AI solutions. Along with Morgan State, researchers from the University of Maryland, George Washington University and Cornell University conduct a variety of research, such as how ChatGPT can be used in health decision-making, how to create watermark technology for AI, or how other countries are shaping AI policy.
“It’s cool to be in a space with people doing research that’s related, but so different,” Byrne said. “Because it expands your thinking, and it allows us to bring graduate students and undergraduate students into this community where everyone is focused on trustworthiness and AI, but from so many different lenses.”
Byrne hopes that her students can see the potential AI has to make their lives and work easier, but she worries that it creates an “artificial expectation” for how young people need to perform online.
“It might lead some folks, younger folks, who are just starting their careers, to feel like they need to use (graphic design tool) Canva to look totally perfect on LinkedIn, and use all these tools to … optimize their time and their calendars,” Byrne said. “And I just worry that it’s creating a false expectation of speed and efficiency that the tools currently can’t accomplish.”
Theresa Fesinstine is the founder of peoplepower.ai, which trains HR professionals on ways AI can be used efficiently within their organizations. This semester, she taught her first college course at the City University of New York on AI and business, with students of all years and backgrounds.
Fesinstine said she was surprised by how many of her students knew little to nothing about AI, but had heard that many other instructors warned they would fail students found to have used it in assignments. She thinks this mixed messaging often comes from a lack of understanding of the technology and its ability to help with tasks like building an outline or finding research sources.
“It’s a little scary, and I think that’s where, right now, most of the trepidation is centered around,” she said. “It’s that most people, in my opinion, haven’t been trained or understand how to use AI most effectively, meaning they use it in the same way that you would use Google.”
Real-world applications
Shriya Boppana, a 25-year-old MBA student at Duke University, not only uses AI in her day-to-day life for schoolwork, but she’s also pursuing a career in generative AI development and acquisitions. She wasn’t initially interested in AI, she said, but she worked on a project with Google and realized how the technology was set to influence everyday life, and how malleable it still is.
“Once you kind of realize how much that the tech actually isn’t as fleshed out as you think it is, I was a little more interested in … trying to understand what the path is to get it where it needs to go,” Boppana said.
She said she uses some form of AI tool every day, from planning her own schedule, to having a chatbot help decide how students in a group project should divide and complete work, based on their availability. Because she works with it regularly, she understands the strengths and limitations of AI, saying it helps her get mundane tasks done, process data or outline an assignment.
But she said the personalized tone she aims for in her writing just isn’t there yet with publicly available AI tools, so she doesn’t completely rely on them for papers or correspondence.
Parris Haynes, a 22-year-old junior studying philosophy at Morgan State, said the structure and heavy demands of some students’ coursework almost “encourages or incentivizes” them to use AI to help get it all done.
He sees himself going into either law or academia, and said he’s a little nervous about how AI is changing those industries. Though he leans on AI to help organize thoughts or assignments for classes like chemistry, Haynes said he wouldn’t go near it for the work in his philosophy classes or his career-related goals in the field.
“I don’t really see much of a space for AI to relieve me of the burden of any academic assignments or potential career tasks in regards to philosophy,” Haynes said. “Even if it could write a convincing human-seeming paper, a philosophical paper, it’s robbing me of the joy of doing it.”
Gen Z’s outlook on their future with AI
Like Haynes, Fesinstine knows that some of her students are interested, but a little scared about the power AI may have over their futures. Although there’s a lot of research about how older generations’ jobs are impacted by AI, those just about to break into the workforce may be the most affected, because they’ve grown up with these technologies.
“I would say the attitude is — I use this term a lot, ‘cautiously curious,’” Fesinstine said. “You know, there’s definitely a vibe around ethics and protection that I don’t know that I would see in other generations, perhaps … But there’s also an acknowledgement that this is something that a lot of companies are going to need and are going to want to use.”
Now, two years since ChatGPT’s release, Damico has started to realize the ways generative AI is useful in the workplace. She began working with PR firm Kronus Communications earlier this year, and was encouraged to explore some time-saving or brainstorming functions of generative AI.
She’s become a fan of having ChatGPT explain new business concepts to her or suggest Instagram captions. She also likes to use it for more refined answers than Google might provide, such as when she’s searching for publications to pitch a client to.
Though she’s still cautious, and won’t use generative AI to write actual assignments for her, Damico said she realizes she needs the knowledge and experience after graduation — “it gives you kind of this edge.”
Boppana, who sees her career growing in the AI space, feels incredibly optimistic about the role AI will play in her future. She knows she’s more knowledgeable and prepared to go into an AI-centered workforce than most, but she feels like the opportunities for growth in healthcare, telecommunications, computing and more are worth wading into uncertain waters.
“I think it’s like a beautiful opportunity for people to learn how machines just interact with the human world, and how we can, I don’t know, make, like, prosthetic limbs, like test artificial hearts … find hearing aids,” Boppana said. “There’s so much beauty in the way that AI helps human beings. I think you just have to find your space within it.”
Colorado Newsline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Colorado Newsline maintains editorial independence. Contact Editor Quentin Young for questions: info@coloradonewsline.com.