Donald Keough, News Editor–

AI use on campus continues to grow with DenAI and LLM-based courses

As the use and development of artificial intelligence continue to grow, students and faculty have become more involved with the technology. 

“Since the emergence of generative AI, Denison has been and will continue to operate at the forefront of this technological revolution,” Dr. Jeff Thompson, the dean of faculty and professor of biology, and Lori Robbins, head of AI strategy, said in a joint statement. “AI clearly provides powerful tools that can enhance the work we do, and we need to ensure that students have the skills and training to use them effectively as they move forward in their careers. We also need to engage with AI properly so that it does not undermine the learning process.”

This semester, 17 faculty members and their classes have been piloting an artificial intelligence platform called DenAI. Around 600 students are participating in the program, according to the same statement. They also said that professors can still use their own discretion to “determine if AI can enhance student learning or critical thinking skills.”

DenAI provides access to more than 15 large language models (LLMs), including a version of ChatGPT. Faculty do not have access to the prompts students enter into these LLMs. The platform is set to launch for all students this spring.

“Building the platform was Denison’s response to providing LLM access to everyone in an affordable way,” Thompson and Robbins said in the same statement. “In addition to giving students access to these tools, we also want to give them the education and training they need to use them effectively and responsibly.” 

To keep their access to the platform, current students must complete an AI microcredential through Denison Edge and ITS. Denison Edge, a program that offers career-based skills as an extension of the career center, is also offering new AI workshops and classes.

There have also been a number of events centered on AI this semester. This year’s fall faculty symposium, titled Integrating AI in Teaching and Learning, featured a series of panels for professors on AI-related topics. Dr. Zhe Wang, an assistant professor in the data analytics department, gave a presentation at the event called The Existential Crisis of Data Analysis in the Age of AI, which covered some of AI’s limitations, including its susceptibility to bias. 

“Suppose you’re HR in the company, and you’re using AI for some screening for your applicants, and in [the company]’s history, there are no female candidates for this role, and your AI model will just rule out female applicants to create some bias,” Wang said. 

In addition to her presentation, Wang has integrated AI topics and activities into the classes she teaches. In one such activity, in her course Sequential Analysis and Applications, she asked students to summarize a reading and then had AI summarize the same reading. They found that the AI focused more on background than on the general picture. 

“AI does summarize information but it usually just over interprets it as [general] questions about the topic instead of generating information from the actual reading,” Wang said. 

She also noted that AI can be useful for generating code, though it often includes information that isn’t needed, such as aesthetic touches like font color. Still, Wang has given assignments in which she asks students to write code with an LLM.

“For this type of designing, there is a lot of tedious work about layout, about information visualization,” Wang said. “So if students go from scratch, it will be a nightmare. That’s why I asked them to use AI to generate it.”

The goal of these assignments is for students to learn which parts of the code are useful while discarding unnecessary design elements. Wang also said it’s important that students learn how to use AI effectively.

“If you cannot do better than AI, you’re not going to find a job,” Wang said. “I can still see room for students not to use AI, because not all companies want to make their data publicly available. If they use ChatGPT for analysis, they are at a risk of some privacy leakage.”

She also said that for simple tasks, AI usually performs better than her students, while on more complicated tasks, students’ work is better. Still, she says students should know thoroughly how AI works, since “they will need it eventually” if AI surpasses the level of student skills. 

The Denison Libraries also hosted AI-focused events during this year’s National Media Literacy Week, Oct. 27-31. 

One of these events, on Oct. 30, was an AI Symposium featuring panelists from across campus: Dr. Lew Ludwig, a professor in the mathematics and computational science departments; Sangeet Kumar, an associate professor in the communication and international studies departments; Melanie Murphy, the executive director of the Knowlton Center; and Yuimi Hlasten, the electronic resources and scholarly communication librarian. 

The event was curated by student communication fellows and moderated by Fiona Kogan ‘26. Kogan said she was pleased with how the event went. 

“It was a really great attendance, I think the panelists were very engaged,” Kogan said. 

She chose the panelists because she thought they brought a number of different views.

“I thought that was a great kind of amalgamation of all the different university perspectives. It’s been great to have kind of a wide net cast and get to engage with a lot of different people,” Kogan said.

The discussion covered a diverse range of topics regarding AI. Kogan said she came up with questions for the panelists partially because of her own interest. 

“I’m taking the class with Dr. Kumar, so this has all been kind of on my mind,” Kogan said. “The questions were drawn a lot from his coursework, and then I’m a creative writing major as well, so I’m deeply averse to AI from that perspective, so it was very easy to draw up [questions] based on my own fears.” 

Two other students at the event, Susannah Snell ‘26 and Elliot Harpham ‘26, shared similar apprehension about AI afterward.

“I don’t have any AI account, and I’ve used it probably ten times in my life, which I don’t think is necessarily a good thing, but I get nervous because I take pride in my own academic prowess,” Snell said. “I know that I have habits of excess, and I worry that if I started using AI and relying on it that I wouldn’t be able to use my own brain.”

She also said although AI can do small tasks that help “take weight off” doing work, she still “would rather have those human relationships” give her information because it helps her learn more. 

“I think I’m probably an extreme case of someone who hasn’t used a lot of it,” Snell said. “You also see how you are behind in some ways, and I think that makes me nervous, especially in a job landscape. There are definitely things where I’m like, ‘okay, I really should use it for small tasks like that. But there’s a part of me that would rather be imperfect in that way, and take my losses.”

Harpham shared a similar sentiment to Snell’s. He says he has done his best not to use AI over the past month and a half, except when it can help him comprehend a topic. For example, he sometimes uses AI to turn readings into podcasts so that they’re more easily digestible. 

“It helps my comprehension go up and I can participate more in class because I’m more confident in my knowledge,” Harpham said. “But in terms of ChatGPT or Gemini, I don’t use any of that.”

Part of his apprehension stems from his summer experience working on a farm, where he said he enjoyed being off the grid. He is also taking a course titled Google and the Global Culture of Search, taught by Kumar, which has influenced some of his beliefs. 

“I think the further I get into that course, the more ethical dilemmas I face that I can’t really ignore anymore,” Harpham said.

He also is critical of AI’s potential mistakes, saying that it’s not always reliable enough. But he also sees the advantages of using it. 

“In the past, I used AI more frequently,” Harpham said. “What I used to consider as something that’s pretty easy, like a 500 or 750 word response to now knowing that I could do it in less than a minute and probably still get a similar grade makes it feel like so much more of a task.” 

Despite the added ease of completing tasks, Harpham remains against using AI. And even though both employers and students use AI to get an edge in job applications, he would rather stick to the basics.

“If a company is going to discredit me for not using the perfect keywords because they’re using ChatGPT, or they’re using Gemini or some other AI tool to sort through their resumes, that tells me that’s a place I don’t want to work,” Harpham said. “I would rather be valued for my abilities than my ability to use an AI tool.” 

Both Harpham and Kogan are in Kumar’s course on Google. Although it isn’t specifically about AI, it does cover generative AI in some ways, according to Kumar. 

Kumar has been teaching in this field of technology for several years, and said that he sometimes has to change his course content during a semester to keep up with recent developments. 

“I tell [students] that the readings and the schedule that are on the syllabus are somewhat tentative,” Kumar said.

He also builds topical assignments into the course, such as an exercise where students bring in news from the technology world. 

“That’s one of the most enjoyable exercises for students,” Kumar said. “One has to have a little bit of open endedness in a course such as this. You have to have significant structure, significant kinds of goals and learning objectives. But at the same time, the brass tacks, the logistical things about readings and all that will be shaped, because who knows what the world will be like in three months from now.”

Next semester, Kumar is teaching a new course called The Geopolitics of AI, inspired in part by his interest in the global dynamics of media and technology. He has also written a book on similar topics, The Digital Frontier: Infrastructures of Control on the Global Web. He says AI often centers around this interconnected online culture. 

“I think AI brings all of it together,” Kumar said. “I think that students should understand how this power rivalry in the world is going to take place in the realm of developing the next smartest AI, whatever that form is… and the countries that get to it will have an edge in all other areas, like in sports, in military, in commerce, in culture, in education and research.”

In the course, he plans for students to use AI not as a tool but as an object of study. 

“The class will take a critical perspective,” Kumar said. “It will be about… how the race for chips, the race for energy, the race for data, is shaping the AI world. And then, of course the ethics and other guardrails that are important, fall by the wayside when you’re in a competition.”

Another point he emphasized was that LLMs can contain bias; he said that “my sense is that not too often, national imperative will override the quest for truth.” He also said he believes there is no such thing as an unbiased AI model. 

“It’s good for students to be aware that generative AI is not some kind of new oracle that speaks the truth,” Kumar said. He also applied this idea to his class on Google, saying “most students start the class by saying, ‘yeah, we think of [Google] as giving us the best answer… and if you ask my students week 12, or week 13, they’re so much more aware of how many agendas, priorities, self interests shape Google search results.” 

The rise of AI models has also increased the demand to understand them. A number of professors have begun teaching new courses, similar to Kumar. One example is Dr. Regina Martin, an associate professor in the English, global commerce and digital humanities departments and the director of the writing program. She is teaching a class titled AI and Society next semester. 

Many of the goals of the class fall under the wide-ranging umbrella often termed AI literacy, which can be broadly defined as the ability to interpret and apply AI. 

“One goal of the class is to make sure that students are critically engaging with AI and not just accepting the output at face value and copying and pasting it into whatever it is they’re doing,” Martin said. “Another goal of the class is to experiment with potential legitimate uses of AI and another goal of the class… is understanding how it is shaping the world around us.” 

She also believes that learning how AI tools can be used involves experimentation. As in her upcoming class, some of the assignments in her current classes involve experimenting with AI. In her first-year writing workshop, one upcoming assignment compares human peer-review feedback with AI-generated feedback. Even so, she says she hasn’t forgotten the potential pitfalls of student AI use in writing. 

“In my writing 101, I’m very worried that using an AI model to produce text that students simply copy and paste into their own essays will absolutely prevent them from learning how to write well,” Martin said. “So in that class, I take a very different tack, and that is trying to help students discern between using AI as a tool that can help them learn how to be better writers rather than using AI to replace their own cognitive labor.”

In her other class, Utopian and Dystopian Novels, she has also had assignments centered on using AI, and she has found that they often highlight the downsides of LLMs. For example, she had students use an AI model to interpret both an older text and more recent stories, and found that “the AI model was very good at incorporating elements of the older novel and the newer stuff was very, very superficial.” She also had students ask AI to summarize a text repeatedly until it began repeating answers, showing how it often can’t find every important point in a story. 

Since she started using AI in classes, she said, one takeaway is that students must be able to refine or assess an output if they want to use the technology, especially given its drawbacks. 

“A fear of teachers is that it takes a tremendous amount of work to learn how to write, and you can’t learn how to write in a semester,” Martin said. “It is a lifelong endeavor, and if you never get to the point where you can produce your own writing, then the question is, do you have the skills that you need to refine AI output to represent what you actually want to say?”

Outside of her class assignments, Martin said she uses AI to complete some tasks, albeit not often. Much of her use involves searching for texts, whether for class or to find bodies of work for student researchers interested in a field she isn’t closely familiar with. She also uses AI to help edit writing in mundane tasks, but she doesn’t use it to write or review her own research.

Four of the five professors interviewed for this article, including three of the four panelists, said they use AI to assist them with certain tasks. 

Like Martin, Dr. Ashwin Lall, a professor and chair of the computer science department, is teaching a class next semester called AI: Basics and Big Questions. The class is a continuation of a previous course of the same name; by the end of senior registration, all of its seats were filled.

“I think students are realizing that before they go out into the world, or go for interviews and stuff like that, they want to be able to say that they can speak knowledgeably, at least a little bit about what AI is and how they’ve used it in a course,” Lall said.

The course has a number of aims, but Lall said the class will go into the conceptual ideas behind how AI works and its effects. He hopes to bring in faculty from across the college to discuss current topics. 

He also says that the class will lean into the more practical side of using LLMs. 

“This is a technology that you’re expected to use in the workplace,” Lall said. 

He also said that he’s less concerned about students limiting their learning through the use of AI, since the class doesn’t have as much of a technical aspect as other computer science courses he teaches. 

“We do worry about that in many of the courses where the goal of the course is very much for students to learn some basic computer science,” Lall said. “Every single course has to have some kind of message in terms of what the AI policy is. But in this course we will be a lot more liberal with AI use. But the goal is to use AI with guardrails so there will be parts of what the students do… that they are responsible for.”

Beyond the coursework, Lall said that AI has helped him in some ways with his research. He says that he often uses it as a tool for finding methods to approach problem solving without extensive literature searches.

“This past year, I’ve basically phrased what my question is to ChatGPT, for instance, and it will come back with a term,” Lall said. “Then I’ll take that term and I’ll go look it up and be like, ‘yes, this is exactly the mathematical formula of what I wanted to figure out. So I just saved myself what could have been an afternoon of digging around.”

After finding certain topics while using AI, he often still performs the computational aspects himself. He also says that you have to be careful when writing code with AI.

“This is one of those things that you have to be really cautious about,” Lall said. “One of the principles we have, particularly with using Gen AI to produce code, is that there seems to be this foundational rule… which is that one should never use Gen AI for something that you cannot actually do yourself.”

Lall also went on to stress this idea for students who use AI to write code or complete tasks. 

“AI can produce something that looks like a pretty good indication of what you want,” Lall said. “But there’s just still an important concept there of having a human that is involved, because you need a human who’s got the sort of critical thinking and critical reasoning skills to be able to look at the output of whatever Gen AI created and say, ‘yes, that actually works.’”

Ludwig, another speaker on the panel, who works in the mathematics and computational science departments, also teaches an AI-related course called Liberal Arts Meets AI. His course covers similar topics but also focuses on solving AI-related problems on campus, such as students hindering their own thinking. 

Although he intended to delve into discussion topics early in the course, he found that some students didn’t know how to properly use AI.

“That was a real learning curve for me,” Ludwig said. “I had to kind of back up and be like, ‘you know, we should probably go through some of these things.’”

One topic he revisited was prompt engineering, which involves conveying questions to AI efficiently and concisely. He says the process is like an art. 

“I wanted to do this with my prompt, but I needed to give it the context of who I was, so it knew what to give me,” Ludwig said. “If it doesn’t give you what you want… use your teaching skills and say ‘hang on a second. Let me ask it a different way.’” 

He also said, as Lall and Martin have stressed, that it’s “important to know what you’re talking about,” otherwise it will likely be harder to craft accurate responses. 

“You’re the human in the loop, and you need to suss out the ideas that it’s given you,” Ludwig said. “You can’t wholly trust it.”

In addition to learning more about using AI, Ludwig said that the class aims to help students communicate about how they’re dealing with the new technology. 

“Students are struggling,” Ludwig said. “I mean, they just are… I think [most] of our students want to be right by us, and we’re not having conversations with them. So that was one of the main impetus for this course.”

Commenting on his discussions with students, he said that “it’s been a really interesting kind of pulling back the curtain for me, to have an honest conversation with students and for them to know that they’re not going to be judged.”

The effect AI will have on students is hard to predict because of its wide-ranging implications. Harpham feels that higher education overall is at a disadvantage in readying students for the workforce. 

“There are so many more steps to get things implemented,” Harpham said. “By the time all that stuff happens, something else has come out that makes whatever you’re trying to do easier.”

The advent of the technology also leaves him with questions as someone who doesn’t want to use AI in the workforce.  

“What’s the path for somebody that doesn’t want to use AI in my job search process?” Harpham said. “I don’t want to use AI in the workplace, I’m not interested in it. I’d rather produce my own work. I’d rather be separated from that. I wonder what the path forward is.”