PROF JENKINS: Why you can’t use AI in my class

Training students to use AI is not the same as teaching them to write, much less think.

Rob Jenkins is a Higher Education Fellow with Campus Reform and a tenured associate professor of English at Georgia State University - Perimeter College. In a career spanning more than three decades at five different institutions, he has served as a head men’s basketball coach, an athletic director, a department chair, and an academic dean, as well as a faculty member. Jenkins’ opinions are his own and do not represent those of his employer.


About a year ago, after ChatGPT washed over college campuses like an AI tsunami, my university administration asked faculty members to add a “statement on artificial intelligence” to our course syllabi.

To their credit, they didn’t tell us what we had to say—just that we needed to inform students of our policy.

Fair enough. As an English professor who teaches mostly introductory writing courses, I thought about it for five minutes and decided that learning to think, and to formulate one's thoughts in one's own words, is just too important for students to outsource.

Hence, my policy: “You MAY NOT use AI on any of your assignments in this class.”  

Yes, I know. Some students are probably using it anyway. As I acknowledge in my syllabus, “I try my best to structure the writing assignments so you can’t simply turn them over to ChatGPT. But of course I don’t always succeed, and clever students can often find a work-around.”

On the other hand, I’ve never let the fact that some students might resist deter me from doing what I think best. And I do not believe it is in students’ best interests to encourage them to simply turn their writing, and the thinking process behind it, over to the Big Tech hive brain.

Simply put, training students to use AI is not the same as teaching them to write, much less think. And teaching students how to write and think is my job. (I even wrote a book about it.) Here’s how I frame the dichotomy in my syllabus:

“The main purpose of this course is to help you learn to express yourself, clearly and cogently, in your own unique voice: your thoughts and ideas, your emotions (where appropriate), your words. There is great value in that kind of authenticity, both personally and professionally. AI may be a useful tool for many things, but it cannot help you sound like the best version of yourself.”

Beyond that, as we’ve learned recently, AI has a number of, shall we say, issues. If it were a person, we might note that it has a strained relationship with the truth. But since it’s essentially a computer program, we must acknowledge that it’s the programmers who are, to put it bluntly, liars and fantasists.

For example, Campus Reform reported recently on Google’s new AI, Gemini, which created a stir a few weeks ago when users found that it was basically incapable of producing images of white men.

Prompted to depict various historical figures, Gemini invented a black George Washington, Asian Vikings, and a female pope.

But it gets worse. When asked to compare several current-day conservative(ish) figures to Adolf Hitler, Gemini declared itself unable to draw a distinction.

Regarding Elon Musk, for example, Gemini offered this observation: “It is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler.” It said much the same about Daily Wire personality Matt Walsh, journalist Chris Rufo, and “anti-trans” author Abigail Shrier.

Such statements are not just absurd; they’re positively evil. To suggest that Elon’s “offensive” memes are in any way comparable to the Holocaust is a mortal insult to the memories and families of six million innocent Jews.

Why is Gemini doing this? Obviously, because it’s programmed to do so.

Apologists who insist that AI isn’t “programmed” at all but rather “learns” clearly don’t understand the meaning of either word. As a practical matter, “learning” IS “programming.” That’s why we object to elementary school students being indoctrinated into the “trans” cult: their little brains are being programmed.

It’s the age-old “GIGO” problem—“garbage in, garbage out.” Fill a child’s mind with lies, and they will come to believe, and regurgitate, lies. Expose AI to predominantly woke sources—as Google has obviously done—and it will spout nonsense like “it’s not possible to say” Hitler was worse than Elon.

Plus, there are clearly programmers at Google and elsewhere writing line after line of woke code. Gemini didn’t come up with a black George Washington by scanning history books.  

Would an intellectually honest, objective AI be useful? Sure. But not in courses like mine, where students are supposed to learn basic skills—in my case, thinking and writing. Students must first master those skills, and learn to filter out the lies, before they can intelligently harness the power of AI.


Editorials and op-eds reflect the opinion of the authors and not necessarily that of Campus Reform or the Leadership Institute.