
Blog 239

From Obstacle to Opportunity: Reinventing Assignments in the Age of Generative AI

This post is the first in a series arguing for an adaptive approach to artificial intelligence and assessment design. If you are interested in joining the conversation, please consider contributing a blog post (email us at teaching@yorku.ca) and/or joining the Teaching Commons' new community of practice on Generative AI Pedagogies. The first virtual gathering will take place on October 3, but you are welcome to join at any time.

By Tyler Totten

If you have not yet had this uncanny experience, I assure you that you will have one like it soon enough. 

You’re reading through a student essay. It’s cogent, well-structured, well-argued, even thoughtful and nuanced at times. But something – something you can’t quite put your finger on – just seems a bit “off” about it. At first, it seems small: that claim is just a hair away from the truth, isn’t it? Then, it’s every third or fourth sentence that seems just not quite right – inaccurate in small ways that are starting to build up to something big. Maybe you should take a closer look at the sources the paper is citing?

A quick glance at the names of the authors in the bibliography doesn’t raise any red flags. But, wait: did those two scholars really write a book together? Isn’t that a journal about a completely different subject? Didn’t that academic author die a full 10 years before the article being attributed to her here was even published?

It’s a well-written essay, with perfectly formatted citations appearing throughout. However, as you may have guessed, this essay was not laboriously composed by a student; rather, it was produced in seconds by ChatGPT or similar generative AI. 

If you’re not familiar with Large Language Model (LLM) chatbots like these, think of the predictive text and autocomplete you see when you’re messaging a friend on your phone. Your phone “guesses” the next word you’ll want – and, a lot of the time, it’s right. When you tell ChatGPT to write an essay on a particular topic, the same thing happens at a much larger scale: it guesses what words would appear, in what order, in a full-length essay and instantly writes them out for you.
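For readers who want to see the idea made concrete, here is a deliberately tiny sketch in Python. It uses a hand-written, made-up table of word-pair counts, nothing like the billions of learned parameters inside a real LLM, but the one-word-at-a-time guessing principle is the same.

```python
# A toy illustration of "guess the next word". The counts below are
# invented for illustration; a real LLM learns its probabilities from
# vast amounts of text rather than a hand-built table.
from collections import Counter

# How often each word tends to follow the previous one (made up).
next_word_counts = {
    "the": Counter({"essay": 5, "student": 3, "law": 1}),
    "essay": Counter({"argues": 4, "is": 2}),
    "argues": Counter({"that": 6}),
}

def continue_text(words, steps=3):
    """Greedily append the most common next word at each step."""
    words = list(words)
    for _ in range(steps):
        options = next_word_counts.get(words[-1])
        if not options:
            break  # no known continuation for the last word
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text(["the"]))  # -> "the essay argues that"
```

Notice that the sketch never checks whether its output is true; it only picks whatever continuation is most common. That is the root of the problem described next.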

Yet, because ChatGPT is only ever “guessing” – because it doesn’t understand the words it assembles in a sequence – it puts together phrases that seem like they go together but may have no basis in fact. As OpenAI (the company behind ChatGPT) has phrased it, ChatGPT “hallucinates” many of the claims it makes with apparent certainty. So, if an essay is replete with inaccurate statements and citations to sources that do not exist, chances are you have an AI-generated document in front of you.

When I first encountered the scenario I described at the beginning of this post, I sensed something along the lines of “academic dishonesty” was afoot (not yet realizing the potential of ChatGPT to have written the entire thing). I went through the essay with a fine-tooth comb. I found all the inaccuracies. I dug deep to try to find every source being cited. I produced the evidence to show that this paper was largely full of inaccurate information and cited mostly fake sources. While I could not concretely prove it had been written by ChatGPT (which would certainly constitute “plagiarism” or “cheating” under York’s Senate Policy on Academic Honesty), it was undeniably a piece of work trying to pass false claims and false citations off as true. In that respect, essays written by ChatGPT will often be prima facie violations of York’s policy against “dishonesty in publication.”

Importantly, if I were not a subject matter expert, I would not have known to look for any of this. The sneaky thing about ChatGPT is that it “guesses” as well as – or, in most cases, better than – a layperson when stringing together the words of an essay on any topic imaginable. As a result, there is a live debate over whether the advent of ChatGPT will “kill the student essay” – especially since, as OpenAI has pointed out, it will become harder and harder to distinguish student-written essays from AI-generated ones as generative AIs get better at what they do.

Yet, what if this apparent “obstacle” to the essay is actually an opportunity for something new?

Recall what I did when I was faced with an AI-generated essay. I had to go through it line by line to find the mistakes. I had to check every one of its sources to verify whether each actually existed. In short, I had to fact-check something that would seem, to the untrained eye, to be a perfectly acceptable academic paper.

So, I asked myself: why not make that the assignment instead?

Like anyone with an Internet connection, I could type in a prompt to get ChatGPT to generate an essay on one of my course topics within seconds. The task left up to a human being is not the writing of the essay anymore – it’s fact-checking it.

While I would never condone abandoning the essay completely, the value of teaching essay-writing as a skill is increasingly questionable in a world where an LLM can produce dozens of essays at a moment’s notice. As Boris Steipe has observed, our students will encounter these AI-generated materials out in the wild – websites, blogs, and even social media posts with seemingly correct and superficially well-cited arguments supporting whatever claim a person wants to advance. In such circumstances, the more important skill to teach is not how to produce such material, but how to question it. In other words, teaching fact-checking as a skill is more important now than ever before.

When it comes to ChatGPT and similar generative AIs, let’s see what these tools can do. Let’s treat near-perfect AI-generated essays not as an obstacle to the way we’ve always done things, but as an opportunity to teach something new.

(Check out my sample fact-checking assignment here.)

About the author

Tyler is an Assistant Professor, Teaching Stream, in the Department of Social Science’s Law & Society program, as well as the Coordinator of the Foundations program in the same department. He regularly teaches courses that focus on contemporary social issues related to the law, fundamental learning skills, and his area of specialty: animal law. Complementing his academic background regarding other-than-human entities, he is also especially interested in digital pedagogies that interrogate the place of “the human” in technologically mediated spaces.
