John C. Ickis (MBA, DBA, Harvard University) is Emeritus Professor of Organization and Strategy at INCAE Business School. He is co-author of the book The Octagon: A Model to Align CSR with Strategy, among others, and of articles in World Development and Harvard Business Review.
ChatGPT and the Case Method
No. 22, July-August 2023.
Before each class, the professor asked students to submit short answers to questions about the case of the day. He generally received cursory responses until this year, when their quality took a surprising leap. The reason: ChatGPT, the OpenAI product capable of passing law exams, which is now making its way into higher education. "Should generative AI be banned in class or not?" asks a headline in El Financiero (Costa Rica).
In search of an answer, I put the following question to Florian Federspiel, a colleague and expert in quantitative methods and decision making who teaches with cases: What has been your experience in teaching and grading exams when students have access to ChatGPT?
“That's a great question,” he replied. “In short, I have not noticed a major impact in any of my classes. I have run my exams through ChatGPT and, to my surprise, it didn't answer a single question correctly... after testing it a bit, I can see how easily it goes wrong in the subjects I teach, unless you phrase the questions so precisely that you already know the answer.”
“ChatGPT is one of several 'Large Language Models' (LLMs), and although it is the most advanced, its impact on our case-based educational model should not be exaggerated. Its ability in certain areas—such as helping with coding—is impressive, but it does have some limitations:
- ChatGPT does not know the truth; at best it estimates what is considered true according to the data it has access to (the entire Internet), with all its biases;
- it still 'hallucinates' a lot, offering information that does not exist or is wrong; and
- it is not yet useful for complex analysis and decision making because it lacks the necessary critical thinking.”
Others point to the virtues of ChatGPT: in an HBP webinar, Professor Mitchell Weiss, a self-described “lover of the case method,” explains how to use it to maximize learning in case preparation and concept clarification, giving the example of a student who is struggling to understand the difference between “network effects” and “virality.”
But Professor Weiss does not address the dilemma of the teacher who wants to distinguish between what the student can do independently and what ChatGPT can do. In that situation, students submit text as their own, with no attribution to the true source. This is plagiarism, and there are tools to detect it, such as GPTZero.
There is a consensus among the universities interviewed by El Financiero that generative AI such as ChatGPT is an everyday reality that facilitates a variety of tasks, and that it is important for students to know what they can and cannot expect from it. A good practice is to allow the use of ChatGPT, provided the student explains why and how they are using it, and how its use enriches their educational experience.
ChatGPT can be wrong, but it has a certain humility. When I asked questions about a case, it confessed that it had not been trained on cases written after a certain date. It offered a generic solution for the situation I described, but cautioned that it might not be relevant since it did not know the setting. And so it was: its recommendations were irrelevant.
In the end, ChatGPT can be a valuable academic assistant, but like many useful tools, it can be misused.
-John C. Ickis
Image by pch.vector on Freepik (edited)