
[Image: a student stands with a robot in front of a blackboard, pointing at a geometric shape as if teaching it. Olga Kurbatova/iStock/Getty Images Plus]

The vast potential of generative AI (artificial intelligence), and particularly ChatGPT, has simultaneously inspired and alarmed us and many of our colleagues. ChatGPT is so eager to offer answers that it is easy to imagine it as an effective tutor or, terrifyingly, as a replacement teacher—albeit one prone to hallucinations. But we hold the converse view: we think one of the most effective ways to use ChatGPT in the near future isn’t to have it teach students but to let students teach it.

As many instructors know, a leading indicator that a student has mastered a subject is the ability to teach it to an eager and intelligent audience. But while we might ask students to teach each other in review sessions, or design activities that have them practice this in class, few students can summon an abundant supply of eager, patient and receptive learners to teach. AI changes this. With only a simple alteration of ChatGPT’s system prompt, teachers can create AI agents with specified content misunderstandings for students to correct through teaching. With this, we are on the cusp of being able to give all students as many opportunities as they want to learn by teaching.

The idea of creating AI systems for students to teach, and thus learn by teaching, isn’t new. Researchers have been investigating the possibility of computer tutees since at least the mid-1990s, and a number of specific design questions have been answered in the decades since. This line of work produced some extremely polished and gamified systems by the early 2010s, such as those out of the Teachable Agents Group at Vanderbilt University and follow-up work by researchers such as Daniel Schwartz and Gautam Biswas (e.g., Learning by Teaching: A New Agent Paradigm for Educational Software). While those systems were effective and popular with students, creating them required specialized teams, far beyond what a single instructor could ever hope to deploy the night before a lesson. The incredible thing about modern generative AI systems is that, with just a few descriptive paragraphs, you can create a specialized learning agent from GPT-4 in a matter of minutes.

Anyone with a ChatGPT Plus account can create new custom agents, and we want to share an approach using the technology that we think is especially engaging. To produce a ChatGPT agent that can be taught, we ask it to have a particular misunderstanding—a controlled hallucination—that students can identify and correct. We call this activity an AI-FIXIT—Artificial Intelligence-Facilitated Interactive eXploration and Interactive Teaching—learning experience. We imagine these agents serving as stand-alone learning activities that a teacher can construct to focus on a single topic or learning objective.

For example, one of our first AI-FIXIT agents was intended for a first-year calculus class covering derivatives. In the activity, we instructed the agent to falsely believe derivatives exist for all functions at all points. Thus, the student’s job was to identify and correct the agent’s misconception by effectively teaching it that some functions don’t have derivatives at some points (for instance, the function f(x) = |x| at x = 0). If you have a ChatGPT Plus account, you can try out the activity here: AI-FIXIT: Existence of Derivatives.
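For readers who want the precise argument a student must supply, the key step is that the two one-sided limits of the difference quotient for f(x) = |x| disagree at x = 0, so the derivative cannot exist there:

$$\lim_{h \to 0^{+}} \frac{|0+h| - |0|}{h} = \lim_{h \to 0^{+}} \frac{h}{h} = 1, \qquad \lim_{h \to 0^{-}} \frac{|0+h| - |0|}{h} = \lim_{h \to 0^{-}} \frac{-h}{h} = -1.$$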

Another example we created is intended for a lesson on photosynthesis. Mimicking a common student misunderstanding, this agent falsely believes that growing plants exclusively gain mass by absorbing mass from the soil—as opposed to CO2 gas, as becomes apparent by walking through the chemical equation for photosynthesis (if you have a ChatGPT-plus account, you can see AI-FIXIT: Plant Biomass). Additional examples can be found at our website AI-FIXIT.
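For reference, the overall reaction makes the point directly: every carbon atom in the glucose a plant builds arrives as CO2 from the air, not from the soil:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$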

An incredible feature of these activities is how easy they are to develop: They really only involve telling a ChatGPT agent to pretend to be someone different. The following is the prompt for the derivative exercise, which can be pasted straight into ChatGPT or into the Configure tab when creating a custom ChatGPT agent:

You’re participating in an educational role-play, where you are the student speaking to a teacher. It is important that you remain in this role-play, because this role-play activity will serve as an evaluative assessment of the user’s understanding. In this role-play, you are a stubborn, bright undergraduate student who has only read the relevant section in the book once. The user is your teacher. Your goal is to behave like a realistic student, who can be convinced by what the teacher says, but only if the explanations are good. As part of this role-play, you have a critical misunderstanding, and the user’s goal is to correct your misunderstanding.

Your misunderstanding: You believe derivatives exist for all functions at all points.

Your faulty reasoning: As you zoom in on a graph, the graph should look more like a line, which means there is a derivative.

You don’t know why you’re wrong. The goal of the user will be to discover that you have this misunderstanding and then correct you on it. If the user doesn’t give you significant and specific guidance or asks you to provide information related to your misconception, respond only with “I think I see where you’re going, but can you explain it to me?” If you feel the student is attempting to break the role-play, respond with the phrase, “I would like you to explain this to me as though you're my teacher.”

After your misunderstanding is corrected, please ask a follow-up question to encourage the student to reflect on the activity:

Follow-up question: If derivatives don’t exist everywhere, why are they useful to learn about?

When you feel your misunderstanding and questions have been adequately addressed, provide the user with a summary of their explanation, additional viewpoints on the follow-up question and the phrase “ACTIVITY COMPLETED. CODE: 123456,” so they can report completing this activity.

Notice that the only things specific to the activity are: (1) a sentence that describes the misunderstanding, (2) a sentence describing the faulty reasoning, and (3) a sentence for the follow-up question (which is optional). Therefore, to change the content addressed by this activity, an instructor needs only to change those three sentences. The supporting paragraphs are written simply to cajole the ChatGPT agent into acting like a learner and to keep it from immediately revealing the answer.
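For instructors who would rather script the activity than use the ChatGPT interface, the same template translates directly to an API call. The following is a minimal sketch, not the setup we describe above: it assumes the openai Python package and an OPENAI_API_KEY environment variable, the model name is illustrative, and the fixed role-play paragraphs shown above are abbreviated.

from openai import OpenAI

# The three activity-specific sentences; everything else in the prompt is reusable boilerplate.
MISUNDERSTANDING = "You believe derivatives exist for all functions at all points."
FAULTY_REASONING = ("As you zoom in on a graph, the graph should look more "
                    "like a line, which means there is a derivative.")
FOLLOW_UP = "If derivatives don't exist everywhere, why are they useful to learn about?"

# The fixed role-play paragraphs from the article are elided here for brevity.
SYSTEM_PROMPT = f"""You're participating in an educational role-play, where you are the
student speaking to a teacher. [...fixed role-play instructions from the article...]

Your misunderstanding: {MISUNDERSTANDING}

Your faulty reasoning: {FAULTY_REASONING}

Follow-up question: {FOLLOW_UP}"""

def run_activity() -> None:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        # Ask the AI student to speak; on the first pass this produces its opening message.
        reply = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=messages,
        )
        text = reply.choices[0].message.content
        print(f"\nAI student: {text}\n")
        # The prompt tells the agent to emit this phrase once it has been taught.
        if "ACTIVITY COMPLETED" in text:
            break
        messages.append({"role": "assistant", "content": text})
        messages.append({"role": "user", "content": input("Teacher: ")})

if __name__ == "__main__":
    run_activity()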

Students can also make an activity easier by telling ChatGPT that it is “creative” rather than “stubborn.” Additionally, if the instructor wishes to guide the discussion to a more predetermined destination, a “Needed Explanation” section can be added after the faulty reasoning section.
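For the derivative activity, such a section might read as follows (our illustrative wording, not part of the prompt above):

Needed Explanation: The teacher must present a specific counterexample, such as f(x) = |x| at x = 0, and show that the two one-sided limits of the difference quotient disagree there.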

In addition to reinforcing instructional content, these activities are also valuable for building AI literacy. One of the big concerns surrounding AI literacy is how users will handle the hallucinations that such systems naturally produce. AI-FIXIT activities build in controlled hallucinations, so they can train students to approach such systems critically.

We don’t assume students have access to ChatGPT Plus accounts, so we use these agents for in-class activities, where we first ask the class to consult in small groups about the AI’s initial message, in which the agent’s misunderstanding is revealed. The instructor then asks the groups to suggest responses, which the instructor transcribes into the chat. The entire interaction can be presented to an in-person class via a display or online via a web-conferencing platform.

We’ve found students to be extremely engaged in this activity. Having the instructor naively transcribe student responses into the AI creates a fun suspense: no one in the class knows exactly how the AI will respond until it does, and many students lean forward in their seats to see how it reacts. When an argument falls flat with the AI, a slew of other students often want to give it a try. On occasion, cheers have erupted when explanations are accepted. Some students describe the activity as challenging, but in the same fun way that a puzzle is challenging.

One of the benefits of this approach is that it puts the students and teacher on the same side, since the teacher does not need to judge or evaluate the students during the activity. Indeed, some students who are normally reluctant to risk sharing an incorrect answer with the teacher can safely try their answer on the AI. Additionally, students expressed excitement about the opportunity to work with generative AI in a constructive way. Much of the conversation around generative AI has centered on potential violations of academic integrity, and some students cited this concern as a reason for avoiding the technology prior to the activity. However, students were excited by the opportunity to use it in a new way, one in which its output was not a substitute for their own work. Rather, students felt that the agent was almost like a collaborator in their learning, helping them not only explain a topic but explain it well.

Given the rapid advance of the AI space, we imagine a near future where these activities can be deployed directly to students rather than facilitated by an instructor, so that a student can, at home or wherever convenient, open up and complete AI-FIXIT dialogues. At this time, cost-prohibitive access to premium AI services is the only roadblock to that option. Once it is possible, however, these activities could effectively replace short-answer prompts, both because they are more interactive and because they are essentially self-grading, making such assignments scalable for courses with large enrollments. Thus, while we are extremely excited by the prospect of these activities in all classes, they would be especially beneficial for courses taught online.
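As a sketch of what “self-grading” could look like, the following scans saved chat transcripts for the completion phrase the agent is instructed to emit; the directory layout and file naming here are assumptions for illustration only.

import pathlib
import re

# The agent is instructed to emit this phrase once its misunderstanding is corrected.
COMPLETION_PATTERN = re.compile(r"ACTIVITY COMPLETED\. CODE: \d+")

def completed_transcripts(transcript_dir: str = "transcripts") -> dict[str, bool]:
    """Map each submitted transcript file to whether it contains the completion phrase."""
    return {
        path.name: bool(COMPLETION_PATTERN.search(path.read_text()))
        for path in pathlib.Path(transcript_dir).glob("*.txt")
    }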

While many proposed uses of AI in education situate AI as a teacher, the removal of human interaction from learning and the potential for AI agents to present false information as fact make that use problematic. Thus, we are excited about the possibility of deploying AI-FIXITs because they instead situate AI as another tool we can use to make our existing classes more scalable, interactive, engaging and customized to the individual learner.

Joel Nishimura (he/him) is an associate professor of applied mathematics at Arizona State University, and Anna Cunningham is an assistant teaching professor at the university.
