CETLSS Spotlight
The CETLSS Spotlight is a blog authored by CETLSS staff that highlights happenings in teaching, learning, and career support, both at App State and beyond.
Spotlight: ChatGPT
1/27/23
by Lindsay Masland
Although artificial intelligence and related language processors are not new, they didn't occupy any space in my "pedagogical brain" until exam week of Fall 2022. It was then that people started talking about a new technology called ChatGPT on Twitter. At first, I ignored it. I didn't have the time or energy to care. Until I saw screenshots of the types of material this "bot" was able to produce... Example after example of serviceable essays, worked math problems, and "original" poems began to fill my social media feed. It was then that I realized that this sort of technology could have wide-ranging impacts on teaching and learning. (And if you'd like to see some samples of ChatGPT products without having to sign up for an account yourself, check out these slides, which both explain ChatGPT as a technology and which include many examples of generated work.)
I think we've found ourselves in a bit of a pressure cooker when it comes to ChatGPT and related technologies. We are learning new things about it every day, and yet, we need policies and solutions now. Or yesterday, really, since the semester has already begun. Because of the pace at which things are evolving, my hunch is that the pedagogical response we have to ChatGPT in Spring 2023 will be very different from the one we have a few semesters from now. Like everything in teaching, I think it's best if we view our response to this technology as a marathon and not a sprint.
So, what are the options now? What do we say to our students about this technology? What do we allow or disallow when it comes to ChatGPT? At this "pressure cooker moment," I think we have four potential options:
1. We ignore it.
Let's be honest. We've been through a lot these last few years. The last thing we need right now is a new, disruptive technology. So if you feel like you're at your wit's end when it comes to modifying your teaching approach, this might be the right course of action for you in the short term. Does this mean that a few students might use this technology to support or supplant their learning in ways you don't love? Maybe. But I think you should be honest with yourself about your capacity for or interest in surveilling student behavior. I'm personally never interested in policing students, as it seems like the least effective way to help them learn, but even if I were interested in that, I just don't have the capacity to add another item (i.e., catching students using ChatGPT) to my teaching to-do list!
Or maybe you still have a lot of gas in the tank, but this sort of technology isn't really disruptive for you, your field, and/or the way you've designed your courses. Maybe it's hard for students to effectively use this sort of technology to complete their assessments. That's the case for a statistics class I'm teaching this semester, for example. Although it's true that this technology can solve math problems, my class has students working with very large, real-world data sets. They learn how to run statistics so they can make recommendations for real community partners. Since ChatGPT doesn't consult the internet and since it only contains information up to 2021, it's not likely to be very helpful for my students who are being asked to make real-world recommendations about large sets of real data. Maybe you have a class like that too? Then you might be a candidate for ignoring.
2. We strongly discourage it.
I'll admit that this response was the one I saw many folks jump to at first. I saw lots of examples of syllabus policies expressly forbidding the use of ChatGPT at all. I'm hesitant about this response, though, for multiple reasons. First is the issue raised above—anything you disallow in your classes has to be surveilled in some way. The result? An extra task for our already too-full plates. Second, I didn't get into teaching to police student behavior. Especially at App State, where a primary focus is transformational teaching and learning—trying to catch students in doing, well, anything feels like the opposite of that. Third, we talk a lot about preparing students for the "real world," and it's pretty clear that the real world into which they will be graduating is just like the real world they currently inhabit — a world in which ChatGPT exists. Better prepare them to thoughtfully engage (or avoid engaging) with this technology than to ban it outright, I think. All of that being said, this is just my perspective, which is a reflection of the personal values I hold and the various situational and positional constraints I am subjected to. Others with different situational factors may come to a different conclusion about how a ban supports student success.
3. We softly discourage it.
Once folks started to shift out of emergency mode, "soft discouragement" began to get attention. For example, as we continued to play with it, we learned that although ChatGPT's products are serviceable and often somewhat correct, they're almost never exemplary. The responses lack creativity and a unique voice. Even if you ask the bot to write an explanation of photosynthesis in the form of a rap or in the style of Mary Oliver's poetry (both of which it will do), the outcome is often fine. But that's it. Just fine. And not only that, ChatGPT is sometimes wrong. If it only partially knows the answer to your question, it will fill in the rest, which sometimes means generating completely fake citations or writing the same idea over and over, but in slightly different ways. If students were invited to critique the outputs of this technology, they might quickly come to the conclusion that it isn't as magical as it seems on the surface.
And if its just-fine-ness isn't enough to dissuade someone, maybe the ethical argument is. Investigative reporting has revealed that this currently free technology was built, in part, using Kenyan workers who were paid $2 an hour to attempt to cull all of the explicit, traumatic, and/or pornographic material that the earliest versions of the bot produced. Students (and professors) might be horrified to know that the contract to do such culling was terminated 8 months early because too many of the workers were experiencing PTSD symptoms connected to the material they were tasked with removing. Do we feel comfortable using technology that was built through such exploitative means? And not only that, but there is the argument that each time we interact with ChatGPT, we are providing free labor that improves the bot's ability to produce. Eventually, this technology will be monetized, so if we want to continue to use it, we will have to pay for access to something we helped to build.
Or, to take a completely different tack, we could try to design it out of utility. Meaning, we can select class assessments that don't benefit from ChatGPT input. For example, some instructors have decided to shift to all handwritten, in-class assignments. Although this might solve the ChatGPT problem, it generates new ones—requiring handwritten responses is not inclusive of some student learning needs. And besides, do you really want to go back to grading handwritten essays? So perhaps a different angle might work. As mentioned above, the technology was only fed information up to 2021, so it is unable to make connections or applications to current-day happenings. It also has a limited ability to generate good citations, and it's not particularly good at synthesizing ideas across multiple sources. ChatGPT can't produce multimodal products (i.e., it can only produce written word), nor can it generate authentic products like websites, videos, or slides. Its ability to produce metacognitive or process-based reflections is severely limited, so assignments that ask students to explain their process of completing an assignment or how they incorporated specific pieces of instructor feedback aren't good candidates for the technology. So, if we are creative with our assignment formats, we could softly discourage the use of ChatGPT in the first place.
4. We encourage it.
Some instructors have decided to fully embrace the technology and incorporate it directly into their courses. I'll admit that this was where my own head went as soon as I heard about the technology. And once I started playing around with it and realized that most of the output was fine, I started envisioning assignments that might invite students to generate ChatGPT output and to critique it or modify it to be better than fine. This seemed like an excellent way to support students in critical thinking and to prepare them to use a technology that would likely be a part of their post-graduate experiences! After I heard some of the ethical arguments discussed above, I dialed back my enthusiasm, but I still think that there are many inventive ways to introduce students to the affordances and constraints of technology like this. A great example of such an assignment comes from Morton Ann Gernsbacher, Psychology professor at University of Wisconsin-Madison. Or perhaps you don't want to give students free rein when it comes to the use of ChatGPT, but maybe you recognize there are certain phases of an assignment like outlining or brainstorming where ChatGPT input makes sense. Maybe you're OK if students use it, as long as they cite it. There are lots of examples of classroom policies that work with ChatGPT (and policies that expressly forbid it) here.
So there we have it. Four possible responses to ChatGPT, none of which is perfect.
I think the best thing we can do right now is to ask ourselves these questions:
- What do I want to model to my students about this technology, which surely isn't going away?
- What additional labor am I willing to invite into my teaching (i.e., devising new assignments, trying to catch use of ChatGPT, etc.)?
- How can I ensure that my response to this threat isn't bigger than the threat itself (i.e., if only a few students might use it, does it make sense to completely overhaul things)?
- What decisions can I make that will maximize BOTH my students' success and my own enjoyment of this facet of my career?
As long as we are making thoughtful decisions, I think we are on the right path, even if the path is longer and more winding than we might like.