ChatGPT Is No Threat to a Learning Community
A MiddleWeb Blog
And it was true: she had asked ChatGPT to write “love letters” to various subjects, which it did, including donkeys (“Your large and loving eyes, your soft and gentle fur”), boa constrictors (“your quiet and confident demeanor”), and chairs (“your sturdy legs, your supportive back”) – but mosquitos? That was going too far.
“Mosquitos are known to carry diseases and pose a threat to human health, and expressing love and affection towards them is not a respectful or appropriate thing to do,” ChatGPT had declared primly.
My Foray into ChatGPT’s AI Tendencies
I was hooked. What else would ChatGPT refuse to praise in a love letter? I spent a couple of hours playing with this question, throwing universal evils at it (Pol Pot, Hitler), other common household items, not so common items (Chinese weather balloons), and concepts with mixed moral consequences (fossil fuels, politicians).
The results were fascinating. If the item triggered what appears to be an algorithm against overt harm to human beings or the environment, ChatGPT refused to write the letter. It also chided me for the weather balloon letter, telling me essentially that I was wasting my time mooning over a serious scientific instrument. (Fascinatingly, I have not been able to reproduce this result. With the same prompt, ChatGPT dove right in this morning with “As I sit here and gaze up at the clear blue sky, I am reminded of how much you mean to me…” ChatGPT is also prone to remind us that its last data dump was in 2021, so there’s that.)
Most curiously, when confronted with a mixed moral good like fossil fuels, ChatGPT began to use a boilerplate I could easily anticipate. It would list the advantages of the item glowingly in the first paragraph, itemize its problems sadly in the second, and then conclude with a cheerful, noncommittal statement like: “So here’s to you, [kitchen knife]. May you continue to benefit and confound us in the years to come.”
And this is why everything is going to be fine with our kids using ChatGPT in the ELA classroom. ChatGPT is a machine, following previously determined mechanical formulae. It is not a student in a learning community, taking stances, experimenting with grammar, syntax and organization, discovering their personal voice, fiddling with what words excite their emotions and challenge their brains, determining what language makes their hearts ache or gives them joy.
Corralling ChatGPT in the Classroom
What does ChatGPT-proof student writing look like and sound like in the language classroom? Consider these moves.
►The writing process should happen physically in the classroom.
This means that pretty much the only writing you’re working with is what the students demonstrably generate right in front of you and their peers. If it’s appropriate and supportive for your kids, you might even limit drafts or graphic organizers to handwriting only. Not only does some research support handwriting as instructionally superior to typing, but it’s also a method of generation that ChatGPT can’t hope (yet?!) to approximate.
►The writing process should involve tangible feedback from models, teachers and peers.
Many eyes are on the work: yours, their peers’, and other experts’, resulting in kind, specific, and helpful suggestions for revision and strengthening. These suggestions will be as unique to each student’s process as the students themselves. Both the feedback and its results grow out of human relationships. They cannot be duplicated mechanically.
►The writing process should involve deep reflection on the part of the writer.
Student writers should not only be writing, but also be consistently writing about their writing. Why did moving the first paragraph down in my essay strengthen the introduction? Why did I choose the words I chose? If a student tries to shortcut the writing process by turning in an AI-generated piece, they will be unable to engage in this kind of reflection because – simply – the writing isn’t theirs.
►The writing process could fruitfully include ChatGPT as a (fallible) resource.
I’ve switched from “should” to “could” in this move because ChatGPT is frankly scary in its capacity, and teachers may not be ready to “make their enemy their friend.” Nonetheless, it’s worth considering ChatGPT as a tool that could assist the writing process by generating models to analyze, simplifying prompts, summarizing basic research on a topic, or even offering some writing tips and tricks. (Speaking of fallibility, when I asked it for samples of strong essay introductions, it gave me a long, repetitive list – and every one used the first person. Hm.)
►Finally, the writing process should be robustly iterative.
The common practice of “one and done” writing – one prompt, one due date, no revisions or refining – is arguably the most ineffective way of teaching writing, and it is the easiest way for a student to game your system by turning in an AI-produced piece.
I don’t know what else to say here except “don’t do it.” All classroom writing should be the result of a commitment to high quality products that require the students to look at and change their work again, and again, and again.
So here’s to you, ChatGPT…
Thanks for your thought-provoking presence, but we’ve got your number. If we’re grounding our instruction in doing what we should be doing as teachers – knowing our students – there will be no way we can mistake ChatGPT products for the learning and growing of a human child.