It’s a Wednesday afternoon. I’m teaching ethics—deontology, intuitionism, utilitarianism—to a room full of eager, intermittently agitated high school seniors. The morning has been cold and rainy, but G Period is particularly enthused. A student I’ll call Eva (all names have been changed to protect student privacy) spills an entire cup of tea over her trousers whilst reaching for the discussion balloon. Natasha raises her hand after every single question. Manuel, who is normally quiet, passionately defends his conception of violence as a means towards retributive justice. Akari is so engaged in the discussion that he accidentally bites the discussion balloon.
These are the ways a teacher senses engagement from teenagers. Engagement is the holy grail of teaching, as it is of most AI and new media companies. Education exists within the context of a vast attention economy, increasingly a battleground, forcing teachers to ask new questions about how they measure student success.
I’ll miss the small wins most when I leave teaching. Despite the daily rewards of educational practice, teachers are leaving their careers at alarming rates. The BBC reports that 30% of teachers leave the profession within the first five years. Generation Z was challenging to teach, but whispers in the faculty lounge suggest that Generation Alpha (also known as Generation iPad) seems entirely beyond the reach of current teaching methods.
One of the reasons for faculty departures may be that technology has significantly disrupted student behavior and curriculum development. User-friendly generative AI is pushing teachers to refine their methods in uncomfortable ways as they simultaneously attempt to mitigate a new set of antisocial behaviors coming from their screen-obsessed students. While the challenges created by new technologies are undeniable, AI has also generated exciting questions about how learning is evaluated.
Generative AI has pushed schools to become more creative and more flexible in how they define student learning, which is positive—but schools are still failing to match new technology when it comes to student engagement.
If an artificial intelligence system can ace the Turing test that is high school, maybe high school is doing something wrong. Parents and teachers tend to treat any engagement as positive, because students seem so much more interested in their phones than in their physical reality. Technology sets a very low bar for engagement. To an AI system, user engagement can simply mean pressing a button or watching a 90-second video. To a teacher, engagement means focus away from a screen. It means sustained attention on something deemed difficult, complex, or “boring”. In this competition with technology, standards for engagement have slumped as new technologies are integrated into the classroom with varying rates of success.
Cutting a piece of paper, clicking an emoji, or writing a question on a Post-It note should not be seen as signs of learning. To survive the relevance test of an uncertain future, education is going to have to embrace different models for academic success.
It’s the end of the third quarter. The deep winter frost is thawing and my students are handing in their essays on the ethics of ChatGPT. Their writing prompt is: “To what extent is it ethical to use ChatGPT to write this essay?” I receive a late submission from Rose, and run it through a plagiarism scan. The scan results are disappointing: “100 percent AI generated”.
A few days later, Rose is in my office. Her first question is: “What app did you use to catch me? I have used every app and they all gave me a pass.” The irony of this situation is unfortunately lost on Rose, who is required by school plagiarism policy to resubmit the essay with a grade cap of 75%.
While some students jump at the chance to use AI to produce quick results, others complain about the way this option disadvantages those who make the diligent effort to create original content.
In a qualitative poll on the subject of AI in the classroom, one of my students comments: “I personally hate it when other students use [ChatGPT] to write essays and do presentations, it’s pretty obvious when they do. When they get away with it, it’s frustrating [because] I gave up time I would rather [have] spent doing something else.” While some students promote manual writing practices, others express that the 20th-century education model is “pointless”, because in their eyes teachers are training them to execute tasks that an app can complete more efficiently. Are they wrong, or is some part of their schooling becoming irreversibly redundant?
During my tenure as faculty of Humanities and English Literature, I have been asked to retool my curricular approach twice over. During COVID, teachers were asked to learn a new set of technologies and implement lessons online. Hilarity ensued and mental health plummeted. A+ students descended into the depths of existential crises, while D- students suddenly flourished, skyrocketing to new heights of academic stardom. Intelligence is shaped by the learning environment, but the learning environment, we all learned, was subject to instability.
Change the environment and you cannot help but change the standards used to measure success.
Where formerly we were asked to view technology with suspicion—minimizing screen time and shaming colleagues for the use of videos to enhance learning—suddenly, technology was our lifeblood. Teachers recorded, Loomed, Zoomed, Pear Decked, and YouTubed. Every week somebody from the faculty was emailing a new app or programme to try out. We fast-tracked dozens of apps to our smart boards and iPads. We became a single, techno-savvy hive mind.
Then the pandemic ended. Teachers were thrown back into the classroom, faced with a curricular Frankenstein and a restless group of screen-addled minds. One day, shortly after COVID, the electricity in our school was briefly interrupted. I met Mr. Detroit in the corridor and asked him how his day was going. He was panicking about what to teach in his next lesson. I suggested he use the library. The thought had not occurred to him. We had become so used to the presence of technology in every lesson that we had almost forgotten how to use physical books.
Since the end of the COVID closures and quarantines, we have been asked to reevaluate approaches to learning yet again. How many videos are appropriate for a ninety-minute lesson? The students are addicted to their screens! Get everyone outside and have them walk around. Nature is restorative! Just as teachers recover from the digital shockwaves of the COVID classroom, a new monster appears on the horizon: ChatGPT.
Noam Chomsky describes Large Language Model systems like ChatGPT as “a way of avoiding learning.” The system undeniably rewards laziness. Students use this automation vampire to write essays and organize ideas on their behalf. Administrative responses over the past few years have ranged from draconian to bohemian. Should we ban iPads? Let’s put their phones in lockers! Let’s scan for plagiarism! Free them! Let them use whatever they feel like! New protocols are swiftly introduced to monitor inappropriate uses of technology and questionable browsing histories. Technology is inverting our educational rules almost annually. One year we master the machine, the next year it masters us.
What do students make of technology in the classroom? Poll responses describe the integration of technologies like Zoom as “boring” but “helpful”. One student writes: “[During COVID] Everything was done using technology and it got boring, [but] we had a lot more time to sleep”. Others complain: “My worst memory is being called in class while I was sleeping or waking up and being the last person on call with the teacher.” “Being online 24/7 gives me headaches but I like the flexibility.” When discussing new technologies integrated into classroom environments, there are dozens of references to falling asleep.
It is fascinating to note that classroom-based technologies are described as tedious, whereas gaming environments or social media can consume a teenager’s focus for hours on end. Education simply can’t compete—and maybe it shouldn’t compete—with virtual reality.
Some look back on online learning with fondness. “I had a wonderful COVID experience. I had online [classes] from 9:00-12:00. The rest of the day I spent biking with my friends, spending time outdoors with my dad, or skateboarding” one student writes. Another disagrees: “The only thing I enjoy about working online during COVID is the fact that my classes are shorter, but I do think online learning ruined my way of learning forever, and I have found it incredibly hard to focus since the 8th grade.” They recall “sitting at the computer for hours” and “Gym Zoom. Terrible.”
When asked about online learning, another student says they “hate pretty much all of it: the social isolation, when classes glitch out, the monotony of classes, and the frustration of having to complete work when you do not understand it because you can hardly ask any questions.” Some express facing difficult off-screen truths at home. “The worst aspect for me is by far the loneliness and lack of human interaction” one comments. Another student explains that the hardest thing about being at home is witnessing “my father’s depression. Seeing that is rough.” Do these difficulties with online learning translate to a suspicion of AI in the classroom?
On the contrary, Generation Z seems to largely embrace AI as a tool for learning—or perhaps for replacing learning (learning, after all, can be difficult and time consuming).
“It is a great resource because it gives unbiased opinions” one student writes. “If I have dyslexia… it’s easy for me to input my ideas into ChatGPT and get a summarized version of my ideas. If I have dyscalculia I can have ChatGPT explain to me a math problem how it would explain it to a 9th grader. I think the possibilities are really endless and will only get better as AI evolves.” One student points out that Microsoft HoloLens is already being used by medical students to analyze organs without harming human tissues. 80% of students polled say that technology is generally beneficial to their education. Only 16% describe technology as a hindrance, and 4% say they feel neutral about it.
Not one student sees AI as an outright threat to their learning.
As my fellow English and Humanities teachers would be swift to point out, there is still a clear difference between a student essay and the generic, soulless pieces of work produced by ChatGPT. “I don’t buy it,” one colleague in the English faculty says in response to the idea that AI is a threat to learning, “and I’m not losing any sleep—we will always have a job.” Another argues that “it seems to be used by the same handful of students over and over.”
Despite this optimism, AI will force educators to change their approaches. New technology is winning the engagement game with incoming students, and governments seem not to question the quality of this new form of engagement from a curricular point of view. The United Kingdom is currently investing £2 million in AI toolkits for schools, as a test case to see whether teachers will benefit from the use of AI (to help, for instance, create lesson plans and quizzes). In justifying this approach, the government preemptively writes:
“Does this mean pupils could be taught by AI? Absolutely not. Teachers are irreplaceable, and AI could never be a substitute for teachers’ professional judgement and the personal relationship they have with their pupils.”
According to the World Economic Forum’s Future of Jobs Report 2023, AI is most unlikely to replace “jobs requiring human skills such as judgement, creativity, physical dexterity and emotional intelligence”. In 2018, the House of Lords published a largely positive assessment of AI called “AI in the UK: Ready, willing and able?”, encouraging industries to align with AI systems.
Employers (and subsequently educators) are beginning to see the relevance of the Humanities. Which jobs are resistant to redundancy? Social workers, farmers, musicians, researchers, performance artists, judges, leaders, and therapists are all seen as durable and valuable careers. Accountants, economists, copywriters, cashiers, marketing teams, and even lawyers are starting to look less resilient. In other words: out with the measuring and calculating, in with the creating and imagining.
As a Humanities teacher, I understand that off-screen human interactions in the classroom are irreplaceable. The context of our learning shapes the quality of our insights and knowledge. In a classroom, bonds are formed over shared beliefs. Friendships develop out of ideological differences. Socratic seminars, group projects, discussions, presentations, and debates cannot be replaced wholesale by artificial imitations or digital platforms.
The qualitative experience of being physically present with others is priceless and the organic development of new ideas in a group setting is likely to always be of significant social value.
It may be that formal education has simply been focused on the wrong sorts of “outputs”. Large Language Models such as ChatGPT can now structure a five-paragraph essay, but these essays still seem to be “missing something”. What exactly are they missing? For decades, the average high school student has been painstakingly copying a formula for the standard English essay: Claim, Evidence, Analysis; Claim, Evidence, Analysis. And while formulas do work, they do not guarantee that a writer is a better thinker for having mastered the technique.
Maybe now that the robots can write perfectly formed five-paragraph essays, we can dare to be creative again. We can start by asking classically unfashionable questions such as: “How does this poem make you feel?” or “What does this passage remind you of, when it comes to your own experiences?”
Let’s imagine an optimistic world in which technology supports our creative development, rather than enslaving us, making us lazy, and outsourcing critical thinking skills. Perhaps the question is not “How do we keep up with all of this technology?” or “How do we compete for engagement?” but rather: “What can humans do that robots can’t?”
For the past three hundred years, educational systems have focused on cultivating minds that are good at measuring, evaluating, memorizing, calculating, and spitting out data sets. But if a robot can measure, evaluate, memorize, calculate, and produce data just as well as a student can, what else might we be doing with our intellectual efforts?
It’s my last year of teaching. During faculty gatherings, teachers at our international school are given crisp, brightly-coloured handouts with new learning targets. These new objectives will ask students to do things like “make original connections between ideas” and “develop public speaking skills”. In other words, we are now being asked to develop the sorts of skills a robot hasn’t quite mastered: all of those messy, creative, original aspects of human thought that behaviorism abandoned for its dream of a high functioning industrial society.
New technology has pushed teachers to encourage creativity over compliance. To future-proof our educational system, the system itself will inevitably need to change its metrics of success. Maybe that’s a good thing.
Dr April Pawar is an educator and writer living in the United Kingdom. She is the Founder and Director of Oxford Writers’ House. She holds an Interdisciplinary MA from New York University and a DPhil from the University of Oxford, where she taught as a Rothermere American Institute Fellow. She is the former President of the Oxford University Poetry Society and a founder of Oxford’s chapter of English PEN. She publishes both fiction and non-fiction and is interested in the intersection between literature and philosophy.