Artificial Intelligence and Education: What Will Change — and What Will Not
This is ChatGPT's summary of the slide deck that I used for my presentation to Learning Network New Zealand on September 12th, 2025, and shared via Dropbox at https://bit.ly/DylanWiliamPowerpoints
Artificial intelligence (AI), particularly in its recent generative forms, has prompted sweeping claims about the imminent transformation of education. The presentation on which this article is based offers a more measured, evidence-informed account. Its central argument is that AI will profoundly reshape how work is done and how teachers can be supported, but that it will not—on its own—solve the core challenges of teaching, learning, or assessment. The real question is not whether AI will change education, but where, how, and under what conditions it will do so productively.
A first theme concerns what AI actually is, and why recent developments matter. Modern generative AI systems, such as large language models, differ fundamentally from earlier educational technologies. They do not merely retrieve information but generate text, images, music, and code by modelling statistical patterns across vast datasets. This capability, enabled by enormous increases in computing power and training data, explains both their striking fluency and their limitations. These systems do not “understand” in a human sense; rather, they predict plausible continuations of their input. Appreciating this distinction is essential for avoiding both inflated expectations and misplaced fears.
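To make the prediction point concrete, here is a deliberately toy sketch (my illustration, not part of the presentation): a bigram model that "generates" text purely by sampling whichever words followed the current one in its training data. Large language models do essentially this at vastly greater scale, with neural networks in place of frequency counts.

```python
import random
from collections import defaultdict

# A tiny "training corpus" of raw text.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Sample a continuation in proportion to how often it followed `word`."""
    followers = counts[word]
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

random.seed(0)  # reproducible demo
print([predict_next("the") for _ in range(5)])  # e.g. ['mat', 'rug', 'cat', ...]
```

Even this trivial model produces plausible-looking continuations without anything resembling understanding, which is exactly the distinction the paragraph above draws.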
A second theme addresses AI’s impact on the world of work, which provides important context for education. Across domains as varied as medicine, law, consulting, and recruitment, research shows that AI can dramatically increase productivity—but unevenly. Tasks that lie “inside the AI frontier,” such as summarising documents or generating first drafts, are often completed faster and to a higher apparent standard with AI support. By contrast, tasks requiring judgement in unfamiliar contexts, integration of messy evidence, or ethical responsibility often suffer when AI is relied upon uncritically. This pattern suggests that AI is best understood not as a wholesale replacement for human expertise, but as a tool that reshapes tasks within jobs. Most occupations will persist, but the number of people required to do them, and the skills they need, will change.
This leads to a third, crucial theme: education must focus less on preparing students for specific tasks and more on developing the capacity to learn. As the presentation emphasises, the only genuinely durable “21st-century skill” is the ability to respond intelligently to situations for which one has not been explicitly prepared. AI accelerates the obsolescence of narrow procedural knowledge while increasing the value of judgement, adaptability, and learning how to learn. This reframes the educational challenge: success lies not in competing with AI at what it does well, but in cultivating forms of understanding and agency that AI cannot replicate.
The presentation then turns explicitly to schools, highlighting areas where AI is likely to have its greatest near-term impact. Administrative work is an obvious example. Scheduling, drafting communications, summarising documents, preparing reports, and translating materials for parents can all be done more quickly with AI assistance. While such tasks are peripheral to learning itself, they consume large amounts of teachers’ time. Reducing this burden has the potential to improve teachers’ working lives and free attention for instructional work, provided that schools resist the temptation simply to add new demands.
In teaching, the evidence reviewed paints a nuanced picture. AI can be highly effective as a teacher support tool: generating worked examples, creating parallel practice tasks, producing quizzes, drafting rubrics, or offering alternative explanations. Used in this way, AI amplifies teachers’ expertise rather than replacing it. However, claims that AI can reliably personalise learning at scale or function as an autonomous tutor are not supported by current evidence. Performance on tasks—whether by students or AI systems—should not be confused with learning. In some studies, heavy reliance on AI during learning activities is associated with weaker retention and understanding, underscoring the need for careful design.
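As one illustration of the "teacher support" pattern described above, the sketch below asks a language model to draft parallel practice tasks for a teacher to review. It assumes the `openai` Python package and an API key; the model name and prompt are placeholders of my own, not anything recommended in the presentation.

```python
from openai import OpenAI  # assumes the `openai` package and an API key are configured

client = OpenAI()

original_task = "Solve for x: 3x + 7 = 22"
prompt = (
    "You are assisting a mathematics teacher. Write three practice problems "
    f"parallel in structure and difficulty to this one: {original_task}. "
    "Include worked solutions the teacher can check."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The output is a draft for the teacher's judgement, not a finished resource.
print(response.choices[0].message.content)
```

Note that the teacher stays in the loop at both ends, writing the exemplar task and checking the generated solutions, which is what distinguishes this amplifying use from the unsupported "autonomous tutor" claims the paragraph above cautions against.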
Assessment represents perhaps the most challenging area. Generative AI makes it increasingly difficult to treat unsupervised written work as secure evidence of individual achievement. At the same time, AI enables richer forms of assessment—such as analysis of extended performance or authentic tasks—at lower cost. The implication is not that assessment becomes obsolete, but that its design must change. Attempts to “detect AI use” are unlikely to succeed in the long term; more promising approaches focus on redesigning tasks so that AI use is either irrelevant or explicitly incorporated.
Ethical considerations cut across all of these themes. Issues of data privacy, bias, transparency, and accountability are not peripheral but central to responsible AI use in education. Teachers and students need not only access to AI tools but also the expertise to use them critically and ethically. This requires investment in professional learning that integrates technological knowledge with disciplinary and pedagogical understanding.
The presentation concludes with a cautionary but hopeful message. AI is neither a silver bullet nor an existential threat to education. Its effects will depend on the choices educators, policymakers, and institutions make. If AI is used to automate low-value tasks, support high-quality teaching, and refocus education on learning rather than performance, it could be genuinely transformative. If, however, it is treated as a substitute for professional judgement or as a shortcut around the hard work of teaching and learning, it is likely to disappoint. The future of education in an age of AI remains, emphatically, a human responsibility.



Hi Dylan,
Thank you for the summary of your presentation. It prompted me to publish a short blog post in response. What particularly caught my attention was the idea that AI systems “just predict” and do not understand, with the implicit contrast that humans do something more than prediction.
In exploring how human cognition actually works, I have been drawn to recent developments in contemporary cognitive science, where Predictive Processing and Active Inference frame the brain itself as a prediction system, continuously generating best guesses about what comes next. This raises the slightly uncomfortable possibility that what we call human “understanding” may be nothing more, and nothing less, than especially successful, action-grounded prediction.
https://predictablycorrect.substack.com/p/prediction-understanding-and-the
In the post, I explore this idea and draw on recent papers in AI and cognitive science to highlight both the deep similarities and the important differences between human and artificial predictive systems, including why AI predictions currently lack the robustness and grounding of human ones. I would be very interested in your thoughts.

Adam