What the Explore Learning report revealed about the risks of AI in education
Artificial Intelligence is making its way into classrooms around the world at full speed, and that pace is already raising a major red flag. A recent report from Explore Learning, one of the largest tutoring providers in the UK, delivered a straightforward message: adopting AI tools in education without solid scientific evidence can do more harm than good to student learning. The document, published as a white paper by the company, points out that while the technology has enormous potential to transform teaching, rushing to implement solutions without real proof of results can weaken critical thinking, create a false sense of content mastery, and breed excessive dependence on automated systems.
The warning comes at a time when governments, schools, and EdTech companies are accelerating the integration of AI tools into teaching, assessment, and student support processes. Explore Learning argues that this movement needs to be accompanied by scientific rigor equivalent to what is expected of any other educational intervention. Otherwise, the outcome could be the exact opposite of what was intended: less prepared students, less empowered teachers, and more fragile educational systems.
Among the most concerning findings in the report is the fact that many AI-based educational platforms are being marketed with ambitious promises but without concrete data proving actual improvement in student performance. This means schools and governments are investing considerable resources in technologies that, in practice, may just be generating pretty dashboards and reports packed with numbers, without any of it translating into real learning. The report notes that many of these tools prioritize easily measurable performance metrics at the expense of deeper indicators of learning, creating what the document describes as metric fixation, where educational progress gets reduced to oversimplified results rather than fostering broader cognitive development.
Another point that stands out is the risk of cognitive dependency. When a student gets used to receiving ready-made answers or pre-digested solution paths from an automated system, the tendency is for them to develop less intellectual autonomy. The Explore Learning report highlights that this dynamic can be especially harmful for children and teenagers in their formative years, a period when cognitive effort and learning from mistakes are fundamental to consolidating knowledge. In other words, making the path too easy might look positive in the short term, but it can exact a steep price on the development of essential skills like problem-solving and logical reasoning.
Assessment and evidence need to be at the center of the conversation
The Explore Learning report is emphatic in stating that meaningful personalization of teaching depends on continuous and precise assessments of a student’s level of understanding, and not on generic automated recommendations. The difference between these two approaches is huge. A continuous, well-structured assessment captures the nuances of each student’s progress, identifying real knowledge gaps. An automated recommendation based only on superficial patterns of right and wrong answers can mask important difficulties and create what Lisa Haycox, CEO of Explore Learning, calls a mirage of false mastery, where short-term gains vanish as soon as the technology is removed.
Haycox argues that more rigorous standards of evidence are needed as AI adoption expands across the education sector. According to her, the UK education system is under more pressure than ever, and AI has significant potential to ease those challenges, but only when backed by strong evidence and proven to improve outcomes, with the same rigor expected of any educational intervention. This statement reinforces the idea that the technology itself is not the problem, but rather the way it is being adopted, often without the necessary care.
Scientific evidence should function as a mandatory filter before any large-scale implementation. Just as in healthcare, where medications must go through rigorous clinical trials before being cleared for use, AI-based educational systems should be subjected to independent and longitudinal evaluations. This involves studies with control groups, impact analysis across different socioeconomic and cultural contexts, and transparency about the algorithms being used. The Explore Learning report reinforces that this approach is not an obstacle to innovation, but rather a guarantee that the technology will deliver on its promises, protecting students from becoming unwitting guinea pigs in experiments without a solid foundation.
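To make the control-group idea concrete, here is a minimal sketch of how an evaluator might compare outcomes between students using an AI tool and a control group, using a standardized effect size (Cohen's d). The scores below are invented for illustration; real studies would use far larger samples, pre-registered designs, and proper significance testing.

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(treatment, control):
    """Standardized mean difference between two groups (pooled SD)."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical post-test scores (0-100), for illustration only
ai_group      = [72, 68, 75, 80, 71, 69, 77, 74]
control_group = [70, 66, 73, 78, 69, 68, 74, 72]

d = cohens_d(ai_group, control_group)
print(f"Cohen's d = {d:.2f}")
```

An effect size like this, measured across diverse school contexts and sustained over time, is the kind of evidence the report argues should precede large-scale rollout, rather than vendor-reported engagement metrics.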
Innovation needs to go hand in hand with responsibility
The central point of the debate is exactly this: there is a growing tension between the desire to innovate and the need to prove that these innovations actually work. Governments, schools, and educational technology companies are speeding up AI adoption in teaching, assessment, and student support processes. But without rigorous criteria and well-defined pedagogical foundations, the risk is trading real progress for pretty metrics that do not reflect what the student actually learned. This scenario puts educators, administrators, and policymakers in front of a choice that cannot be made on autopilot: embrace the technology with enthusiasm or demand that it prove its value before entering the classroom.
It is also worth noting that responsibility does not fall solely on technology developers. Governments crafting policies for digitizing education need to establish clear standards for evaluating and certifying AI-powered educational tools. Schools adopting these solutions need to invest in teacher training so that educators know how to integrate technology into the curriculum in a critical and thoughtful way. And families themselves need to understand that not every digital tool is synonymous with pedagogical advancement. The chain of responsibility is long and collective, and ignoring any single link can compromise the entire educational ecosystem. 🎯
The potential of AI for students with special needs
Not everything in the report is warnings and concerns. Explore Learning also identifies areas where carefully designed AI systems can deliver concrete benefits, especially for students with special educational needs and disabilities, known by the acronym SEND in the British context. The company reports a 35% increase in the number of students with SEND accessing its tutoring services between 2024 and 2025, which highlights the growing pressure on existing support systems.
The white paper suggests that evidence-based AI systems can help identify learning difficulties earlier and adjust tasks to meet each student’s individual needs. For children with dyslexia, autism spectrum disorder, or attention deficit, for example, a system capable of dynamically adapting difficulty levels, presentation formats, and activity pacing can make a significant difference in the learning experience. But here, once again, the caveat is essential: this personalization only works when it is built on solid pedagogical foundations and validated by special education professionals.
Dr. Hisham Ihshaish, Head of Data and AI at Explore Learning, explains that the company’s approach has been focused on applying established learning theories to the design of its AI systems. According to him, the question at Explore Learning was never whether to use AI, but how to use it. The company’s technology is grounded in established learning theories and informed by 25 years of longitudinal student data. Ihshaish further details that the most recent updates to the Compass 2.0 platform dynamically model not only what children know, but how they learn and the pace at which they develop, recalibrating pedagogical scaffolding in real time.
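The report does not disclose how Compass 2.0's modelling works internally. A much simpler technique in the same family, often used in the learning-science literature to estimate what a student knows from their answer history, is Bayesian Knowledge Tracing. The sketch below is purely illustrative and is not Explore Learning's model; the slip, guess, and learn parameters are invented placeholders.

```python
# Illustrative Bayesian Knowledge Tracing (BKT) update -- NOT the
# Compass 2.0 model, just a common way to estimate skill mastery.

def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """Update P(skill mastered) after observing one answer."""
    if correct:
        # Posterior P(known) given a correct answer
        cond = p_known * (1 - slip) / (
            p_known * (1 - slip) + (1 - p_known) * guess)
    else:
        # Posterior P(known) given a wrong answer
        cond = p_known * slip / (
            p_known * slip + (1 - p_known) * (1 - guess))
    # Account for the chance the student learned the skill this step
    return cond + (1 - cond) * learn

p = 0.3  # prior belief that the skill is mastered
for answer in [True, True, False, True, True]:
    p = bkt_update(p, answer)
print(f"estimated mastery: {p:.2f}")
```

Even this toy version shows the principle the report endorses: the system maintains an evolving estimate of mastery rather than reacting to each answer in isolation, which is what allows scaffolding to be recalibrated as the student progresses.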
This kind of approach shows that it is possible to use AI responsibly and effectively in education, as long as the starting point is science and not hype. The difference between a tool that truly helps and one that only appears to help lies precisely in the depth of the modeling and the quality of the data feeding the system.
Personalized learning with AI works, but with caveats
One of the biggest promises of Artificial Intelligence applied to education is personalized learning: a model in which the system identifies each student’s individual needs and adapts content, pacing, and approach to their profile. In theory, this is revolutionary. A student who struggles more with math would receive exercises focused on their specific gaps, while another who already masters the content would advance more quickly to challenges matching their level. The problem, according to the report, is that most platforms available today offer a shallow version of this personalization, based on patterns of right and wrong answers, without considering the emotional, contextual, and motivational factors that directly influence the learning process.
For personalized learning to truly work, systems need to go beyond simple recommendation algorithms. This requires integration with formative assessments conducted by teachers, qualitative analysis of student progress, and most importantly, transparency about how algorithmic decisions are being made. A system that decides a particular student should skip certain content or focus on something else needs to be able to explain that decision in a way that is understandable to the educator, so they can validate, adjust, or challenge that recommendation. Without this layer of interpretability, the teacher loses control over the pedagogical process and the student is left at the mercy of automated decisions that may not reflect their reality.
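One way to picture the interpretability requirement described above is a recommender that always returns its rationale alongside its decision, so a teacher can validate or override it. This is a hypothetical sketch; the function name, thresholds, and actions are invented for illustration and do not come from any real platform.

```python
# Hypothetical sketch: an adaptive recommender that must explain
# itself so an educator can validate, adjust, or challenge it.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str     # "advance", "review", or "stay"
    rationale: str  # plain-language explanation for the educator

def recommend(topic: str, recent_accuracy: float, attempts: int) -> Recommendation:
    if attempts < 5:
        return Recommendation(
            "stay", f"Only {attempts} attempts on '{topic}'; not enough evidence yet.")
    if recent_accuracy >= 0.85:
        return Recommendation(
            "advance", f"{recent_accuracy:.0%} accuracy over {attempts} attempts "
                       f"on '{topic}' suggests mastery.")
    if recent_accuracy < 0.5:
        return Recommendation(
            "review", f"Accuracy of {recent_accuracy:.0%} on '{topic}' "
                      f"indicates a gap worth revisiting.")
    return Recommendation(
        "stay", f"Mixed results ({recent_accuracy:.0%}) on '{topic}'; "
                f"more practice before changing level.")

rec = recommend("fractions", 0.9, 12)
print(rec.action, "->", rec.rationale)
```

Returning the rationale as first-class output, rather than burying the decision in an opaque score, is what keeps the teacher in the loop that the report insists on.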
The evidence available so far shows mixed results. Some research indicates modest performance gains in very specific contexts, usually when the AI tool is used as a complement rather than a substitute for human interaction. Other studies show that in scenarios of rushed implementation without teacher training, outcomes can actually get worse. The Explore Learning report reinforces that effective personalization is not just a matter of technology, but of pedagogy. AI can be a powerful ally when properly calibrated and validated, but on its own it does not solve the structural challenges of education, such as overcrowded classrooms, lack of resources, and unequal access.
The role of the human teacher remains irreplaceable
Perhaps the most important message emerging from this debate is this: no Artificial Intelligence tool replaces the presence, sensitivity, and adaptability of a human teacher. Technology can automate repetitive tasks, offer valuable data on student progress, and create more dynamic learning experiences. But the act of teaching involves dimensions that AI is still far from replicating, like the ability to sense that a student is unmotivated due to personal issues, to adjust an explanation based on a confused facial expression, or to spark genuine curiosity about a topic through a well-told story. These skills are deeply human and remain the foundation of quality education.
The Explore Learning report is clear in concluding that AI works best when used alongside human educators, not in place of them. While technology can help sustain fundamental learning skills, human oversight remains essential for interpreting progress and managing the limitations of algorithmic systems. In this model, the teacher uses data generated by the platform to make more informed pedagogical decisions while maintaining the lead role in conducting lessons, mediating conflicts, and providing emotional support to students. This combination of human intelligence and artificial intelligence is what defines a balanced approach, and it is exactly this balance that many current implementations are ignoring in the race to digitize the classroom.
Lisa Haycox closes the discussion with a reflection that deserves attention: we cannot sacrifice children’s futures for the sake of hype, and we must continue encouraging a healthy debate to protect against that, while at the same time embracing the full potential of transformative technology. This statement captures the spirit of the report well: it is not about being against AI in education, but about demanding that it be implemented with seriousness, responsibility, and above all, with respect for each student’s learning process.
What this means for the future of education with AI
The Explore Learning report is a timely reminder that technological innovation in education cannot be treated as a race without rules. The pressure for quick results, combined with legitimate enthusiasm around the possibilities of AI, can lead to hasty decisions that directly affect millions of students. The document does not propose putting the brakes on innovation, but rather channeling it down paths that guarantee real, measurable outcomes.
For educators, school administrators, and policymakers, the takeaways are clear:
- Demand robust scientific evidence before adopting any AI tool in educational settings
- Prioritize solutions that work as a complement to the teacher’s work, not as a replacement for it
- Invest in teacher training so educators know how to critically interpret and use data generated by AI platforms
- Ensure algorithmic transparency, allowing educators to understand and question the recommendations made by systems
- Monitor long-term impact, avoiding the mistake of confusing short-term gains with real educational progress
That is why, more than ever, the discussion about AI in education needs to be guided by robust evidence, transparency in methodologies, and respect for the central role of the educator. Technology advances fast, and that is great. But education deals with something that cannot be treated as an off-the-shelf product: the cognitive, emotional, and social development of human beings in their formative years. Every decision made without a solid foundation can impact an entire generation. And that is precisely why demanding proof before scaling solutions is not being against innovation; it is being in favor of innovation that actually works and puts the student at the center of everything. 🚀
