Navigating AI in Education: Balancing Innovation with What Actually Works
AI in education is either going to save us or destroy us, depending on who you ask. The truth is more boring and more useful than either camp admits.
Every conference I attend has a panel called something like "AI in Education: Promise and Peril." The panelists divide neatly into two camps: the evangelists who think AI will solve everything, and the skeptics who think it will destroy academic integrity forever.
Both camps are wrong. And being wrong in either direction is expensive.
The evangelists want to deploy AI everywhere, immediately, without asking whether it actually improves learning outcomes. They demo shiny tools that generate lesson plans and grade essays and produce "personalized learning paths" that are really just adaptive multiple-choice questions with better fonts. They confuse automation with education.
The skeptics want to ban AI from classrooms, as if that were possible. They write policies that say students cannot use AI tools, then go home and use ChatGPT to draft their own emails. They confuse prohibition with pedagogy.
Here is what actually works: using AI to see things you could not see before.
Not to replace the teacher. Not to automate the assessment. But to reveal the signal inside the noise. Where is this student actually struggling? Not what grade did they get on the test, but what specific concept did they not understand, and how confident are we in that assessment, and is the trend getting better or worse?
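To make that concrete: the kind of per-concept signal described above can be sketched in a few lines. Everything in this sketch is a hypothetical illustration — the function name, the ten-observation confidence cap, the split-half trend — and not how Arrival or any particular tool actually computes it.

```python
from statistics import mean

def concept_signal(scores):
    """Summarize one student's history on one concept.

    scores: list of 0.0-1.0 results, oldest first.
    Returns an estimated mastery level, a confidence in that
    estimate, and a trend direction. All thresholds are illustrative.
    """
    n = len(scores)
    mastery = mean(scores)
    # Confidence grows with evidence: a crude proxy that
    # saturates once we have ten observations.
    confidence = min(n / 10, 1.0)
    # Trend: compare the recent half of the history to the earlier half.
    if n >= 4:
        half = n // 2
        trend = mean(scores[half:]) - mean(scores[:half])
    else:
        trend = 0.0  # too little data to call a direction
    return {"mastery": mastery, "confidence": confidence, "trend": trend}
```

A student who scored 0.2, 0.3, 0.6, 0.8 on a concept averages below 50% — the number a gradebook would show — but the positive trend tells a different, more actionable story.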
That is what we built Arrival to do. It is not an AI teaching tool. It is an AI seeing tool. The teacher still teaches. The student still learns. But now both of them can see where they actually stand, not where the gradebook says they stand.
The balance between innovation and what works is not a philosophical question. It is a design question. You build tools that make the invisible visible, and then you let humans do what humans do best: care about each other and act on what they see.