Can Machines Think (for Me)?

The Day a Student Made Me Rethink Artificial Intelligence in the Classroom


“Can I use artificial intelligence for this assignment?”

The first time a student asked me that question, I acted as if it were routine, but it caught me off guard. It was 2023, and tools like ChatGPT were still new to me, intriguing and unsettling in equal measure.

Before answering, I realized the question was not really about AI. It was about goals. Was the student trying to learn or simply trying to finish?

In many classrooms, students are trained to optimize for completion, points, and grades. Artificial intelligence just makes that optimization easier. If the goal is submission, AI makes sense. If the goal is learning, the conversation needs to change.

That moment took me back to an older question, famously posed by Alan Turing. Can machines think? In schools, I have found a more productive version to be this: What do students learn when we ask them to understand how machines think?


Before generative AI entered classrooms, my work already focused on teaching about artificial intelligence, not just using it. In 2020, my master’s research explored how machine learning concepts could be introduced in high school through educational robotics. Across workshops, two approaches became clear. One is teaching with AI, using systems that support learners. The other, which became central to my practice, is teaching the concepts behind AI.

Teaching about AI does not require advanced coding. It requires making invisible processes visible. When students work with data, pattern recognition, training, and decision making, they begin to see that AI systems are shaped by human choices, assumptions, and limitations.

This approach first took shape through the Frankie project, where students trained a small robot to recognize visual patterns and act on them. Errors were not problems to fix quickly, but moments to analyze. Was the data flawed? Was the training insufficient? Those technical questions soon expanded into ethical ones.
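The two questions the students kept returning to, whether the data was flawed and whether the training was sufficient, can be made concrete with a toy sketch. The following is not the Frankie project's actual code; it is a minimal, hypothetical nearest-centroid classifier on made-up two-feature "sensor readings", showing how a single mislabeled training sample can flip a prediction.

```python
# Toy nearest-centroid classifier: an illustration of how flawed
# training data changes what a model predicts. Labels, features,
# and values here are invented for the example.

def centroid(points):
    """Average each feature across a list of (x, y) samples."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(samples):
    """samples maps label -> list of (x, y); returns label -> centroid."""
    return {label: centroid(points) for label, points in samples.items()}

def predict(model, point):
    """Return the label whose centroid is closest to the point."""
    def dist2(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Clean data: "circle" patterns cluster low, "square" patterns high.
clean = {
    "circle": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "square": [(4.0, 4.2), (3.8, 4.0), (4.1, 3.9)],
}

# Flawed data: one square was mislabeled as a circle, dragging the
# "circle" centroid toward the squares.
flawed = {
    "circle": [(1.0, 1.2), (4.0, 4.0)],  # second sample is mislabeled
    "square": [(4.0, 4.2), (3.8, 4.0)],
}

print(predict(train(clean), (3.0, 3.0)))   # square
print(predict(train(flawed), (3.0, 3.0)))  # circle: the flaw shows up
```

Running both models on the same borderline point makes the students' debugging question visible: the algorithm did not change, only the data did, yet the answer flipped.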


To deepen that work, I later created an Artificial Intelligence Club. The structure was simple. Students alternated between hands-on experimentation and discussion. Many of the most meaningful debates were student-driven. They raised questions about Target’s use of customer data to predict behavior and connected those ideas to the Cambridge Analytica case and its influence on elections. Understanding how systems work naturally led them to question power, responsibility, and data ownership.

When generative AI tools became widely accessible in 2022, I did not ban them, nor did I embrace them as answer machines. Instead, I clarified expectations around how and why AI could be used. Some tasks required no AI. Others allowed AI for research, revision, or creative exploration, always with transparency and reflection.

In practice, I now combine different tools with distinct roles. Platforms like Flint are used to support thinking, prompting students to explain reasoning rather than receive ready-made answers. Google Teachable Machine allows students to collect data and train simple models. Microcontrollers like the GoGo Board connect sensors, data, and automation in tangible ways. Lessons inspired by the MIT Day of AI help frame ethical and societal questions. Each tool serves the same purpose: helping students understand how machines learn.


Rather than asking whether students may use AI, I ask what they can explain about it. If they rely on AI-generated output, they must describe how such systems are trained, what data they depend on, and where their limits are. The focus shifts from product to understanding.

This work is challenging. It takes time and comfort with uncertainty. Yet the results are consistent. Students stop seeing AI as a black box or a shortcut. They begin to see it as a human-made system that can be questioned, tested, and critiqued.

Ultimately, this is a holistic approach to teaching artificial intelligence. It integrates technical understanding, ethical reflection, and human judgment. When students learn how machines "think", they are also invited to think more carefully themselves, about learning, decision making, and agency in a world increasingly shaped by data and algorithms.
