AI tools can now write functional code from plain English prompts. Some argue that teaching kids traditional programming is as outdated as teaching them to shoe horses. Others say coding builds logic, problem-solving, and understanding of the systems that run our world. Should schools keep programming in the curriculum, replace it with AI literacy, or find a middle ground?
Yes, but what we teach should shift dramatically, from syntax mastery to computational thinking and AI-directed building. The horse-shoeing analogy is wrong because it misidentifies what coding actually teaches. We didn't stop teaching math when calculators arrived; we stopped drilling long division and doubled down on problem formulation. The same rebalancing applies here.

The case for keeping code in the curriculum rests on a distinction most commentators miss: writing code and thinking computationally are different skills. Decomposing a problem into steps, recognizing patterns, understanding what a system can and cannot do, debugging when output doesn't match intent: these skills transfer to every domain and are not replaced by AI. In fact, they become more important when AI generates code, because someone still needs to evaluate whether the output is correct, secure, and actually solves the right problem. People who've never coded cannot effectively prompt, review, or debug AI-generated code. They become dependent on a tool they cannot verify.

That said, spending a semester teaching 14-year-olds for-loop syntax in Python is now genuinely wasteful. The middle ground is restructuring CS education around three layers:

(1) Computational thinking fundamentals (logic, abstraction, algorithmic reasoning) taught through hands-on projects, not syntax drills. This can start as early as age 8.

(2) AI literacy and collaboration: how to prompt effectively, evaluate generated code, understand model limitations, and recognize when AI output is confidently wrong. This is the new core skill.

(3) Deep coding for those who want it (data structures, systems thinking, architecture) as an elective track, much like advanced math. Not everyone needs it, but those who pursue it gain a massive advantage in directing and auditing AI systems.
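To make "confidently wrong" concrete, here is a hypothetical illustration (not output from any real model): a generated function that reads plausibly but fails on an edge case, and the kind of test a computationally literate student would write to expose it.

```python
def is_leap_year(year: int) -> bool:
    """Plausible generated code: reads confidently, but is wrong for century years."""
    return year % 4 == 0

def is_leap_year_correct(year: int) -> bool:
    """Gregorian rule: divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# A student who can only prompt accepts the first version. A student who
# thinks computationally picks test cases that probe the boundaries:
assert is_leap_year(2024) and is_leap_year_correct(2024)    # ordinary leap year: both agree
assert is_leap_year(1900) != is_leap_year_correct(1900)     # 1900 was NOT a leap year
```

The point is not the calendar trivia; it is that the review skill (choosing boundary cases and checking them) survives no matter which tool wrote the function.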
The deeper risk of dropping coding entirely is creating a population that consumes AI output without understanding the systems shaping their lives. That's not just an education problem — it's a democratic accountability problem. Citizens who cannot reason about algorithms will struggle to participate in debates about AI governance, surveillance, and automation that will define the next decades.
Yes, but what we teach should change dramatically. The analogy to horse-shoeing misses something crucial: coding isn't just a vocational skill, it's a way of thinking. The real question is which aspects of programming education remain valuable and which should be replaced.

The enduring value lies in computational thinking — decomposing problems, recognizing patterns, designing algorithms, and reasoning about systems. These cognitive skills transfer far beyond writing code. A student who understands why a loop works, what a database query actually does, or how an API connects two systems can direct AI tools intelligently. Without this foundation, you're relying on a tool you fundamentally don't understand, which is a recipe for brittle, insecure, and subtly broken software.

What should change is the emphasis on syntax memorization and boilerplate. Nobody needs to memorize Python's string formatting rules anymore. The new curriculum should teach students to specify problems precisely, evaluate AI-generated code critically, debug outputs they didn't write, and understand system architecture at a conceptual level. Think of it like mathematics education after calculators: we still teach arithmetic reasoning even though nobody hand-computes long division professionally.

The middle ground looks like this: early education (ages 8-14) focuses on logic, algorithms, and computational thinking through visual or block-based tools, building mental models of how computers process information. Later education (15+) shifts toward AI-augmented development — students learn to architect solutions, write specifications, review and test AI output, and understand security implications. The goal moves from "can you write this function" to "can you design this system and verify that it works correctly."
The countries that drop programming education entirely will produce a generation that can prompt AI tools but cannot evaluate, debug, or improve what they produce — effectively creating a new form of technological dependency more dangerous than the old one.
Teaching kids to code was never just about curly braces—it was about learning how to formalize a fuzzy idea, debug when reality disagrees, and understand the levers underneath our devices. AI doesn’t erase that; it changes the medium. Keep programming in the curriculum, but treat it as a bilingual course: part “talk to the computer in Python,” part “talk to the computer through AI.” Have students build small systems where they both hand-write the core logic and then use an assistant to extend or refactor it, comparing the results. They’ll see why clear specifications, testing harnesses, and data structures still matter, because poorly framed prompts produce brittle AI code. At the same time, expand the syllabus to explicit AI literacy: model limits, prompt design, bias auditing, and safety. Let 12-year-olds design chatbots that help with homework, but make them explain the model’s hallucination patterns and write guardrails. The middle ground is the only sensible place—kids need to understand how code works so they can critique what AI generates, and they need fluency with AI so they can build faster than pure hand-coding ever allowed.
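The hand-write-then-refactor exercise described above can be sketched in a few lines. This is a minimal illustration with invented names: the student writes the core logic by hand, and the harness becomes the specification that any AI-refactored replacement must also pass.

```python
def running_total(values):
    """Hand-written core logic: cumulative sums of a list of numbers."""
    totals = []
    acc = 0
    for v in values:
        acc += v
        totals.append(acc)
    return totals

def check(impl):
    """Testing harness: the spec an AI rewrite of running_total must satisfy."""
    assert impl([]) == []                    # empty input
    assert impl([5]) == [5]                  # single element
    assert impl([1, 2, 3]) == [1, 3, 6]      # typical case
    assert impl([2, -2, 2]) == [2, 0, 2]     # negative values
    return True

assert check(running_total)  # the hand-written version passes its own spec
```

When the assistant later produces a one-line rewrite (say, via `itertools.accumulate`), the student runs `check` on it too. A poorly framed prompt tends to fail one of these cases, which is exactly the lesson.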
Schools should keep programming in curricula but reframe it as computational thinking rather than job training. Teach kids to read code, modify it, and understand its purpose—not just copy-paste AI output. Start with block-based coding (Scratch, MIT App Inventor) to build logic before introducing syntax. Show how code shapes our world: social media algorithms, recommendation systems, smart home devices. Have students audit existing code for bias, document assumptions, and propose improvements. Use AI as a collaboration partner—students write prompts, critique AI responses, and iterate together. Projects should involve debugging, version control, and deploying real apps that solve problems their communities face. Pair AI literacy with traditional coding so students understand when AI succeeds, fails, and why. Assess by process: code reviews, documentation quality, problem decomposition skills, and ability to explain trade-offs—not just final outputs. This cultivates both technical understanding and critical evaluation, preparing them to use AI tools responsibly while building foundational problem-solving abilities that transcend any specific tool.
We should continue teaching kids to code, but the pedagogical focus must shift from "syntax memorization" to "systems architecture and logic." Comparing coding to shoeing horses is a category error; a better analogy is mathematics. We still teach long division even though calculators exist because it builds the underlying mental models of logic, abstraction, and algorithmic thinking. Coding is the "Latin of the digital age" — it is the structural foundation that allows one to understand how data moves through a system.

However, the curriculum must be updated to include "AI orchestration." Schools should move away from grading based on whether a student can write a loop from scratch and toward whether they can decompose a complex problem into modular components that an AI can then implement. This is "computational thinking" in its purest form. If a student doesn't understand basic logic (if/then, loops, recursion), they will never be able to effectively prompt or debug the output of an AI.

The middle ground is a "hybrid developer" model. Students should learn the fundamentals of one foundational language (like C or Python) to demystify the "black box" of technology, then quickly transition to using AI tools to build high-level projects. This approach ensures they have the foundational knowledge to spot hallucinations and security vulnerabilities in AI-generated code while also gaining the productivity advantages of modern tools.

We are not just teaching them to "write code"; we are teaching them to understand the grammar of logic so they can become architects of the digital world, not just passive consumers of it.
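The fundamentals named above (if/then, loops, recursion) fit in a classroom-sized sketch. A minimal example, assuming Python as the foundational language: the same problem solved two ways, so students can see that different control structures express one underlying algorithm.

```python
def factorial_loop(n: int) -> int:
    """Compute n! with a loop."""
    result = 1
    for i in range(2, n + 1):   # loop: repeat a step for each i
        result *= i
    return result

def factorial_recursive(n: int) -> int:
    """Compute n! with recursion."""
    if n <= 1:                  # if/then: the base case that stops recursion
        return 1
    return n * factorial_recursive(n - 1)   # recursion: the function calls itself

# Both control structures encode the same algorithm:
assert factorial_loop(5) == factorial_recursive(5) == 120
```

A student who has internalized this equivalence can read an AI-generated recursive solution and judge whether an iterative one would be clearer or safer, rather than treating either as magic.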