Don't Let a Dentist Talk Your Kid Out of Learning to Code
Hey. Before we get into this — this one's for the normies.
And I mean that with zero disrespect. Genuinely. You deserve to understand what's actually going on with all this AI noise just as much as anyone else does. So I left the technical deep-dives at the door for this one. No jargon. No acronyms. Just the truth, in plain English, with some metaphors that hopefully put you in the trenches with me.
When you're ready to see what the real stuff looks like — the actual code, the actual architecture, the actual war stories from building a production app — the rest of my articles are right there waiting for you. But this one's yours. Let's go.
My wife’s dentist — good dude, actually. We went to the same high school. They were making small talk while she was in the chair and he asked whether our son was still learning to code. She said yes. And he goes, “Isn’t AI going to take over that job?”
My wife, bless her, went: “Maybe? I don’t know.”
She wasn't uncertain. She watches me fight with these things every single day. She knows exactly what I go through. She was just being nice to our friend.
But here’s the thing — he’s not wrong to ask. He’s just working with bad information. And that’s not his fault.
That’s the media’s fault.
Let's Talk About What the Media Actually Does
Picture a room. There are a hundred important things that need to be reported today. Real things. Things that actually affect your life. The people in that room look at all hundred of them, stick a finger in the wind, and think: “Okay — what are people already talking about? What’s going to sell enough ad space to make this worth our time?”
That’s it. That’s the whole process. That’s the editorial meeting.
It’s not malicious. It’s not a conspiracy. It’s just a business making a business decision. And right now, “AI is coming for your job” sells a lot more ad space than “AI is a useful tool with some pretty significant limitations that require years of experience to navigate.” One of those headlines makes you feel something. The other one makes you change the channel.
So instead of telling you the boring true thing, they tell you the exciting false thing. With the confidence of people who have never written a line of code, never built anything real, and never had to sit with the consequences of being wrong. They are selling you a feeling, not a fact. And the feeling they’re selling right now is: AI is magic, your job is gone, the robots are coming, be afraid, click here to learn more.
They can take a guy who jaywalked across the street and make him look like a serial killer. And they can take a tool that’s genuinely useful for certain things and turn it into the Second Coming of Skynet — all before lunch, with seventeen ads in between.
My friend the dentist heard the Skynet story. He never heard the jaywalking story. Nobody made the jaywalking story into a headline because who clicks on that?
So he asked my wife a completely reasonable question — and she said “maybe, I don’t know” because she didn’t want to get into it while someone had their hands in her mouth.
I, however, have all the time in the world. And no one’s hands are in my mouth right now. So let’s get into it.
Example 1: The AI That Argued With a Receipt
Here's a single line of code. The only thing it does is round a number:
```
round(2.675, 2)
```

You don't need to know anything about programming to understand this. It's asking the computer: "Round 2.675 to two decimal places." That's it. Third grade math. The answer is 2.68, right? The 5 rounds up. Everyone knows that.
The answer is 2.67.
Before you throw your phone — this isn't a trick question. It's not a riddle. The reason has to do with something deep in how computers store numbers, and the short version is this: computers can't actually hold the number 2.675 perfectly. They store something just barely less than 2.675 — a difference so tiny you'd never notice it — until you round, and the answer falls to 2.67 instead of rising to 2.68. There's also a whole separate thing where computers are designed to round a certain way specifically to stay fair over millions of calculations. The details don't matter right now. What matters is: 2.67 is correct, 2.68 is wrong, and the reasons have been understood by mathematicians for decades.
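If you happen to have Python on your computer (that's the language this one-liner is written in), you can watch this happen yourself. This is a peek, not homework; the decimal tool below is only there to reveal the number the computer secretly stored:

```python
from decimal import Decimal

# Ask Python to reveal what it actually stores when you type 2.675.
# It's a hair below 2.675, which is why the rounding goes down.
print(Decimal(2.675))

# And the rounding itself:
print(round(2.675, 2))  # prints 2.67, not 2.68
```

That long, ugly number is the real value hiding behind 2.675. Once you've seen it, the "wrong" answer stops looking wrong.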
Ask an AI. It will tell you 2.68. Confidently. It will explain why 2.68 is correct with the calm, authoritative energy of a man who has never been wrong in his life and finds the very concept personally offensive.
It’s not lying. It’s doing what it always does — it looked at the shape of the question, recognized a rounding problem, remembered that 5 rounds up, and produced the most familiar-sounding answer. It has never understood the rule. It’s just seen the pattern so many times it thinks it owns the rule. There’s a difference. A massive one.
Imagine you hire a math tutor who has read every math textbook ever printed — every single one — but has never once sat down and actually solved a problem with a pencil. He can talk about math all day. He sounds phenomenal. He’ll explain rounding to you with warmth and confidence and maybe a little diagram on a napkin.
And then he’ll hand you a bill that’s wrong by ten cents and argue with you about it while you’re literally holding the receipt.
Now here’s where it gets interesting — because the “AI experts” you see on your feed? The ones with the big audiences, the podcast, the newsletter, the LinkedIn posts getting fifty thousand likes? They’re the same tutor. They’ve read everything. They can talk about AI all day and night. They know all the right words. They sound like they’ve been doing this since before you knew what a computer was.
But here’s what they’re not doing: the actual work. They’re not sitting down with these tools and grinding through a real problem at midnight when everything is broken and the deadline is tomorrow. They’re not finding out the hard way that the AI confidently got the rounding wrong. They’re not living in the gap between what the AI says it can do and what it actually does when real people start using your product.
The people who are doing that work? The ones who actually know where these tools shine and where they completely fall apart? They’re too busy building. The engineers working on the next version of Claude, the next version of ChatGPT — they’re not on YouTube making reaction videos. They’re in the lab. They don’t have time for an audience because they’re too busy being genuinely useful.
So what you’re left with is this: the people who own the audiences don’t always tell the whole truth. Not because they’re evil — but because the whole truth doesn’t serve their bottom line. Their audience wants to feel like they’re on the edge of something historic. Their audience wants to believe the robots are coming, or that there’s a secret prompt that unlocks superintelligence, or that the guy in the thumbnail has figured out something everyone else missed. The whole truth — “it’s a useful tool with specific limitations that requires real expertise to use well” — doesn’t keep people subscribed. It doesn’t sell the course.
You know what it reminds me of? Those As Seen On TV ads. The ones you’d catch at midnight after a peanut butter and jelly sandwich, half asleep on the couch, watching some guy in a headset tell you this one weird trick will change your life for four easy payments of $19.99. You’d laugh, change the channel, go to bed. It was easy to ignore because it was only on at midnight and you stumbled into it by accident.
Except now it’s 2026. Social media is a TV in everyone’s pocket. And it’s not midnight once in a while — it’s every single second of every single day. The same pitch. The same energy. The same guy in the headset. Just with better lighting, a ring light, and an algorithm making sure you never accidentally change the channel.
So they tell the version that works for their audience. And the real experts? We’re exhausted. And honestly — even when we want to engage, the game is rigged before it starts.
Here’s how it goes. Someone wants to interview you. Great. You show up ready to give them the full picture — A to Z. Every nuance, every caveat, every “yes, but here’s the thing they’re not telling you.” Because context is everything in this conversation. Without context, the truth becomes a lie by omission.
But the interviewer doesn’t want A to Z. They want A, B, and C. They’ve already written the segment. They already know what their audience wants to hear. They need you for one thing — your name on the lower thirds. Your credibility as a prop. “Engineer with 40 years of experience says...” and then they finish that sentence however they want. They got what they came for. You gave them the ammunition and they aimed it however suited them.
So after a while, the real experts stop raising their hands. Not because we don’t care — but because we’ve learned that the format doesn’t allow for the truth. You can’t explain a race condition in a fifteen-second soundbite. You can’t explain why the rounding answer is wrong in a tweet thread designed to go viral. The full story is too long, too boring, and too unlikely to end with someone feeling scared or excited enough to share it.
And the real experts stay quiet, and the noise fills the vacuum, and somehow that noise gets mistaken for the signal.
That’s what my friend the dentist is reading. That’s what’s in his feed. That’s the math tutor with a million followers who has never once graded his own homework.
And that brings me back to the rounding problem — because it’s the perfect illustration of exactly what I just described. The AI gave the wrong answer with complete confidence, the same way the guy on the podcast gives you the wrong picture of AI with complete confidence. Neither of them has ever had to be accountable for being wrong. Now imagine that same confident wrongness running your billing system. That’s not a hypothetical. That’s a lawsuit.
Example 2: The AI That Answered a Question Nobody Should Answer
Here's another snippet. Again — you don't need to understand the code itself:
```
printf("%d %d %d\n", i++, ++i, i);
```

All this is doing is asking the computer to change a number in a couple of different ways at the same time and then print the results. Sounds simple enough.
What does it print?
The correct answer — the one any experienced programmer knows immediately — is: nothing that means anything, and whoever wrote this needs to have a long think about their choices.
There’s a category of question that’s not just hard to answer — it’s broken. Like asking “what’s north of the North Pole?” The question itself is malformed. The rulebook that governs this type of code looks at it and explicitly states: we refuse to define what happens here. The computer can do whatever it wants. Print a number. Crash. Catch fire. This one’s on you.
The right answer is to recognize the question is broken and say so. Clearly. Immediately.
Ask an AI. It prints a number. A specific, real, confident number. It might even walk you through how it arrived there — step by step — like a doctor delivering a detailed, professional diagnosis for a disease that has never existed in the history of medicine. Confident. Thorough. Completely made up.
Here’s the picture. You ask someone: “What time does the next train leave from Union Station in my hometown?” But your hometown doesn’t have a train station. Never had one. There’s a Dairy Queen where a train station might have been, if anyone had ever cared enough to build one.
A person who actually knows your town says: “There’s no train station there.”
The AI says: “3:47. Platform B. Bring your own snacks — that line doesn’t have a café car.”
Specific. Confident. Entirely fabricated. Because it’s learned that train questions have train answers, and it has no mechanism for recognizing when the question itself is the problem.
The media says this thing is about to replace software engineers. Software engineers are over here watching it invent train schedules for cities with no trains.
Example 3: The Movie Critic Who Reviewed a Film That Was Never Made
Last one. Here’s some code:
```
let mut v = vec![1, 2, 3];
let a = &v[0];
v.push(4);
println!("{}", a);
```

Plain English translation: make a small list of numbers, remember where the first one lives, add a new number to the list, then go back and read the first one.
Ask an AI what it prints. It’ll say: 1. No hesitation.
This code doesn’t run. It can’t. The programming language it’s written in looks at it before anything happens and refuses to proceed — because it can already see that something dangerous is about to occur.
Here’s why, with zero technical knowledge required. When you add a new item to a list that’s already at capacity, the computer sometimes has to pick up the entire list and move it to a bigger space — like moving out of a studio apartment into a two-bedroom because you finally bought a couch. When that happens, the address you saved for “the first item on the list” is now pointing at the old apartment. Which is empty. Which may have already been rented to someone else entirely. If you go looking for your stuff there, you’re not going to find it. You might find someone else’s stuff. You might find nothing. In the real world of software, this is how data gets corrupted and how security holes get created.
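You can even catch this "moving day" in the act. Here's a tiny sketch in Python (the same language as the rounding example); Python handles the move safely on your behalf, so all you can do is watch it happen, but the move itself is real:

```python
import sys

numbers = []
previous_size = sys.getsizeof(numbers)  # how much room the list occupies now

for n in range(32):
    numbers.append(n)
    current_size = sys.getsizeof(numbers)
    if current_size != previous_size:
        # The list outgrew its apartment and moved somewhere bigger.
        print(f"after {n + 1} items, the list moved to a {current_size}-byte home")
        previous_size = current_size
```

In the language from this example, that saved "address of the first item" would be left pointing at the old, empty apartment after a move like this, which is exactly why the compiler steps in and refuses.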
The language in this example is smart enough to look at your plan before it executes and say: “I see what you’re about to do. I’m not going to let you do it. Go fix it first.” Like a good friend who grabs your arm before you send the drunk text.
The AI says it prints 1. Because it’s seen a million examples of code that reads the first item in a list, this looked like one of those, and it told you what those usually do. It had no idea the whole thing was rejected before it ever started. It’s reviewing a movie that was never made. The studio said no. There are no actors. There is no footage. There is no film.
The AI just gave it four stars and called the third act emotionally devastating.
The Bullshit Is So Thick — I’m Forced to Wade Through It
Here’s the part that actually exhausts me.
People are out here — on television, in major newspapers, in TED Talks — genuinely debating whether we’re close to building the Terminator. The actual Terminator. Skynet. The robot apocalypse. And after nearly three years of AI being shoved in our faces every single day on every platform, we are standing here watching it get a rounding question wrong. One line of code. Not a complex system. Not a massive algorithm. Three tiny examples, two of them just one line of code each. One. Single. Line. Of. Code.
GET. THAT. IN. YOUR. HEAD.
Believe it or not, I have hundreds of examples of this once you get to 5-10 lines of code or more. The volume of nonsense floating around about AI right now is so extreme it makes dinosaur feces look like bird droppings. It’s not a little misinformation. It’s a full avalanche. It’s been building for three years and every new breathless headline adds another layer. And it doesn’t live out there in the abstract — it lands directly on my doorstep on a regular basis.
I have clients who’ve read the headlines and come to me asking whether they even need to invest in building software anymore, or if they should just “wait for the AI to handle it.” I have new programmers asking me if it’s worth learning to code or whether they’re already obsolete before they’ve written their first working program. And I have early customers of BenchBoard — the app I’ve spent the last eight months building — who wonder out loud whether some AI is going to show up and make the whole thing irrelevant before it even launches.
I have to talk every single one of them off the ledge. Every time. With actual examples. With actual facts. Because the headlines sure aren’t providing any.
And where are those headlines coming from? Let’s be honest. They’re coming from journalists who have never built software. Talking heads who learned the phrase “artificial intelligence” sometime around 2022 and have been dining out on it ever since. Venture capitalists who need a story to tell their investors. Consultants who rebranded overnight and declared themselves “AI strategy experts” because they used ChatGPT to write an email once. These people are speaking with total authority about something they have never done, and they are being handed microphones, publishing deals, and conference stages to do it from.
They are the ones telling my son not to bother learning to code.
They are the ones telling my clients to wait for the magic to arrive.
They are the ones manufacturing fear and awe and urgency out of thin air because fear and awe and urgency pay the bills — and the actual engineers, the people who use these tools every day and know exactly where and how they fail, are too busy fixing the problems to write the headlines.
So yes, this is partly therapeutic. But it’s also — I hope — genuinely useful for anyone who’s been swimming in that swamp and trying to find solid ground. These three examples are what I reach for every time someone comes to me with that look. You know the look. The one that says they just read something on the internet and now they think the robots have already won.
They haven’t. Not even close.
What's Actually True — And Why It's More Interesting Than the Headlines
I use these AI tools every single day. I’ve been building BenchBoard — a live scorekeeping and lineup management app for youth baseball and softball — and I genuinely could not have moved this fast without them. They are remarkable. I am not here to tell you they’re worthless.
But I’ve also watched an AI introduce a bug into my app that caused two players to show up at Second Base on a real field during a real game — because it created two separate systems writing to the same data at the same time with nobody coordinating them. The screen looked fine. Everything appeared to save correctly. It only blew up at exactly the worst moment, when a coach switched between games on their phone. You would never catch it in a demo. You catch it on a real field with real kids and a real umpire staring at you.
I’ve watched the AI design something that looked clean and logical on paper and turned into a disaster the moment actual people started using the app in actual ways — requiring 36 separate fixes before I finally scrapped it and rebuilt it myself. The AI didn’t come up with the solution. I did. Because I’ve been doing this for forty years and I know what breaks in the field.
I’ve watched the AI serve one user’s data to a completely different user because it stored everything in a flat unlabeled pile. Works great when you’re the only person testing it. Falls apart the moment a second human being shows up.
None of these are failures the AI could warn me about, because the AI doesn’t know what it doesn’t know. It doesn’t know what a coach looks like in August heat trying to swap two players before the umpire yells play ball. It doesn’t know what happens when a parent watching a live scoreboard on their phone sees the wrong lineup because the coach just made a change on theirs. It doesn’t know any of that because it’s never been there. It’s read about software. It’s never lived in it.
That’s the gap. And it’s not a small one. It’s the whole game.
What I'd Actually Tell My Friend the Dentist
Here’s what I would actually tell my buddy the next time it comes up — and hopefully after the checkup.
Learning to code is absolutely still worth it. More than ever, actually. And here’s why.
These tools are real and they’re genuinely useful. My son will probably use them from day one and move faster than I could at his age. He’ll never have to burn three hours on the kind of thing that used to cost me three hours. Most of the small, dumb errors that used to take hours to troubleshoot just won’t come up. Well, almost. That’s real, and that’s good.
But he still needs to understand why the rounding answer is 2.67 and not 2.68. He still needs to recognize when a question is broken, and the right answer is “don’t do this.” He still needs to understand why a programming language threw his code in the trash before it even ran — not because he’ll be solving these things manually every day, but because when something breaks in the real world at the worst possible moment, “I asked the AI and it said everything was fine” is not going to cut it as an explanation. Not with a client. Not with a user. Not with a coach on a field who wants to know why two kids just showed up in the same spot.
The AI is a brilliant, tireless, endlessly confident assistant that’s sometimes, well, wrong. It is not a substitute for knowing what you’re doing.
The media will keep writing the Terminator story because the Terminator story gets clicks. My friend will keep reading it over breakfast. And somewhere, a kid is going to decide not to learn programming because of a headline written by someone who has never built a single thing in their life.
That kid deserves better than that. And honestly — so does my son.
I write about what it actually looks like to build real software with AI tools — the wins, the failures, and everything the headlines leave out. If that sounds useful, subscribe and come along.