Too many think of AI as perfectly predictable code
I enjoyed Boyd Kane's "Why Your Boss Isn't Worried About AI" for homing in on how many non-technical people assume AI can be debugged and fixed like traditional software.
It can't.
The core insight: AI problems come from training data, not code bugs.
The software that runs AI behaves very differently from the software that runs the rest of your computer or your phone.
Five key differences that matter:
- Training data vs. code bugs: Issues stem from trillions of words in training data, not reviewable lines of code
- Undebuggable: You can't trace misbehavior back to specific training data
- Unfixable: Retraining doesn't guarantee the problem won't resurface
- Non-deterministic: Tiny prompt changes produce dramatically different outputs
- Unpredictable capabilities: Unknown abilities emerge after release
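The non-determinism point is worth making concrete. A toy sketch (not a real model; the vocabulary and probabilities are invented for illustration): LLM decoding samples from a probability distribution over next tokens, so the same input can produce different outputs across runs, unlike a deterministic function.

```python
import random

# Hypothetical next-token distribution a model might assign for one prompt.
VOCAB = ["yes", "no", "maybe"]
PROBS = [0.5, 0.3, 0.2]

def sample_reply(rng: random.Random) -> str:
    """Sample one 'reply' by drawing from the output distribution."""
    return rng.choices(VOCAB, weights=PROBS, k=1)[0]

rng = random.Random()  # unseeded: each run of the program can differ
replies = {sample_reply(rng) for _ in range(50)}
# Across 50 draws, seeing more than one distinct reply is overwhelmingly
# likely -- identical input, different outputs.
```

Traditional code maps the same input to the same output; sampling-based generation does not, which is why "run it again and check" is a weaker debugging tool for AI.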
"Just fix it" doesn't work with AI the way it does with software. The architecture is fundamentally different.
## Earlier take: AI shouldn't even be called software
Jon Stokes made a similar but more philosophical argument in his 2024 post "AI is not software". Where Kane focuses on practical debugging differences, Stokes goes deeper into how the artifact is made: AI is "grown" from training data by mathematical optimization, not coded from explicit human instructions.
Stokes argues the software label itself confuses ethical and political discussions, while Kane focuses on why practical expectations (debugging, predictability) fail when you treat AI like traditional code. Both land on the same insight from different angles.