What We Didn't Cover

This primer focuses on the evolution from a plain LLM to autonomous agents — and the constraint that governs them all. Here's what we deliberately left out: each topic is worth understanding, but none is essential to the core arc.


A note on "AI" beyond this primer

This primer covers LLMs — general-purpose language models like ChatGPT and Claude. But "AI" is much broader. When you hear that "AI predicts protein structures" (AlphaFold) or "AI forecasts weather" (GraphCast), those are specialized models: purpose-built neural networks trained on domain-specific data, not chatbots. They share some underlying technology — neural networks, and in some cases transformer-style architectures — but are entirely different kinds of systems: no context window, no system prompt, no conversation.

When ChatGPT explains biology "correctly," it's because the explanation existed in its training data — not because it computed anything about molecules. The model that actually predicts protein structures is a different system entirely.


This primer is a work in progress. If something is unclear, missing, or wrong — or if you have a better example or analogy — contributions and feedback are welcome in the GitHub repo.

← 9. Context Engineering