LLMs & Tokens Explained
What an LLM actually is, what a token is, why context windows matter, and why model choice changes the answer.
Tour M1 -- Wed Apr 29, ~20 min
What we cover
- LLMs in plain English -- next-word prediction trained on a lot of text (toy sketch after this list)
- What a token is, why it matters for cost and quality (token-counting sketch below)
- Context windows: 200k vs 1M tokens, when each matters (covered in the same sketch)
- Why model choice (Opus / Sonnet / Haiku) changes the answer you get back (comparison sketch below)
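How next-word prediction works, in miniature: the model assigns a probability to every token in its vocabulary, a sampler picks one, and the loop repeats. In the sketch below, a hand-written probability table stands in for the neural network, so the vocabulary and numbers are made up; only the shape of the loop matches the real thing.

```python
import random

# Toy "model": for each context seen so far, a probability distribution
# over possible next tokens. A real LLM computes these probabilities with
# a neural network over a vocabulary of ~100k tokens; the loop around it
# looks just like this one.
NEXT_TOKEN_PROBS = {
    ("the",): {"cat": 0.5, "dog": 0.4, "idea": 0.1},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"down": 0.6, "quietly": 0.4},
}

def generate(context: tuple[str, ...], max_tokens: int = 3) -> list[str]:
    out = list(context)
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(tuple(out))
        if probs is None:  # no known continuation: stop generating
            break
        tokens, weights = zip(*probs.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return out

print(" ".join(generate(("the",))))  # e.g. "the cat sat down"
```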
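Tokens, concretely: the sketch below uses tiktoken, OpenAI's open-source tokenizer, because Anthropic doesn't publish Claude's tokenizer as a library. The exact counts for Claude will differ, but the rough ratio (3-4 English characters per token) carries over. The window size and price constants are placeholders, not quoted rates -- substitute whatever your model's docs say.

```python
import tiktoken  # pip install tiktoken; OpenAI's tokenizer, used here
                 # only for illustration -- Claude tokenizes differently.

enc = tiktoken.get_encoding("cl100k_base")

text = "Context windows are measured in tokens, not words or characters."
tokens = enc.encode(text)          # a list of integer token IDs
print(len(text), "chars ->", len(tokens), "tokens")

# Will a document fit in the context window? Both numbers below are
# illustrative: 200_000 is a common window size, the price is a placeholder.
CONTEXT_WINDOW = 200_000           # tokens
PRICE_PER_MTOK = 3.00              # placeholder $/million input tokens

doc_tokens = len(enc.encode(text * 1_000))
print("fits in window:", doc_tokens <= CONTEXT_WINDOW)
print(f"input cost ~ ${doc_tokens / 1e6 * PRICE_PER_MTOK:.4f}")
```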
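And model choice: the same prompt sent to two models via the anthropic Python SDK. A sketch, assuming `pip install anthropic` and an ANTHROPIC_API_KEY in the environment; the model IDs are illustrative, so check the current model list before running.

```python
from anthropic import Anthropic  # pip install anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = "In two sentences: why do tokenizers split 'unbelievable' into pieces?"

# Illustrative model IDs -- substitute current ones from Anthropic's docs.
for model in ["claude-3-5-haiku-latest", "claude-3-7-sonnet-latest"]:
    reply = client.messages.create(
        model=model,
        max_tokens=200,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(reply.content[0].text)
```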
Why it matters
You'll spend the rest of the program writing prompts. Knowing what the system is actually doing under the hood makes your prompts noticeably sharper with no extra effort.
Hands-on moment
Open Claude.ai. Ask the same question two ways -- once with a 50-word context, once with a 500-word context. Notice the difference in the answer. Discuss with a partner what changed.
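If you'd rather script the same experiment than click through Claude.ai, here is a minimal sketch under the same assumptions as above (anthropic SDK installed, API key in the environment, illustrative model ID). The context strings are stand-ins you'd replace with your real 50- and 500-word versions.

```python
from anthropic import Anthropic  # pip install anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

QUESTION = "What should we prioritize this quarter?"
SHORT_CTX = "We are a 10-person SaaS startup."  # stand-in for ~50 words
LONG_CTX = "We are a 10-person SaaS startup. Revenue is flat, churn rose..."  # stand-in for ~500 words

for label, ctx in [("50-word context", SHORT_CTX), ("500-word context", LONG_CTX)]:
    reply = client.messages.create(
        model="claude-3-7-sonnet-latest",  # illustrative model ID
        max_tokens=300,
        messages=[{"role": "user", "content": f"{ctx}\n\n{QUESTION}"}],
    )
    print(f"--- {label} ---")
    print(reply.content[0].text)
```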
Related artifacts
- pre-read -- M1 Pre-read: ../../public-site/github-pages/course-pages/module-1-pre-read/index.html
- deck -- M1 Kickoff Deck (slides 9-16): ../../00-current-workshop/m1-kickoff-master.pptx
Source files live alongside this site under clients/greenridge-growth/; paths above are relative to the syllabus-site root.