Chapter 5: Context & Token Mastery
What Are Tokens?
Before we can master context windows and usage, we need to understand the fundamental unit Claude Code works with: tokens.
Tokens are not what you think
A common misconception is that tokens are characters or words. They're neither. Tokens are chunks of text, typically around 4 characters on average. The exact split depends on the word -- common words like "the" are a single token, while unusual words or code syntax might be broken into several tokens.
| Text | Token count |
|---|---|
| Hello world | 2 tokens |
| backgroundColor | 3 tokens |
| A 20-line React component | ~150-300 tokens |
| A 500-line source file | ~1,500-2,500 tokens |
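The exact split is tokenizer-specific, but the ~4-characters-per-token average makes a handy back-of-envelope estimator. A minimal sketch -- this is the heuristic, not a real tokenizer, and the 4.0 default is an assumption drawn from the average above:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-characters-per-token average.

    Common words compress to a single token while unusual identifiers
    split into several, so treat the result as an order-of-magnitude
    guide, not an exact count.
    """
    return max(1, round(len(text) / chars_per_token))

print(estimate_tokens("Hello world"))  # 11 chars -> ~3 tokens

# A synthetic 500-line source file lands in the table's 1,500-2,500 range
source = "\n".join("const x = 1;" for _ in range(500))
print(estimate_tokens(source))
```

Real tokenizers disagree with this estimate on individual strings (the table shows "Hello world" as 2 tokens, not 3), but across whole files the averages wash out.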
Think of tokens as the "pages in a book" Claude is reading. The context window is how many pages that book can hold. Every prompt you send, every response Claude gives, every file it reads, and every command output -- they all add pages to the book.
Why tokens matter
Tokens determine two critical things in your Claude Code workflow:
- How much Claude Code can "see" -- there's a maximum number of tokens that fit in a single conversation (the context window)
- How much each session costs -- you're billed based on tokens processed
Understanding tokens turns you from someone who uses Claude Code until it "feels slow" into someone who deliberately manages their sessions for maximum efficiency.
Context window sizes by model
Different models have different context windows:
| Model | Context window |
|---|---|
| Haiku 4.5 | 200K tokens |
| Sonnet 4.6 | 1M tokens |
| Opus 4.6 | 1M tokens |
Sonnet and Opus have massive 1M-token windows, but that doesn't mean you should fill them. Conversations work best when they're focused and concise -- regardless of the model's capacity.
Quick reference
Here are some useful conversions to build your intuition:
- 1K tokens is roughly 750 words, about a page of text
- A typical source file (100-200 lines) runs 300-1,000 tokens
- A full conversation (asking Claude Code to build a feature) might use 10K-50K tokens
- Reading a large file (1000+ lines) can easily consume 2K-5K tokens in one go
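These rules of thumb can be bundled into a tiny calculator for sanity-checking a session before you start it. A sketch using the chapter's own approximations -- the ratios are rough averages, not exact figures:

```python
WORDS_PER_KILOTOKEN = 750   # ~750 words per 1K tokens (about a page)
TOKENS_PER_LINE = (3, 5)    # ~3-5 tokens per line of source code

def tokens_to_words(tokens: int) -> int:
    """Approximate word count for a given token count."""
    return tokens * WORDS_PER_KILOTOKEN // 1000

def file_token_range(lines: int) -> tuple[int, int]:
    """Low/high token estimate for a source file of the given length."""
    low, high = TOKENS_PER_LINE
    return lines * low, lines * high

print(tokens_to_words(1000))    # 750 words, about a page of text
print(file_token_range(1000))   # (3000, 5000) -- a 1000-line file
```

Numbers like these are why a single careless "read the whole file" on a large codebase can eat a meaningful slice of even a large context window.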
These numbers add up fast. By the end of this chapter, you'll know exactly how to keep them under control.