
Chapter 5: Context & Token Mastery

Writing Efficient Prompts

Every token in your prompt is a choice. The techniques below help you write prompts that are both effective and efficient -- getting better results while using fewer tokens.

1. Use @ instead of pasting

When you need Claude to look at a file, reference it with @ instead of copying and pasting the contents:

@src/app/page.tsx explain the state management in this file

Claude reads the file on-demand. If you paste the file contents into your prompt, the tokens are consumed whether or not Claude needed to see every line. The @ reference lets Claude read exactly what it needs.

2. Let Claude read, don't describe

You do not need to describe your code to Claude. It can read the actual source.

Instead of:

"I have a function called addTodo that takes a title, category, dueDate, and priority. It creates a new todo object using generateId() and spreads the parameters into it, then prepends it to the todos array..."

Just ask:

"What does the addTodo function do?"

The description wastes tokens because Claude will read the file anyway. Worse, your description might be slightly inaccurate, and a wrong description confuses Claude more than no description at all.

3. Use CLAUDE.md to avoid repeating context

If you find yourself saying the same thing in multiple prompts -- "remember, we use Tailwind, not CSS modules" or "follow the existing pattern of callback props" -- that context belongs in CLAUDE.md.

Put it there once. It loads every session. You never have to type it again.

💡Info

CLAUDE.md is the most token-efficient way to give Claude persistent context. One line in CLAUDE.md replaces hundreds of repeated prompt tokens across your sessions.
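A CLAUDE.md capturing the conventions mentioned above might look like this (the contents are illustrative, not from a real project):

```markdown
# Project conventions

- Styling: Tailwind utility classes only. Do not use CSS modules.
- Components: follow the existing pattern of callback props
  (e.g. onDelete, onToggle) rather than passing state setters down.
- IDs: create new item IDs with the existing generateId() helper.
```

Each line here replaces a reminder you would otherwise retype in every session.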

4. Be specific but concise

Compare these two prompts:

15 tokens: "Add a delete button to TodoList that calls onDelete with the todo ID"

45 tokens: "I want you to add a way for users to delete items from the todo list. There should be a button for each item, and when they click it, it should remove that item by calling the onDelete function with the item's ID."

The first prompt is a third of the length. It tells Claude exactly what to build, where to put it, and how it should work. The second prompt wanders through the same requirements with three times the words.

Specificity beats verbosity. Give Claude the constraints it needs, skip the filler.

5. Break large tasks into focused prompts

One prompt asking for five features generates a massive response. Claude has to hold all five features in mind simultaneously, increasing the chance of errors or missed details.

Five focused prompts generate five targeted responses. Total tokens might be similar, but accuracy is much higher because Claude focuses on one thing at a time.

  1. "Add a delete button to each todo item"
  2. "Add a confirmation dialog before deleting"
  3. "Add an undo option after deletion"
  4. "Add keyboard shortcut (Delete key) for the selected todo"
  5. "Add bulk delete for completed todos"

Each prompt is clear, reviewable, and testable on its own. If prompt 3 does not work perfectly, you iterate on just that piece without re-generating prompts 1, 2, 4, and 5.
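To see why focused prompts are easier to verify, consider that the state updates behind prompts 1 and 5 each reduce to a small function you can test on its own (the Todo shape and function names here are illustrative sketches, not code from the course project):

```typescript
type Todo = { id: string; title: string; completed: boolean };

// Prompt 1's core change: remove the todo whose delete button was clicked.
function deleteTodo(todos: Todo[], id: string): Todo[] {
  return todos.filter((todo) => todo.id !== id);
}

// Prompt 5's core change: bulk delete for completed todos.
function deleteCompleted(todos: Todo[]): Todo[] {
  return todos.filter((todo) => !todo.completed);
}
```

Because each prompt maps to one narrow change like this, a failure in one step never forces you to redo the others.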

6. Use XML tags for structure

When you need to include errors, requirements, or reference content in your prompt, wrap them in XML tags:

Fix this error:
<error>
TypeError: Cannot read properties of undefined (reading 'map')
  at TodoList (src/components/TodoList.tsx:14:22)
</error>

The tags mark the boundary between your instruction and the data. Without them, Claude has to infer where your instruction ends and the error text begins, and that ambiguity can lead to misreadings.

Quick reference

| Instead of... | Try... | Why |
| --- | --- | --- |
| Pasting file contents | @filepath | On-demand read, no duplication |
| Describing code verbally | "What does X do?" | Claude reads the actual code |
| Repeating conventions | Put it in CLAUDE.md | Loaded once, used everywhere |
| One massive prompt | Multiple focused prompts | Better accuracy, easier to review |
| Vague instructions | Specific with constraints | Fewer back-and-forth iterations |
Important

The most token-efficient prompt is one that gets the right answer on the first try. Clarity saves more tokens than brevity. A 20-token specific prompt that works is cheaper than a 10-token vague prompt that needs 3 follow-ups.