DaleSchool

The Power of Context

Beginner · 20 min

Learning Objectives

  • Explain what a context window is
  • Understand the difference between /compact and /clear and use each appropriately
  • Check token usage and manage costs

Working Code

In a Claude Code session, run:

> /cost

You'll see the number of tokens used and the cost for the current session. Now have a longer conversation — read a few files, request some edits, then check /cost again:

> Describe all the files in the src/ directory
> /cost

Notice how the token usage has jumped. Let's clean up the context:

> /compact

Claude summarizes the conversation so far. Check /cost again — you'll see that input tokens decrease for the next request.

Try It Yourself

Follow this sequence in your session:

  1. Check current usage with /cost.
  2. Ask Claude to read a large file.
  3. Check /cost again — how much did the tokens increase?
  4. Run /compact.
  5. Ask a new question and check /cost — is it more efficient than before?
  6. Run /clear to completely reset the conversation.

"Why?" — Why You Need to Understand the Context Window

Every time you send a message, Claude Code includes the entire conversation so far in the request. The amount of text the model can take in at once is called the context window, and Claude's currently holds about 200K tokens.

Problems When Context Grows Too Large

  1. Increased cost — more input tokens means higher API costs.
  2. Slower responses — more text to process means longer wait times.
  3. Lower accuracy — with too much context, important details can get lost.
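The cost problem above can be sketched with a toy simulation (the token counts are illustrative, not real tokenizer output): because every request resends the whole history, input tokens accumulate turn over turn even when each new message is short.

```python
# Toy simulation of a chat session (illustrative token counts).
# Every request sends the entire conversation so far, so billed input
# tokens grow each turn even though the new messages stay short.

turns = [(50, 400), (30, 1200), (40, 300)]  # (user_msg_tokens, reply_tokens)

history = 0             # tokens of accumulated conversation
input_per_request = []  # input tokens billed for each request

for user_tokens, reply_tokens in turns:
    input_per_request.append(history + user_tokens)  # whole history is resent
    history += user_tokens + reply_tokens            # the reply joins the history

print(input_per_request)       # [50, 480, 1720] -- grows every turn
print(sum(input_per_request))  # 2250 input tokens billed for 120 tokens of questions
```

Notice that the three questions themselves total only 120 tokens, yet the session bills 2,250 input tokens: that gap is the accumulated history being resent.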

Context Management Strategies

| Command  | Behavior                                   | When to Use                     |
| -------- | ------------------------------------------ | ------------------------------- |
| /compact | Summarizes and compresses the conversation | When the conversation gets long |
| /clear   | Completely deletes conversation history    | When switching to a new topic   |
| /cost    | Shows token usage                          | To check costs regularly        |
| /context | Shows context size                         | To see how full the window is   |

Rule of thumb: /compact when a task is done, /clear when starting a completely different task.
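The rule of thumb can be written as a tiny decision helper. This is purely illustrative — the command names are Claude Code's, but the function itself is a sketch, not part of the tool:

```python
def suggest_command(task_done: bool, new_topic: bool) -> str:
    """Toy encoding of the rule of thumb above (illustrative only)."""
    if new_topic:
        return "/clear"    # completely different task: drop the history
    if task_done:
        return "/compact"  # same project, task finished: keep a summary
    return "/cost"         # otherwise, just keep an eye on usage

print(suggest_command(task_done=True, new_topic=False))  # /compact
print(suggest_command(task_done=True, new_topic=True))   # /clear
```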

Deep Dive

What exactly is a token?

A token is the basic unit a language model uses to process text. In English, one word is roughly 1–2 tokens; in Korean, one character is roughly 1–3 tokens. The word "hello" is about 1 token.
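For ballpark figures you can use the common "about four characters per token" heuristic for English text. Note this is only a rough sketch — it is not Anthropic's tokenizer, and real tokenizers vary by model:

```python
def estimate_tokens(text: str) -> int:
    """Ballpark estimate using the ~4-characters-per-token rule of thumb.
    Real tokenizers (which vary by model) will give different counts."""
    return max(1, round(len(text) / 4))

print(estimate_tokens("hello"))  # ~1 token, matching the example above
print(estimate_tokens("The quick brown fox jumps over the lazy dog."))  # ~11
```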

The input tokens shown in /cost represent text sent to Claude (conversation history + file contents). Output tokens represent Claude's generated response.
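The arithmetic behind the /cost figure is simple: input and output tokens are billed at separate per-token rates. The rates below are made-up placeholders (check your provider's current pricing); the point is the formula, and that output tokens typically cost several times more per token than input tokens.

```python
# Hypothetical rates in dollars per million tokens -- placeholders only,
# not Anthropic's actual pricing.
INPUT_RATE_PER_MTOK = 3.00
OUTPUT_RATE_PER_MTOK = 15.00

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost = input tokens * input rate + output tokens * output rate."""
    return (input_tokens * INPUT_RATE_PER_MTOK
            + output_tokens * OUTPUT_RATE_PER_MTOK) / 1_000_000

print(f"${session_cost(500_000, 20_000):.2f}")  # $1.80 with these placeholder rates
```

With these numbers, a session that resends lots of history (500K input tokens) spends $1.50 on input alone — which is why /compact and /clear pay off.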

Does /compact lose information?

Yes, to some extent. /compact summarizes the conversation, so details may be lost. If there are important decisions or context, it's best to record them in CLAUDE.md (covered in lesson 08).

Practice

  1. Start a new session and check /cost (it starts at 0).
  2. Ask Claude to read the largest file in your project.
  3. Check the cost change with /cost.
  4. Run /compact, then ask the same question again. Any difference in response quality?

Quiz

Q1. When your conversation is getting long and token usage is high, what's the best command?

  • A) /clear — completely delete the conversation
  • B) /compact — summarize and compress the conversation
  • C) /cost — check the cost
  • D) /help — view help

Further Reading