principle · typescript · Tip
System prompt placement and formatting affects instruction following
system-prompt, prompt-engineering, instruction-following, primacy, recency, formatting
Problem
Vague or poorly structured system prompts lead to inconsistent model behavior, especially for constrained tasks like JSON extraction, role-playing, or refusal handling. Instructions buried in long prose are frequently ignored.
Solution
Place the most critical instructions at the start and end of the system prompt (primacy and recency effects). Use markdown headers and bullet points to separate distinct rules. For hard constraints, use explicit negative examples ('Never respond with...'). Keep the system prompt focused — one clear role per prompt.
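The placement strategy above can be sketched as a small builder. This is an illustrative example, not a library API: the `PromptRules` shape and function name are made up for this sketch.

```typescript
// Sketch: critical rules first (primacy) and restated last (recency),
// with markdown headers separating rule groups. All names are illustrative.
interface PromptRules {
  role: string;              // one clear role per prompt
  hardConstraints: string[]; // stated up front and repeated at the end
  guidelines: string[];      // softer rules in the middle
}

function buildSystemPrompt({ role, hardConstraints, guidelines }: PromptRules): string {
  const bullets = (items: string[]) => items.map((i) => `- ${i}`).join("\n");
  return [
    `# Role\n${role}`,                          // primacy: role comes first
    `# Hard constraints\n${bullets(hardConstraints)}`,
    `# Guidelines\n${bullets(guidelines)}`,
    `# Reminder\n${bullets(hardConstraints)}`,  // recency: restate hard constraints
  ].join("\n\n");
}

const prompt = buildSystemPrompt({
  role: "You are a JSON extraction service.",
  hardConstraints: [
    "Respond with a single JSON object and nothing else.",
    "Never respond with markdown, prose, or apologies.", // explicit negative example
  ],
  guidelines: ["Use null for fields that are missing from the input."],
});
```

Repeating the hard constraints at the end costs a few extra tokens per request, which is usually a good trade for constrained tasks like JSON extraction.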
Why
Attention over a long context is not uniform: instructions near the beginning and near the end of the prompt receive disproportionate weight, while material in the middle is the most likely to be overlooked. Clear formatting (headers, bullets) creates strong token boundaries that help the model distinguish between rule types.
Gotchas
- Anthropic's Claude follows instructions placed in the system prompt more reliably than the same instructions injected into a user message
- Very long system prompts consume expensive input tokens on every request
- Conflicting instructions between system and user prompts are resolved unpredictably — keep them consistent
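The token-cost gotcha above can be made concrete with a rough estimator. Both the chars-per-token ratio and the helper names are assumptions for illustration; for real budgeting, use the provider's tokenizer or usage metrics.

```typescript
// Rough illustration of the recurring cost of a long system prompt.
// ~4 characters per token is a crude English-text heuristic, not a tokenizer.
const CHARS_PER_TOKEN = 4;

function estimateSystemPromptTokens(systemPrompt: string): number {
  return Math.ceil(systemPrompt.length / CHARS_PER_TOKEN);
}

// The system prompt is billed as input on every request, so overhead
// scales linearly with request volume.
function monthlyTokenOverhead(systemPrompt: string, requestsPerMonth: number): number {
  return estimateSystemPromptTokens(systemPrompt) * requestsPerMonth;
}
```

For example, a 4,000-character system prompt (~1,000 tokens by this heuristic) served a million times a month adds roughly a billion input tokens of overhead, which is often the strongest argument for keeping the prompt focused.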
Context
Designing prompts for production LLM features