common user mistakes: patterns and fixes
derived from analysis of 4,656 amp threads. focuses on user-side patterns that correlate with lower resolution rates, higher steering, or frustrated outcomes.
summary
most user mistakes fall into three categories:
- prompting anti-patterns — how instructions are phrased
- context failures — missing information the agent needs
- workflow anti-patterns — behaviors that reduce success rates
prompting anti-patterns
1. POLITE REQUEST TRAP
the mistake: phrasing commands as polite requests
❌ "please fix the type errors if you could"
❌ "it would be nice if you could update the tests"
❌ "maybe look at the failing lint?"
why it fails: 12.7% compliance rate for requests vs 22.8% for direct verbs. hedging language signals optionality.
the fix: use direct imperative verbs
✓ "fix the type errors"
✓ "update the tests"
✓ "run lint and fix violations"
2. NEGATIVE FRAMING
the mistake: telling agent what NOT to do instead of what TO do
❌ "don't use useEffect here"
❌ "avoid adding new files"
❌ "never change the interface"
why it fails: 20% compliance on prohibitions vs 22.8% on actions. negatives get lost in multi-step reasoning.
the fix: frame positively with explicit alternatives
✓ "use useMemo instead of useEffect"
✓ "add this to the existing file at X"
✓ "keep the interface unchanged; only modify implementation"
3. CONSTRAINT BURIAL
the mistake: embedding critical constraints in long paragraphs
❌ "i need you to implement the feature and make sure it follows the existing patterns and also please only modify the service layer, not the controllers, and use the existing types we already have defined..."
why it fails: 16.4% compliance rate on constraints. long context dilutes critical requirements.
the fix: separate constraints as explicit bullet points
✓ "implement the feature with these constraints:
- ONLY modify service layer (not controllers)
- use existing types from types.ts
- follow patterns from similar-service.ts"
4. OUTPUT LOCATION AMBIGUITY
the mistake: not specifying exactly where to write output
❌ "create a test file for this"
❌ "add some documentation"
❌ "write a migration"
why it fails: 8.3% compliance rate on output directives. agent guesses wrong locations.
the fix: give exact file paths
✓ "create test at src/services/__tests__/auth.test.ts"
✓ "add documentation to docs/api/auth.md"
✓ "write migration at db/migrations/002_add_users.sql"
context failures
5. MISSING FILE REFERENCES
the mistake: describing code without pointing to it
❌ "fix the authentication bug"
❌ "update the component that handles user profiles"
❌ "there's a race condition somewhere in the worker"
why it fails: no file references means agent must guess which files are relevant. threads with @path/to/file in opener show +25pp success rate.
the fix: include explicit file references
✓ "fix the authentication bug in @src/auth/middleware.ts"
✓ "update @src/components/UserProfile.tsx to handle loading state"
✓ "race condition in @worker/processor.ts — the locks around L45-67"
6. ASSUMING PRIOR CONTEXT
the mistake: referencing work from previous threads without summary
❌ "continue from where we left off"
❌ "you know what i mean"
❌ "like we discussed"
why it fails: each thread is fresh context. agent has no memory of previous conversations.
the fix: provide minimal but complete context
✓ "continue the refactor from T-abc123 — we moved auth to middleware, now need to update the routes"
✓ "using the pattern from @src/lib/existing.ts, apply same approach to new.ts"
7. ERROR DUMP WITHOUT FOCUS
the mistake: pasting full error logs without highlighting the actual issue
❌ [pastes 500 lines of stack trace]
"fix this"
why it fails: agent may focus on noise instead of signal. no guidance on what matters.
the fix: include error PLUS the specific line/area of concern
✓ "test fails with:
`TypeError: Cannot read property 'id' of undefined at L45`
the issue seems to be in the user object destructuring"
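for reference, a sketch of the failure that error describes (names hypothetical): destructuring off a possibly-undefined object, plus the guarded version to ask for.

```ts
type User = { id: string };

// before: throws at runtime when callers pass nothing,
// producing the kind of TypeError quoted above
function getIdUnsafe(user?: User): string {
  const { id } = user as User; // the unguarded destructure
  return id;
}

// after: guard first so the failure is explicit
function getId(user?: User): string {
  if (!user) {
    throw new Error("getId called without a user");
  }
  const { id } = user;
  return id;
}
```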
8. NO VERIFICATION CRITERIA
the mistake: requesting work without defining “done”
❌ "make it work"
❌ "fix the tests"
❌ "clean this up"
why it fails: leads to PREMATURE_COMPLETION. agent declares done without meeting implicit expectations.
the fix: specify how to verify completion
✓ "fix the tests — run `pnpm test auth` to verify"
✓ "clean up: should pass lint and have no type errors"
✓ "make it work: should return status 200 with body matching schema"
workflow anti-patterns
9. THREAD ABANDONMENT
the mistake: starting threads and leaving before resolution
thread → 3 turns → user leaves
thread → 5 turns → handoff without closure
why it fails: 48% abandonment rate in threads with NO steering vs 4-5% in steered threads. abandonment ≠ failure — but it wastes tokens and fragments knowledge.
the stats:
- threads <10 turns: 14% success rate
- threads 26-50 turns: 75% success rate
- handoff orphan rate: 62.5%
the fix: commit to threads or explicitly close them
✓ after resolution: "ship it" / "commit and push" / "lgtm"
✓ if handing off: "handing this to T-xyz123 for completion"
✓ if abandoning: at least note the state and why you're stopping
10. ORACLE AS RESCUE ONLY
the mistake: only consulting oracle when already stuck
thread: 40 turns of debugging
user: "ask oracle what's wrong"
why it fails: 46% of FRUSTRATED threads used oracle vs 25% of RESOLVED. oracle doesn't cause frustration; the correlation reflects last-resort use, after the thread is already off the rails.
the fix: use oracle proactively for planning
✓ thread start: "consult oracle on architecture before implementing"
✓ before complexity: "check with oracle if this approach is sound"
❌ not: waiting until 30 turns of failure to ask
11. STEERING WITHOUT APPROVAL
the mistake: only providing corrections, never confirmations
user: "no, wrong"
user: "not like that"
user: "still wrong"
user: "ugh, no"
why it fails: approval:steering ratio < 1:1 correlates with FRUSTRATED outcome. agent needs positive signal to know what’s working.
the stats:
- ratio >4:1 → COMMITTED threads
- ratio <1:1 → FRUSTRATED threads
the fix: balance corrections with approvals
✓ "yes, that part is right — but fix the error handling"
✓ "good, keep going"
✓ "lgtm so far, now do X"
12. EVENING SESSION START
the mistake: starting complex work during low-performance hours
the stats:
- 2-5am: 60.4% resolution rate (BEST)
- 6-9pm: 27.5% resolution rate (WORST)
- weekend: +5.2pp vs weekday
why it fails: unclear — possibly user fatigue, context switching, or distraction.
the fix: batch complex agent work for focused sessions
✓ queue hard problems for morning
✓ use evening for exploration/reading, not implementation
✓ weekend sessions show better outcomes
13. MEGA-CONTEXT FRONTLOAD
the mistake: dumping massive context in first message
❌ "here's the entire architecture, all the files, the history,
the constraints, the edge cases, the future plans..."
[2000 words of context]
"now fix the bug"
why it fails: high initial context correlates with more steering. agent may latch onto wrong details.
the fix: start minimal, let agent discover
✓ "fix auth bug in @middleware.ts — login returns 401 when should be 200"
[agent reads file, asks clarifying questions if needed]
quick reference: the 13 mistakes
| mistake | fix |
|---|---|
| polite requests | use direct verbs |
| negative framing | state what TO do |
| buried constraints | bullet points |
| ambiguous output location | exact file paths |
| missing file references | use @path/to/file |
| assuming prior context | summarize in-thread |
| error dump without focus | highlight specific line |
| no verification criteria | define how to verify |
| thread abandonment | commit to closure |
| oracle as rescue | use proactively |
| steering without approval | balance with confirmations |
| evening sessions | batch for focused time |
| mega-context frontload | start minimal |
success pattern summary
the inverse of these mistakes = high-success behaviors:
- direct imperative verbs in opener
- file references (@path) in first message
- verification command specified
- approval:steering ratio > 2:1
- 26-50 turn persistence on complex tasks
- oracle at planning, not rescue
- constraints as bullets, not buried prose
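put together, a high-success opener might read like this (paths, test command, and expected behavior are hypothetical):

```
fix the login redirect bug in @src/auth/middleware.ts

constraints:
- ONLY modify the middleware (not the routes)
- keep the session interface unchanged

done when: `pnpm test auth` passes and login redirects to /dashboard
```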
recovery: when you’ve made a mistake
already in a struggling thread? recovery steps:
- pause and reframe: “let me restart the instruction clearly”
- provide missing context: “here are the files that matter: @X, @Y”
- give explicit constraint: “constraint: do NOT modify Z”
- define done: “success = passes this test command”
- approve what’s working: “yes, keep that part”
87% of steered threads recover. the doom spiral only happens when steering cascades without any approval signal.