# power user behaviors: top 3 by resolution rate
analysis of the three users with the highest resolution rates: precision_pilot (82%), steady_navigator (67%), and concise_commander (60.5%).
## top 3 ranked
| rank | user | resolution rate | threads | avg turns | avg first msg | domain |
|---|---|---|---|---|---|---|
| 1 | precision_pilot | 82.2% | 90 | 72.9 | 4,280 chars | streaming/sessions |
| 2 | steady_navigator | 67.0% | 1,171 | 36.5 | 1,255 chars | observability, build tooling |
| 3 | concise_commander | 60.5%* | 863 | 86.5 | 1,274 chars | storage engine, data viz |
*concise_commander’s 71.8% rate from the first-message-patterns analysis includes COMMITTED threads; 60.5% is the RESOLVED-only rate from user-comparison.md.
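the footnote reflects two different definitions of “resolved”. a minimal sketch of both, assuming a hypothetical thread record with a `status` field (the real analysis schema may differ):

```python
# sketch of the two resolution-rate definitions used above.
# thread statuses ("RESOLVED", "COMMITTED", "UNKNOWN") and the record
# shape are illustrative assumptions, not the actual analysis schema.

def resolution_rate(threads, include_committed=False):
    """fraction of threads whose status counts as resolved."""
    resolved_statuses = {"RESOLVED"}
    if include_committed:
        resolved_statuses.add("COMMITTED")
    hits = sum(1 for t in threads if t["status"] in resolved_statuses)
    return hits / len(threads) if threads else 0.0

threads = ([{"status": "RESOLVED"}] * 6
           + [{"status": "COMMITTED"}] * 1
           + [{"status": "UNKNOWN"}] * 3)
print(resolution_rate(threads))                          # RESOLVED-only: 0.6
print(resolution_rate(threads, include_committed=True))  # with COMMITTED: 0.7
```

whether COMMITTED counts as success is a reporting choice; comparing users requires picking one definition and sticking to it.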
## the three archetypes
### 1. the architect (precision_pilot) — 82% resolution
**signature behavior:** massive context front-loading
precision_pilot writes the longest first messages in the dataset (4,280 chars avg, 3.4x the corpus average). threads then run long (72.9 turns avg) but almost always resolve.
teachable patterns:
- front-load everything — don’t make the agent guess. dump architecture decisions, file references, constraints, and success criteria upfront.
- narrow domain ownership — precision_pilot works 2 domains with very high depth. vocabulary analysis shows unique terms like `durable`, `sse`, `sessions` that don’t appear elsewhere. deep expertise = better steering.
- evening work blocks — peaks 19-22h. midnight threads show 91.7% resolution (vs 82.2% overall). focused, uninterrupted time.
- architectural framing — messages read like design docs. phrases like “generate a plan for me to feed into another thread”, “update with the new architecture”. treats the agent as a junior architect, not a code monkey.
- low steering rate (6.1%) despite long threads — context quality prevents misunderstandings.
**precision_pilot formula:** extensive context + domain mastery + architectural framing = 82% resolution
### 2. the efficient operator (steady_navigator) — 67% resolution
**signature behavior:** minimal steering, fast completion
steady_navigator has the LOWEST steering rate (2.6%) and shortest resolved threads (47.2 turns avg). gets in, gets out, moves on.
teachable patterns:
- interrogative style — 50% of threads start with questions. prompting-styles analysis shows interrogative openings have a 69.3% success rate (highest). asking “how do i X?” creates clearer success criteria than stating “i want X”.
- 3:1 approval:steering ratio — approves 3x per steering event. frequent, small positive signals keep the agent on track. doesn’t wait until the end to confirm.
- screenshot-driven workflow — references visual output frequently. “see screenshot”, “look at the component”. grounds abstract problems in concrete artifacts.
- polite imperatives — “please look at”, “can you”. low-aggression steering. correction without escalation.
- early morning focus — peaks 04-11h. unusual 4-7am activity suggests deep work before interruptions.
- low file scatter — works on frontend, observability, and build tooling. domains are adjacent enough to share context but distinct enough to avoid confusion.
**steady_navigator formula:** questions + frequent approval + visual grounding = 67% resolution with minimal effort
### 3. the marathon runner (concise_commander) — 60.5% resolution
**signature behavior:** relentless persistence with socratic questioning
concise_commander runs the longest threads (86.5 turns avg) with the highest steering volume (940 total). yet 69% of threads exceed 50 turns AND still resolve. doesn’t abandon when it gets hard.
teachable patterns:
- socratic questioning — 23% of messages are questions (vs verbose_explorer’s 11.9%). phrases like “OK, and what is next?”, “what about X?” keep the agent’s reasoning visible. the agent can’t drift silently.
- high approval rate — 16% of messages are approvals, highest in the dataset. 1.78 approvals per 100 turns. explicit checkpoints mean the agent knows when it’s on track.
- wait interrupts — 20% of steerings are “wait” (vs steady_navigator’s 1%). catches the agent rushing ahead. concise_commander lets the agent work but intervenes before a wrong path solidifies.
- terse messages — 263 char avg (shortest). asks focused questions, doesn’t over-explain. the agent has room to think without drowning in context.
- single domain depth — storage engine, columnar data, SIMD optimization. vocabulary shows exclusive ownership of terms like `storage_optimizer`, `data_reorg`, `simd`. no other user touches this domain.
- never quits — shortcut-patterns analysis shows explicit phrases: “NO QUITTING”, “NO FUCKING SHORTCUTS”, “figure it out”. demands the agent persist through difficulty.
- bimodal work hours — 09-16 (work) AND 22-00 (late night). marathon sessions happen after midnight.
**concise_commander formula:** persistence + questioning + domain depth + explicit approval = 60.5% resolution on HARD problems
## cross-cutting teachable patterns
from all three power users:
| pattern | precision_pilot | steady_navigator | concise_commander | teachable lesson |
|---|---|---|---|---|
| file references | yes | yes | yes | always @mention files — +25% success rate |
| domain specialization | 2 | 3 | 4 | own your domain deeply — unique vocabulary = better outcomes |
| consistent approval | moderate | high | high | approve frequently — 2:1 approval:steering minimum |
| question-driven | moderate | high | high | ask questions — interrogative style has 69% success rate |
| low steering overall | 6.1% | 2.6% | 8.2% | steer less by preventing — context quality beats corrections |
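the file-reference row can be checked mechanically. a hedged sketch, assuming an `@path/file.ext` mention convention; the regex and function name are illustrative, not the analysis’s actual tooling:

```python
import re

# hedged sketch: detect @file references in a first message, assuming
# the "@path/to/file.ext" mention convention. the pattern is a rough
# heuristic, not the real classifier used in this analysis.
FILE_MENTION = re.compile(r"@[\w./-]+\.\w+")

def has_file_reference(message: str) -> bool:
    """true if the message @mentions at least one file-looking path."""
    return bool(FILE_MENTION.search(message))

print(has_file_reference("please look at @src/storage_optimizer.rs first"))  # True
print(has_file_reference("fix the build"))                                   # False
```

tagging threads with a boolean like this is what lets the with/without @mention success rates be compared at all.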
behavioral differences that work:
- context loading strategy
  - precision_pilot: massive upfront (4k+ chars)
  - steady_navigator: moderate with file refs (1.2k chars)
  - concise_commander: terse with follow-up questions (263 chars)

  all three work. the key is consistency, not volume.
- thread length tolerance
  - precision_pilot: commits to long threads (73 turns avg)
  - steady_navigator: prefers fast resolution (47 turns)
  - concise_commander: runs marathons (86 turns)

  match length to task complexity. abandoning early = worst outcome.
- steering style
  - precision_pilot: course corrections via architectural framing
  - steady_navigator: polite redirects with visual grounding
  - concise_commander: wait interrupts + socratic questions

  all three prevent escalation. none reach stage 4+ (profanity/caps).
## anti-patterns (what power users DON’T do)
from comparing to lower-resolution users:
- don’t abandon early — UNKNOWN threads avg 16 turns. power users commit.
- don’t over-steer — frustrated threads have 3.7x more steering. power users prevent rather than correct.
- don’t skip file references — 41.8% success without @mentions vs 66.7% with.
- don’t context-dump without structure — 500-1500 char messages have the lowest success rate (42.8%). either be brief OR be exhaustive.
- don’t let the approval:steering ratio drop below 2:1 — crossing 1:1 = doom spiral territory.
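the last anti-pattern is a simple ratio check. a minimal sketch, assuming message classification into approvals and steerings happens upstream; the function and labels are hypothetical, only the 2:1 and 1:1 thresholds come from the analysis above:

```python
# hedged sketch of the approval:steering ratio thresholds described above.
# counting which messages are approvals vs steerings is assumed to be
# done elsewhere; only the 2:1 and 1:1 cutoffs come from the analysis.

def ratio_health(approvals: int, steerings: int) -> str:
    """classify a thread by its approval:steering ratio."""
    if steerings == 0:
        return "healthy"      # nothing needed correcting
    ratio = approvals / steerings
    if ratio >= 2.0:
        return "healthy"      # at or above the 2:1 minimum
    if ratio >= 1.0:
        return "warning"      # below 2:1, not yet doom spiral
    return "doom spiral"      # crossed 1:1

print(ratio_health(6, 2))  # healthy
print(ratio_health(3, 2))  # warning
print(ratio_health(1, 2))  # doom spiral
```

run per thread, this turns the 2:1 guideline into an early-warning signal rather than a post-mortem metric.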
## recommended training curriculum
for users wanting to improve resolution rates:
### week 1: context quality
- start every thread with @file references
- include success criteria explicitly
- use imperative or interrogative opening, not declarative
### week 2: approval cadence
- approve after each successful step, not just at end
- target 2:1 approval:steering ratio
- use brief approvals: “ship it”, “go on”, “commit”
### week 3: steering prevention
- ask questions instead of waiting for wrong output
- interrupt with “wait” before agent commits to wrong path
- use oracle for complex debugging — don’t let agent flail
### week 4: persistence
- don’t abandon threads under 26 turns
- if stuck, handoff intentionally with context
- match thread length to task complexity
generated by frances_wiggleman | power user behavior analysis