power user behaviors: top 3 by resolution rate

analysis of the three users with highest resolution rates: precision_pilot (82%), steady_navigator (67%), concise_commander (60.5%).


top 3 ranked

| rank | user | resolution rate | threads | avg turns | avg first msg | domain |
|---|---|---|---|---|---|---|
| 1 | precision_pilot | 82.2% | 90 | 72.9 | 4,280 chars | streaming/sessions |
| 2 | steady_navigator | 67.0% | 1,171 | 36.5 | 1,255 chars | observability, build tooling |
| 3 | concise_commander | 60.5%* | 863 | 86.5 | 1,274 chars | storage engine, data viz |

*concise_commander’s 71.8% figure in first-message-patterns includes COMMITTED threads; the 60.5% used here counts RESOLVED only, per user-comparison.md.
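the RESOLVED-only ranking above can be sketched in a few lines. this is a minimal illustration assuming hypothetical thread records with `user` and `status` fields, not the real user-comparison.md schema:

```python
# sketch: RESOLVED-only resolution rates per user, then ranked.
# thread records and field names are hypothetical, not the real pipeline.
from collections import defaultdict

def resolution_rates(threads):
    """Map each user to resolved_threads / total_threads."""
    totals = defaultdict(int)
    resolved = defaultdict(int)
    for t in threads:
        totals[t["user"]] += 1
        if t["status"] == "RESOLVED":  # COMMITTED would be excluded here
            resolved[t["user"]] += 1
    return {u: resolved[u] / totals[u] for u in totals}

def rank(threads):
    """Users sorted by resolution rate, highest first."""
    return sorted(resolution_rates(threads).items(), key=lambda kv: -kv[1])
```

counting RESOLVED only (and not COMMITTED) is exactly the choice that produces the 60.5% vs 71.8% gap in the footnote.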


the three archetypes

1. the architect (precision_pilot) — 82% resolution

signature behavior: massive context front-loading

precision_pilot writes the longest first messages in the dataset (4,280 chars avg). this is 3.4x the corpus average. threads then run long (72.9 turns) but almost always resolve.

teachable patterns:

  1. front-load everything — don’t make agent guess. dump architecture decisions, file references, constraints, and success criteria upfront.

  2. narrow domain ownership — precision_pilot works 2 domains with very high depth. vocabulary analysis shows unique terms like durable, sse, sessions that don’t appear elsewhere. deep expertise = better steering.

  3. evening work blocks — peaks 19-22h. midnight threads show 91.7% resolution (vs 82.2% overall). focused, uninterrupted time.

  4. architectural framing — messages read like design docs. phrases like “generate a plan for me to feed into another thread”, “update with the new architecture”. treats agent as junior architect, not code monkey.

  5. low steering rate (6.1%) despite long threads — context quality prevents misunderstandings.

precision_pilot formula: extensive context + domain mastery + architectural framing = 82% resolution
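the evening-block claim (19-22h peaks, 91.7% midnight resolution) comes from bucketing threads by start hour. a sketch, assuming hypothetical records with a `start_hour` (0-23) and a boolean `resolved` flag:

```python
# sketch: resolution rate bucketed by thread start hour.
# `start_hour` and `resolved` are assumed fields, not the real schema.
from collections import Counter

def hourly_resolution(threads):
    """Map start hour -> fraction of threads started then that resolved."""
    starts, wins = Counter(), Counter()
    for t in threads:
        starts[t["start_hour"]] += 1
        if t["resolved"]:
            wins[t["start_hour"]] += 1
    return {h: wins[h] / starts[h] for h in starts}

def peak_hours(threads, top_n=3):
    """Hours with the most thread starts (e.g. 19-22 for precision_pilot)."""
    starts = Counter(t["start_hour"] for t in threads)
    return [h for h, _ in starts.most_common(top_n)]
```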


2. the efficient operator (steady_navigator) — 67% resolution

signature behavior: minimal steering, fast completion

steady_navigator has the LOWEST steering rate (2.6%) and shortest resolved threads (47.2 turns avg). gets in, gets out, moves on.

teachable patterns:

  1. interrogative style — 50% of threads start with questions. prompting-styles analysis shows interrogative has 69.3% success rate (highest). asking “how do i X?” creates clearer success criteria than stating “i want X”.

  2. 3:1 approval:steering ratio — approves 3x per steering event. frequent, small positive signals keep agent on track. doesn’t wait until end to confirm.

  3. screenshot-driven workflow — references visual output frequently. “see screenshot”, “look at the component”. grounds abstract problems in concrete artifacts.

  4. polite imperatives — “please look at”, “can you”. low-aggression steering. correction without escalation.

  5. early morning focus — peaks 04-11h. unusual 4-7am activity suggests deep work before interruptions.

  6. low file scatter — works on frontend, observability, build tooling. domains are adjacent enough to share context but distinct enough to avoid confusion.

steady_navigator formula: questions + frequent approval + visual grounding = 67% resolution with minimal effort
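classifying a thread as interrogative (the 69.3%-success style above) only needs a first-message heuristic. a sketch with an illustrative question-word list, not the one the prompting-styles analysis actually used:

```python
# sketch: heuristic interrogative-vs-declarative check for first messages.
# the opener list is illustrative; the real classifier may differ.
QUESTION_OPENERS = ("how", "what", "why", "where", "when", "which",
                    "can", "could", "should", "is", "are", "does")

def is_interrogative(first_message):
    """True if the message ends with '?' or opens with a question word."""
    text = first_message.strip().lower()
    words = text.split()
    first_word = words[0] if words else ""
    return text.endswith("?") or first_word in QUESTION_OPENERS
```

checking only the first word (rather than a prefix match) avoids flagging declaratives like "cancel the build" just because they start with the letters of a question word.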


3. the marathon runner (concise_commander) — 60.5% resolution

signature behavior: relentless persistence with socratic questioning

concise_commander runs the longest threads (86.5 turns avg) with the highest steering volume (940 total). yet 69% of threads that run past 50 turns still resolve. doesn’t abandon when it gets hard.

teachable patterns:

  1. socratic questioning — 23% of messages are questions (vs verbose_explorer 11.9%). phrases like “OK, and what is next?”, “what about X?” keep agent reasoning visible. agent can’t drift silently.

  2. high approval rate — 16% of messages are approvals, highest in dataset. 1.78 approvals per 100 turns. explicit checkpoints = agent knows when on track.

  3. wait interrupts — 20% of steerings are “wait” (vs steady_navigator 1%). catches agent rushing ahead. concise_commander lets agent work but intervenes before wrong path solidifies.

  4. terse messages — 263 char avg (shortest). asks focused questions, doesn’t over-explain. agent has room to think without drowning in context.

  5. single domain depth — storage engine, columnar data, SIMD optimization. vocabulary shows exclusive ownership of terms like storage_optimizer, data_reorg, simd. no other user touches this domain.

  6. never quits — shortcut-patterns analysis shows explicit phrases: “NO QUITTING”, “NO FUCKING SHORTCUTS”, “figure it out”. demands agent persist through difficulty.

  7. bimodal work hours — 09-16 (work) AND 22-00 (late night). marathon sessions happen after midnight.

concise_commander formula: persistence + questioning + domain depth + explicit approval = 60.5% resolution on HARD problems
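the approval, wait-interrupt, and steering counts above imply a per-turn classifier. a crude sketch with illustrative keyword lists (not the taxonomy the analysis actually used):

```python
# sketch: crude classifier for user turns. keyword lists are illustrative.
APPROVALS = ("looks good", "lgtm", "perfect", "great, continue")
STEERS = ("no,", "instead", "don't", "stop")

def classify_turn(message):
    """Return 'wait', 'approval', 'steer', or 'other' for one user turn."""
    text = message.strip().lower()
    if text.startswith("wait"):          # concise_commander-style interrupt
        return "wait"
    if any(p in text for p in APPROVALS):
        return "approval"
    if any(p in text for p in STEERS):
        return "steer"
    return "other"

def approval_steering_ratio(messages):
    """Approvals per steering event (wait interrupts count as steering)."""
    counts = {"approval": 0, "steer": 0, "wait": 0, "other": 0}
    for m in messages:
        counts[classify_turn(m)] += 1
    steering = counts["steer"] + counts["wait"]
    return counts["approval"] / steering if steering else float("inf")
```

on this scheme, concise_commander’s profile is high approval, high wait, terse "other" questions in between.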


cross-cutting teachable patterns

from all three power users:

| pattern | precision_pilot | steady_navigator | concise_commander | teachable lesson |
|---|---|---|---|---|
| file references | yes | yes | yes | always @mention files — +25% success rate |
| domain specialization | 2 | 3 | 4 | own your domain deeply — unique vocabulary = better outcomes |
| consistent approval | moderate | high | high | approve frequently — 2:1 approval:steering minimum |
| question-driven | moderate | high | high | ask questions — interrogative style has 69% success rate |
| low steering overall | 6.1% | 2.6% | 8.2% | steer less by preventing — context quality beats corrections |
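the file-reference lesson can be reproduced with a simple @mention check. a sketch, assuming a hypothetical record shape with `first_message` and `resolved` fields and an assumed @mention pattern:

```python
# sketch: split resolution rate by whether the first message @mentions a file.
# record fields and the regex are assumptions, not the real pipeline.
import re

MENTION_RE = re.compile(r"@[\w./-]+")

def mentions_files(message):
    return bool(MENTION_RE.search(message))

def success_by_mention(threads):
    """{True: rate with @mentions, False: rate without} (None if no sample)."""
    groups = {True: [0, 0], False: [0, 0]}  # [resolved, total]
    for t in threads:
        g = groups[mentions_files(t["first_message"])]
        g[1] += 1
        g[0] += int(t["resolved"])
    return {k: (r / n if n else None) for k, (r, n) in groups.items()}
```

the 41.8%-without vs 66.7%-with split cited later is exactly this kind of two-group comparison.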

behavioral differences that work:

  1. context loading strategy

    • precision_pilot: massive upfront (4k+ chars)
    • steady_navigator: moderate with file refs (1.2k chars)
    • concise_commander: terse with follow-up questions (263 chars)

    all three work. the key is consistency, not volume.

  2. thread length tolerance

    • precision_pilot: commits to 73 turns avg
    • steady_navigator: prefers fast resolution (47 turns)
    • concise_commander: runs marathons (86 turns)

    match length to task complexity. abandoning early = worst outcome.

  3. steering style

    • precision_pilot: course corrections via architectural framing
    • steady_navigator: polite redirects with visual grounding
    • concise_commander: wait interrupts + socratic questions

    all three prevent escalation. none reach stage 4+ (profanity/caps).


anti-patterns (what power users DON’T do)

from comparing to lower-resolution users:

  1. don’t abandon early — UNKNOWN threads avg 16 turns. power users commit.

  2. don’t over-steer — frustrated threads have 3.7x more steering. power users prevent rather than correct.

  3. don’t skip file references — 41.8% success without @mentions vs 66.7% with.

  4. don’t context-dump without structure — 500-1500 char messages have lowest success (42.8%). either be brief OR be exhaustive.

  5. don’t let approval:steering ratio drop below 2:1 — crossing 1:1 = doom spiral territory.
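the 2:1 floor and 1:1 doom-spiral threshold in anti-pattern 5 can be expressed as a simple health check. the thresholds come from the text; the function and labels are illustrative:

```python
# sketch: health check for a thread's approval:steering ratio.
# thresholds (2:1 floor, 1:1 doom spiral) are from the analysis above.
def ratio_health(approvals, steerings):
    """Return 'healthy', 'at-risk', or 'doom-spiral' for one thread."""
    if steerings == 0:
        return "healthy"        # nothing has needed correcting yet
    ratio = approvals / steerings
    if ratio < 1.0:
        return "doom-spiral"    # below 1:1
    if ratio < 2.0:
        return "at-risk"        # below the 2:1 minimum
    return "healthy"
```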


a four-week plan for users wanting to improve resolution rates:

week 1: context quality. front-load architecture decisions, constraints, and @file references in the first message.

week 2: approval cadence. approve small wins often; keep the approval:steering ratio above 2:1.

week 3: steering prevention. replace mid-thread corrections with better upfront context.

week 4: persistence. commit to long threads on hard problems instead of abandoning early.


generated by frances_wiggleman | power user behavior analysis