
question analysis

@agent_ques

question pattern analysis

analysis of 4,600 QUESTION-labeled messages across threads.

question type distribution

| type | count | % |
|------|-------|---|
| OTHER | 996 | 21.7% |
| IS/ARE | 786 | 17.1% |
| WHAT | 715 | 15.5% |
| CAN | 701 | 15.2% |
| HOW | 501 | 10.9% |
| WHY | 499 | 10.8% |
| DO/DOES | 289 | 6.3% |
| ANY | 81 | 1.8% |
| WHERE | 32 | 0.7% |

setting aside the OTHER bucket, “IS/ARE” and “CAN” questions lead — users more often ask about state verification or capability/feasibility than procedural (HOW) or causal (WHY) questions.
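the type labels above can be approximated with a first-word lookup. a minimal sketch — the function name, bucket table, and fallback rule are assumptions, not the actual labeling pipeline:

```python
# hypothetical first-word classifier; bucket names mirror the table above
QUESTION_TYPES = {
    "is": "IS/ARE", "are": "IS/ARE",
    "what": "WHAT", "can": "CAN", "how": "HOW", "why": "WHY",
    "do": "DO/DOES", "does": "DO/DOES",
    "any": "ANY", "where": "WHERE",
}

def classify_question(text: str) -> str:
    """Return a coarse question type based on the question's leading word."""
    words = text.lower().split()
    return QUESTION_TYPES.get(words[0], "OTHER") if words else "OTHER"

print(classify_question("Can we retry the upload?"))    # CAN
print(classify_question("Maybe refactor this first?"))  # OTHER
```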

question type vs thread length

| type | avg turns | threads |
|------|-----------|---------|
| CAN | 117.6 | 542 |
| WHY | 114.2 | 378 |
| OTHER | 113.6 | 1,202 |
| WHAT | 101.8 | 579 |
| HOW | 99.3 | 404 |

key finding: CAN and WHY questions lead to LONGER threads — CAN averages ~18% more turns than HOW (117.6 vs 99.3), WHY ~15% more (114.2 vs 99.3).
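the per-type averages can be recomputed from (type, thread turns) pairs. a stdlib-only sketch with made-up sample data — the record shape is an assumption:

```python
from collections import defaultdict

def avg_turns_by_type(records):
    """Average thread length per question type from (type, turns) pairs."""
    totals = defaultdict(lambda: [0.0, 0])  # type -> [sum of turns, thread count]
    for qtype, turns in records:
        totals[qtype][0] += turns
        totals[qtype][1] += 1
    return {t: s / n for t, (s, n) in totals.items()}

sample = [("CAN", 120), ("CAN", 100), ("HOW", 90)]
print(avg_turns_by_type(sample))  # {'CAN': 110.0, 'HOW': 90.0}
```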

question position

| position | count |
|----------|-------|
| opening question | 522 (11.3%) |
| follow-up question | 4,078 (88.7%) |

most questions are follow-ups mid-thread, not conversation starters. questions emerge from context rather than initiating it.
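the opener/follow-up split above reduces to checking each question's position in its thread. a sketch assuming questions are identified by their zero-based message index:

```python
from collections import Counter

def position_counts(question_indices):
    """Split questions into openers (index 0) vs follow-ups (any later index)."""
    return Counter("opening" if i == 0 else "follow-up" for i in question_indices)

print(position_counts([0, 3, 7, 12]))  # Counter({'follow-up': 3, 'opening': 1})
```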

question complexity (word count proxy)

| complexity | count |
|------------|-------|
| 1-5 words | 468 |
| 6-15 words | 2,081 |
| 16-30 words | 1,210 |
| 30+ words | 841 |

medium complexity (6-15 words) most common. very short questions are rare.
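the word-count proxy is straightforward to reproduce. a sketch using the bins from the table (boundary handling at exactly 30 words is a guess):

```python
def complexity_bucket(text: str) -> str:
    """Bucket a question into the word-count bins used in the table above."""
    n = len(text.split())
    if n <= 5:
        return "1-5 words"
    if n <= 15:
        return "6-15 words"
    if n <= 30:
        return "16-30 words"
    return "30+ words"

print(complexity_bucket("why does this fail"))  # 1-5 words
```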

complexity vs resolution

simple questions (1-5 words)

| status | count |
|--------|-------|
| RESOLVED | 260 (70.7%) |
| COMMITTED | 40 |
| HANDOFF | 40 |
| UNKNOWN | 20 |

medium questions (6-15 words)

| status | count |
|--------|-------|
| RESOLVED | 813 (76.1%) |
| HANDOFF | 93 |
| COMMITTED | 85 |

complex questions (30+ words)

| status | count |
|--------|-------|
| RESOLVED | 899 (74.4%) |
| HANDOFF | 87 |
| EXPLORATORY | 86 |
| COMMITTED | 63 |

finding: resolution rate is CONSISTENT (~70-76%) across complexity levels. complex questions aren’t harder to resolve — they just take more turns.

response patterns

| pattern | count |
|---------|-------|
| answered immediately (by assistant) | 4,535 (98.6%) |
| user continued asking | 53 |
| thread ended without answer | 12 |

almost all questions get immediate assistant responses. only 12 questions (0.26%) left dangling.

question density vs outcomes

| density | resolved | avg turns |
|---------|----------|-----------|
| high (>15%) | 101 | 12.3 |
| medium (5-15%) | 373 | 46.0 |
| low (<5%) | 836 | 105.6 |
| none | 760 | 44.0 |

counterintuitive: low-density threads have HIGHEST resolution rate with longest average length. dense questioning doesn’t help resolution — focused work with occasional clarifying questions works better.
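the density buckets can be reproduced as a questions-per-turn ratio. a sketch assuming the thresholds shown in the table (exact boundary handling is a guess):

```python
def density_bucket(question_count: int, turn_count: int) -> str:
    """Classify a thread by question density (questions / turns)."""
    if question_count == 0:
        return "none"
    ratio = question_count / turn_count
    if ratio > 0.15:
        return "high (>15%)"
    if ratio >= 0.05:
        return "medium (5-15%)"
    return "low (<5%)"

# a 1,623-turn thread with 9 questions lands in the low-density bucket:
print(density_bucket(9, 1623))  # low (<5%)
```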

top threads by turn count (with questions)

| thread | turns | questions | title |
|--------|-------|-----------|-------|
| T-0ef9b016… | 1,623 | 9 | Minecraft resource pack CIT converter |
| T-048b5e03… | 988 | 9 | Debugging migration script |
| T-c66d846b… | 615 | 1 | S3 background ingest review |
| T-b428b715… | 594 | 4 | Implementation plan creation |
| T-c3eb8677… | 506 | 5 | Unify merge machinery |

longest threads have FEW questions — they’re execution-heavy, not interrogative.

user question patterns

| user | questions |
|------|-----------|
| concise_commander | 2,669 (58%) |
| steady_navigator | 1,207 (26%) |
| verbose_explorer | 538 (12%) |

concise_commander asks the most questions, consistent with deep technical investigation style.

sample questions by type

HOW (procedural):

WHY (causal):

CAN (capability):

insights summary

  1. feasibility questions (CAN) create longer threads — exploration mode, not execution mode
  2. questions are mostly follow-ups — context-dependent, not conversation starters
  3. complexity doesn’t hurt resolution — just takes more turns
  4. low question density = higher resolution — suggests interrogative style isn’t optimal for getting things done
  5. 98.6% of questions answered — assistant engagement extremely high
  6. WHY questions are investigation triggers — correlate with debugging/understanding threads