question pattern analysis
analysis of 4,600 QUESTION-labeled messages across threads.
question type distribution
| type | count | % |
|---|---|---|
| OTHER | 996 | 21.7% |
| IS/ARE | 786 | 17.1% |
| WHAT | 715 | 15.5% |
| CAN | 701 | 15.2% |
| HOW | 501 | 10.9% |
| WHY | 499 | 10.8% |
| DO/DOES | 289 | 6.3% |
| ANY | 81 | 1.8% |
| WHERE | 32 | 0.7% |
setting aside the catch-all OTHER bucket, IS/ARE, WHAT, and CAN questions lead the distribution: users more often ask about capability/feasibility or state verification than procedural (HOW) or causal (WHY) questions.
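the report doesn't say how messages were assigned a type; one plausible reconstruction is a first-token heuristic, sketched below. `TYPE_KEYWORDS` and `question_type` are illustrative names, not the actual labeling pipeline.

```python
import re

# Hypothetical first-token heuristic for typing questions. This is an
# illustrative reconstruction; the real labeling method is not stated
# in the report.
TYPE_KEYWORDS = {
    "what": "WHAT", "how": "HOW", "why": "WHY", "where": "WHERE",
    "can": "CAN", "could": "CAN",
    "is": "IS/ARE", "are": "IS/ARE",
    "do": "DO/DOES", "does": "DO/DOES",
    "any": "ANY",
}

def question_type(text: str) -> str:
    """Classify a question by its first word; anything unmatched is OTHER."""
    match = re.match(r"\W*(\w+)", text.lower())
    first = match.group(1) if match else ""
    return TYPE_KEYWORDS.get(first, "OTHER")

print(question_type("Can you use the real table we tested with?"))  # CAN
print(question_type("Seems broken, thoughts?"))                     # OTHER
```

the large OTHER share (21.7%) is consistent with this kind of heuristic: any question not opening with a recognized keyword falls through to the catch-all bucket.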
question type vs thread length
| type | avg turns | threads |
|---|---|---|
| CAN | 117.6 | 542 |
| WHY | 114.2 | 378 |
| OTHER | 113.6 | 1202 |
| WHAT | 101.8 | 579 |
| HOW | 99.3 | 404 |
key finding: CAN and WHY questions lead to LONGER threads (roughly 15-18% more turns on average than HOW questions).
- CAN questions often involve exploration/experimentation cycles
- WHY questions require causal investigation, often branching
- HOW questions are more procedural, resolved faster
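the averages above can be reproduced with a simple group-by over thread records. a minimal sketch, assuming each record carries its question type and turn count (the `qtype` and `turns` field names are hypothetical, not the report's actual schema):

```python
from collections import defaultdict

def avg_turns_by_type(threads):
    """Group threads by question type and return {type: (avg_turns, n_threads)}."""
    totals = defaultdict(lambda: [0, 0])  # qtype -> [sum_of_turns, thread_count]
    for t in threads:
        totals[t["qtype"]][0] += t["turns"]
        totals[t["qtype"]][1] += 1
    return {q: (s / n, n) for q, (s, n) in totals.items()}

# toy records for illustration only; not the report's data
threads = [
    {"qtype": "CAN", "turns": 120},
    {"qtype": "CAN", "turns": 115},
    {"qtype": "HOW", "turns": 99},
]
print(avg_turns_by_type(threads))  # {'CAN': (117.5, 2), 'HOW': (99.0, 1)}
```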
question position
| position | count |
|---|---|
| opening question | 522 (11.3%) |
| follow-up question | 4,078 (88.7%) |
most questions are follow-ups mid-thread, not conversation starters. questions emerge from context rather than initiating it.
question complexity (word count proxy)
| complexity | count |
|---|---|
| 1-5 words | 468 |
| 6-15 words | 2,081 |
| 16-30 words | 1,210 |
| 30+ words | 841 |
medium complexity (6-15 words) is most common. very short questions (1-5 words) are the smallest bucket, about 10% of the total.
complexity vs resolution
simple questions (1-5 words)
| status | count |
|---|---|
| RESOLVED | 260 (70.7%) |
| COMMITTED | 40 |
| HANDOFF | 40 |
| UNKNOWN | 20 |
medium questions (6-15 words)
| status | count |
|---|---|
| RESOLVED | 813 (76.1%) |
| HANDOFF | 93 |
| COMMITTED | 85 |
complex questions (30+ words)
| status | count |
|---|---|
| RESOLVED | 899 (74.4%) |
| HANDOFF | 87 |
| EXPLORATORY | 86 |
| COMMITTED | 63 |
finding: resolution rate is CONSISTENT (~70-76%) across complexity levels. complex questions aren’t harder to resolve — they just take more turns.
response patterns
| pattern | count |
|---|---|
| answered immediately (by assistant) | 4,535 (98.6%) |
| user continued asking | 53 |
| thread ended without answer | 12 |
almost all questions get immediate assistant responses. only 12 questions (0.26%) were left dangling.
question density vs outcomes
| density | resolved | avg turns |
|---|---|---|
| high (>15%) | 101 | 12.3 |
| medium (5-15%) | 373 | 46.0 |
| low (<5%) | 836 | 105.6 |
| none | 760 | 44.0 |
counterintuitive: low-density threads have the most resolutions and the longest average length. dense questioning doesn't correlate with resolution; focused work with occasional clarifying questions appears to work better.
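the density buckets map directly onto a share-of-turns calculation. a minimal sketch, assuming density is questions divided by total turns in the thread (the report doesn't spell out the denominator, so this is an assumption):

```python
def question_density(n_questions: int, n_turns: int) -> str:
    """Bucket a thread by the share of its turns that are questions,
    matching the report's bins. Assumes density = questions / turns."""
    if n_questions == 0:
        return "none"
    share = n_questions / n_turns
    if share > 0.15:
        return "high (>15%)"
    if share >= 0.05:
        return "medium (5-15%)"
    return "low (<5%)"

print(question_density(9, 1623))  # low (<5%), like the top thread below
```

under this definition the longest threads in the next table all land in the low bucket, which matches the finding that execution-heavy threads carry few questions.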
top threads by turn count (with questions)
| thread | turns | questions | title |
|---|---|---|---|
| T-0ef9b016… | 1,623 | 9 | Minecraft resource pack CIT converter |
| T-048b5e03… | 988 | 9 | Debugging migration script |
| T-c66d846b… | 615 | 1 | S3 background ingest review |
| T-b428b715… | 594 | 4 | Implementation plan creation |
| T-c3eb8677… | 506 | 5 | Unify merge machinery |
longest threads have FEW questions — they’re execution-heavy, not interrogative.
user question patterns
| user | questions |
|---|---|
| concise_commander | 2,669 (58%) |
| steady_navigator | 1,207 (26%) |
| verbose_explorer | 538 (12%) |
concise_commander asks the most questions, consistent with a deep technical investigation style.
sample questions by type
HOW (procedural):
- “How do the best techniques from sneller and memchr combine here?”
- “How does that support transactions?”
- “How do we get the right image for the k8s job?”
WHY (causal):
- “Why is min and max always float?”
- “Why is this a url parameter?”
- “Why is filtering in line instead of the plan better?”
CAN (capability):
- “Can you make a pass at the functions and remove obvious perf issues?”
- “Can you use the real table we tested with?”
insights summary
- feasibility questions (CAN) create longer threads — exploration mode, not execution mode
- questions are mostly follow-ups — context-dependent, not conversation starters
- complexity doesn’t hurt resolution — just takes more turns
- low question density = higher resolution — suggests interrogative style isn’t optimal for getting things done
- 98.6% of questions answered — assistant engagement extremely high
- WHY questions are investigation triggers — correlate with debugging/understanding threads