## Active Experiments

### Issue Classification
Infrastructure vs prompts on SWE-bench Lite — does tooling beat prompt engineering at 300-task scale?
## Completed Experiments

### Code Coverage v2
Skills vs flat knowledge bases — 7 variants on Spring PetClinic. Skills+preanalysis cuts steps by 31% with no loss in quality.
### Code Coverage v1
Knowledge injection baseline — 9 variants that revealed two independent axes.
## Upcoming

| Experiment | Question | Status |
|---|---|---|
| Code Coverage v3 | Extended variant ladder with refined skills | Planned |
| SWE-bench Results | Cross-experiment comparison on standardized tasks | Planned |
## Experiment Design Principles

- One variable per experiment — Isolate the thing being tested
- Deterministic preprocessing — Parse inputs before the LLM sees them (zero LLM cost)
- Cascaded evaluation — T0→T1→T2→T3, cheap filters first
- Behavioral analysis — Markov chains on tool-call traces, not just pass/fail
- Reproducibility — All experiment repos are public with full trace data
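
The cascaded-evaluation principle above can be sketched as an ordered chain of gates where cheap checks run first and expensive ones only see survivors. The tier names and predicates below are hypothetical placeholders, not the project's actual T0–T3 gates:

```python
def cascade(candidate, tiers):
    """Run evaluation tiers in order; stop at the first failure.

    `tiers` is an ordered list of (name, predicate) pairs, cheapest first,
    so expensive checks only run on candidates that pass the cheap ones.
    Returns (passed, name_of_last_tier_run).
    """
    last = None
    for name, check in tiers:
        last = name
        if not check(candidate):
            return False, name
    return True, last


# Illustrative tiers only — real T0..T3 would be e.g. syntax check,
# unit tests, integration tests, human review.
tiers = [
    ("T0:balanced_parens", lambda p: p.count("(") == p.count(")")),
    ("T1:nonempty", lambda p: bool(p.strip())),
    ("T2:has_assert", lambda p: "assert" in p),
]
ok, stopped_at = cascade("assert f(1) == 2", tiers)
```

A failing candidate never reaches the later (costlier) tiers, which is the whole point of the cascade.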
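
The behavioral-analysis principle treats a tool-call trace as a sequence of states and estimates first-order Markov transition probabilities from bigram counts. A minimal sketch (the tool names in the example traces are made up for illustration):

```python
from collections import defaultdict


def transition_matrix(traces):
    """Estimate first-order Markov transition probabilities from traces.

    Each trace is a list of tool names in call order; probabilities are
    normalized bigram counts per source tool.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for src, dst in zip(trace, trace[1:]):
            counts[src][dst] += 1
    return {
        src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
        for src, dsts in counts.items()
    }


traces = [
    ["read_file", "edit", "run_tests"],
    ["read_file", "run_tests", "edit", "run_tests"],
]
probs = transition_matrix(traces)
# probs["read_file"] splits evenly between "edit" and "run_tests" here.
```

Comparing these matrices across variants surfaces behavioral shifts (e.g. fewer exploratory reads before editing) that aggregate pass/fail rates hide.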