05 — Extract Key Learnings
Purpose: Convert experiment evidence into clear, validated learnings.
Outcome
Learnings that are evidence-backed, confidence-tagged, and decision-relevant.
This is the stage where the team proves whether the experiment actually changed understanding or merely generated raw information.
Time to complete
30-90 minutes after experiment completion.
Inputs
Experiment outcomes from Step 04.
Success and invalidation criteria.
Steps in SwiftCNS
Review evidence against expected outcomes.
Document what was learned, not what was hoped.
Tag confidence and uncertainty.
Link each learning to the original hypothesis/assumption.
Mark learnings that are strong enough for insight synthesis.
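The steps above can be sketched as a simple record per learning. This is an illustrative sketch only: the field names and confidence levels are assumptions, not a schema defined by SwiftCNS.

```python
from dataclasses import dataclass

# Assumed confidence vocabulary; adjust to your team's own tags.
CONFIDENCE_LEVELS = ("low", "medium", "high")

@dataclass
class Learning:
    statement: str                  # what was learned, not what was hoped
    evidence: list[str]             # pointers to experiment outcomes (Step 04)
    hypothesis_id: str              # link back to the tested hypothesis/assumption
    confidence: str                 # tagged confidence level
    open_uncertainty: str           # what is still unknown after the experiment
    synthesis_ready: bool = False   # strong enough for insight synthesis?

    def __post_init__(self):
        # Keep confidence tags comparable across teams.
        if self.confidence not in CONFIDENCE_LEVELS:
            raise ValueError(f"confidence must be one of {CONFIDENCE_LEVELS}")
```

A record like this makes it hard to skip the confidence tag or the hypothesis link, which is where rigor usually slips.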
Why this stage matters
This is where many teams lose rigor. Once evidence exists, people naturally want to interpret it quickly and move on. But if learnings are weak, every later stage becomes harder.
The job here is not just to summarize results. It is to answer:
what changed in our understanding,
how strong that change is,
and what uncertainty still remains.
Role lenses
Startup: avoid optimistic interpretation; stay evidence-first.
Program manager: verify that learnings are comparable across teams.
Mentor: challenge unsupported claims and weak confidence ratings.
What strong output looks like
A strong learning:
is anchored in evidence,
says what changed in confidence,
avoids overstating what the data supports,
is useful input for synthesis and decision-making.
Weak vs strong pattern
Weak
learning reads like a preference or narrative,
no confidence or uncertainty is stated,
there is no clear link back to the tested hypothesis,
the team cannot tell whether the result matters for the bet.
Strong
learning is evidence-backed,
confidence is visible,
hypothesis traceability is clear,
the implication for the next stage is obvious.
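The weak-vs-strong pattern can be operationalized as a short checklist function. The criteria below are drawn directly from this section; the function name and signature are assumptions for illustration, not a formal SwiftCNS rubric.

```python
def learning_strength(evidence: list[str], confidence: str,
                      hypothesis_id: str, implication: str) -> str:
    """Classify a learning as 'weak' or 'strong'.

    Illustrative sketch: each check mirrors one line of the
    strong pattern in this section.
    """
    checks = [
        bool(evidence),       # learning is evidence-backed
        bool(confidence),     # confidence is visible
        bool(hypothesis_id),  # hypothesis traceability is clear
        bool(implication),    # implication for the next stage is stated
    ]
    return "strong" if all(checks) else "weak"
```

Anything failing a single check falls back to "weak", which matches the intent of the pattern: one missing element is enough to make the learning unusable downstream.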
Outputs
Validated learnings.
Updated confidence for tested assumptions.
Definition of done
Learnings are evidence-backed and decision-relevant.
Team can explain what changed and why.
Common failure mode
The most common issue here is optimism drift. Teams see some encouraging signal and translate it too quickly into a strong learning. The fix is simple: force the learning to stay close to the evidence and explicitly name what is still uncertain.
If blocked
Use the Learning Quality Rubric to improve evidence strength.
Next step
Continue to 06 — Synthesize Insights -> Decision.
Last updated