What is implementation evaluation?
The systematic assessment of how an intervention is delivered, used, and sustained.
Focuses on process and use; distinct from impact evaluation.
Why is implementation evaluation different from performance monitoring?
Because its purpose is learning, not accountability.
Explains variation and supports adaptation.
What is the primary purpose of implementation evaluation?
To inform course correction during implementation.
Real-time learning and design refinement.
What does formative evaluation mean?
Evaluation conducted to improve implementation while it is ongoing.
Early and iterative; prevents late surprises.
What does summative evaluation mean?
Evaluation conducted to judge overall success after implementation.
Retrospective and outcome-focused; too late to fix design.
Why is formative evaluation especially important in implementation?
Because systems adapt as you intervene.
Drift occurs and workarounds emerge.
What question should evaluation answer early on?
“Is this being used as intended?”
Focus on adoption, fidelity, and acceptability.
Why is “Did it work?” the wrong early evaluation question?
Because effects depend on implementation quality.
No use → no effect; poor use → misleading effect estimate.
What makes evaluation actionable?
Clear linkage between findings and decisions.
If no decision follows, the evaluation has failed.
Why do dashboards often fail implementation teams?
Because they show numbers without interpretation.
No causal story and no phase awareness.
What role does qualitative data play in evaluation?
It explains why quantitative patterns occur.
Reveals mechanisms and surfaces hidden costs.
Why is mixed-methods evaluation powerful?
Because numbers and narratives answer different questions.
Quant = what and how much; Qual = why and how.
What is a common evaluation mistake early in implementation?
Measuring distal outcomes too soon.
Lagging indicators and confounded results.
How should evaluation change across phases?
Focus shifts as implementation matures.
Early: adoption and feasibility; Mid: fidelity and acceptability; Late: sustainment.
Why is evaluation inseparable from Theory of Change?
Because evaluation tests causal assumptions.
Which links hold and which links break.
What does “learning orientation” mean in evaluation?
Treating data as guidance, not judgement.
Curiosity over blame; adaptation over defence.
Why do teams hide or downplay negative findings?
Because evaluation is tied to performance judgement.
Reputation risk and funding pressure.
How does evaluation support adaptation decisions?
By identifying where friction concentrates.
Drop-off points and drift indicators.
What is the danger of over-measurement?
Measurement burden that slows implementation.
Data collection fatigue and reduced goodwill.
Why is evaluation timing as important as what you measure?
Because systems change over time.
Early volatility and later stabilisation.
What is “evaluation drift”?
When measures lose relevance as implementation evolves.
Old questions and new realities.
How can evaluation unintentionally distort behaviour?
By incentivising metric optimisation over real improvement.
Gaming and surface compliance.
Why should evaluation findings be shared quickly?
Because delayed feedback reduces learning value.
Fast loops and timely adjustment.
What does “closing the loop” in evaluation mean?
Acting on findings and checking effects.
Decision → change → reassessment.