What does “AUGMENT” mean in the BIT–AI framework?
Using behavioural science to improve how AI systems reason, signal uncertainty, allocate effort, and support human decision-making.
What is “resource rationality”?
The idea that good reasoning balances accuracy against cognitive/computational cost, rather than maximising accuracy regardless of cost.
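A minimal sketch of the trade-off, assuming a made-up diminishing-returns accuracy curve and per-step cost (illustrative values, not from the source): pick the reasoning depth that maximises net utility, not the depth that maximises accuracy.

```python
# Resource rationality as a toy optimisation: choose the reasoning depth
# that maximises stakes-weighted accuracy minus computational cost.
# accuracy_at() and COST_PER_STEP are illustrative assumptions.

COST_PER_STEP = 0.05  # cost of one extra unit of reasoning depth

def accuracy_at(depth: int) -> float:
    """Hypothetical diminishing-returns accuracy curve."""
    return 1.0 - 0.5 * (0.6 ** depth)

def best_depth(stakes: float, max_depth: int = 10) -> int:
    """Pick the depth with the highest net utility: stakes * accuracy - cost."""
    return max(
        range(1, max_depth + 1),
        key=lambda d: stakes * accuracy_at(d) - COST_PER_STEP * d,
    )

print(best_depth(stakes=0.5))   # low stakes -> shallow reasoning (depth 2)
print(best_depth(stakes=10.0))  # high stakes -> deeper reasoning (depth 8)
```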
What is “metacognition” in AI systems?
The system’s ability to monitor its own confidence, uncertainty, and limits, and adapt its behaviour accordingly.
What is “epistemic uncertainty”?
Uncertainty about whether the system knows the correct answer (as opposed to aleatoric uncertainty: irreducible randomness in outcomes).
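One standard proxy, sketched below (not part of the source): epistemic uncertainty shows up as disagreement across an ensemble of models, whereas aleatoric uncertainty persists even when every member agrees.

```python
import statistics

# Epistemic uncertainty as ensemble disagreement: if independently trained
# models disagree, the system likely does not "know" the answer; if they
# agree on a middling probability, the remaining uncertainty is aleatoric.

def epistemic_spread(member_probs: list[float]) -> float:
    """Std. deviation across ensemble members' predicted probabilities."""
    return statistics.pstdev(member_probs)

print(epistemic_spread([0.9, 0.1, 0.8, 0.2]))  # high spread -> model doesn't know
print(epistemic_spread([0.5, 0.5, 0.5, 0.5]))  # no spread -> irreducible randomness
```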
What is “confidence calibration”?
Alignment between a system’s expressed confidence and its actual accuracy.
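Calibration can be measured. A minimal sketch of expected calibration error (ECE), a standard metric: bin predictions by expressed confidence, then compare each bin's average confidence with its actual accuracy.

```python
# Expected calibration error (ECE): bin predictions by confidence,
# then average |confidence - accuracy| weighted by bin size.
# A well-calibrated system is right ~80% of the time when it says "80%".

def ece(confidences: list[float], correct: list[bool], n_bins: int = 10) -> float:
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    error = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        error += (len(b) / total) * abs(avg_conf - accuracy)
    return error

# Overconfident system: says ~0.95 but is right only half the time.
print(ece([0.95, 0.95, 0.96, 0.94], [True, False, True, False]))  # -> 0.45
```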
Why is resource rationality behaviourally grounded?
Humans adapt effort based on stakes and uncertainty; AI should similarly modulate depth of reasoning rather than always “thinking hard.”
How do poorly calibrated AI systems create risk?
Overconfidence leads users to over-trust outputs, increasing automation bias and errors of omission.
Why should AI signal uncertainty explicitly?
Humans use confidence cues to decide when to rely, double-check, or escalate decisions.
How does metacognition improve human–AI teaming?
It allows AI to defer, slow down, or request human input when confidence is low or stakes are high.
Why does “thinking fast vs slow” matter for AI design?
Many tasks need fast heuristics; others require deliberate reasoning—mirroring dual-process trade-offs in humans.
Accuracy optimisation vs reliability optimisation
Accuracy alone ignores confidence signalling; reliability requires knowing when the system is likely wrong.
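A sketch of the contrast, with an illustrative confidence threshold: a reliability-oriented view lets the system abstain on low-confidence cases and measures accuracy only on the answers it commits to.

```python
# Accuracy vs reliability: overall accuracy treats every answer the same;
# selective prediction abstains below a confidence threshold and reports
# accuracy on the remaining (committed) answers.

def selective_accuracy(preds: list[tuple[float, bool]], threshold: float):
    """preds: list of (confidence, was_correct). Returns (coverage, accuracy)."""
    answered = [ok for conf, ok in preds if conf >= threshold]
    if not answered:
        return 0.0, None
    return len(answered) / len(preds), sum(answered) / len(answered)

preds = [(0.99, True), (0.95, True), (0.6, False), (0.55, False), (0.9, True)]
print(selective_accuracy(preds, threshold=0.0))  # answer everything: 60% accurate
print(selective_accuracy(preds, threshold=0.8))  # abstain when unsure: 100% accurate
```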
Black-box prediction vs metacognitive AI
Black-box predicts without self-awareness; metacognitive AI monitors and communicates its limits.
Uniform reasoning depth vs adaptive reasoning
Uniform depth wastes resources; adaptive depth allocates effort based on uncertainty and stakes.
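A minimal routing sketch; the tiers and the 0.3/0.7 thresholds are illustrative assumptions, not values from the framework.

```python
# Adaptive reasoning depth: allocate effort from a quick heuristic pass
# up to deep deliberation, driven jointly by uncertainty and stakes.

def choose_mode(uncertainty: float, stakes: float) -> str:
    priority = uncertainty * stakes  # both assumed to lie in [0, 1]
    if priority < 0.3:
        return "fast heuristic pass"
    if priority < 0.7:
        return "standard reasoning"
    return "deep deliberation + human review"

print(choose_mode(uncertainty=0.2, stakes=0.4))  # routine -> fast
print(choose_mode(uncertainty=0.9, stakes=0.9))  # risky edge case -> deep
```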
Human override as failure vs feature
Override is not error—it’s a designed safety valve preserving joint system resilience.
Why is confidence calibration critical in clinical AI?
Clinicians need to know when to trust vs challenge AI recommendations, especially in high-risk cases.
Example of poor calibration risk in healthcare
An AI triage tool flags “low risk” with high confidence → clinician fails to escalate → adverse event.
How could metacognitive AI support clinicians?
By flagging “low confidence” cases, slowing outputs, or prompting second opinions.
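A sketch of such a gate in a decision-support pipeline; all names and thresholds here are hypothetical, not from the source.

```python
from dataclasses import dataclass

# Metacognitive gating for clinical decision support: rather than always
# emitting a recommendation, route low-confidence or high-stakes cases
# to a human. Thresholds are illustrative.

@dataclass
class Output:
    recommendation: str
    confidence: float
    action: str  # "present", "flag_low_confidence", "escalate_to_clinician"

def metacognitive_gate(recommendation: str, confidence: float,
                       high_stakes: bool) -> Output:
    if confidence < 0.5 or (high_stakes and confidence < 0.8):
        return Output(recommendation, confidence, "escalate_to_clinician")
    if confidence < 0.8:
        return Output(recommendation, confidence, "flag_low_confidence")
    return Output(recommendation, confidence, "present")

print(metacognitive_gate("low risk", confidence=0.65, high_stakes=True).action)
# -> escalate_to_clinician: a low-confidence, high-stakes case goes to a human
```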
Organisational example of resource rationality
AI performs quick screening for routine cases but switches to deeper analysis for edge cases.
Why “always-on explainability” can backfire
Excess explanation increases cognitive load; explanations should be adaptive, not constant.
An AI system is accurate but always expresses high confidence. What’s the problem?
Users cannot distinguish routine vs risky outputs → increased automation bias.
An AI flags uncertainty and requests human review. What agency effect does this have?
Preserves human agency and responsibility; supports joint decision-making.
AI reasoning time increases as uncertainty rises. Which principle is this?
Resource rationality and metacognitive control.
Clinicians ignore AI uncertainty warnings over time. What failed?
Organisational norms and training—not just AI design—failed to reinforce appropriate use.
“Smarter AI” ≠ safer AI
Without uncertainty signalling, smarter models can amplify over-trust.