BIT-AI Augment Flashcards

(30 cards)

1
Q

What does “AUGMENT” mean in the BIT-AI framework?

A

Using behavioural science to improve how AI systems reason, signal uncertainty, allocate effort, and support human decision-making.

2
Q

What is “resource rationality”?

A

The idea that good reasoning balances accuracy with cognitive/computational cost, rather than maximising accuracy at all times.

3
Q

What is “metacognition” in AI systems?

A

The system’s ability to monitor its own confidence, uncertainty, and limits, and adapt its behaviour accordingly.

4
Q

What is “epistemic uncertainty”?

A

Uncertainty about whether the system knows the correct answer (as opposed to randomness in outcomes).

5
Q

What is “confidence calibration”?

A

Alignment between a system’s expressed confidence and its actual accuracy.
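
Calibration can be quantified; a common metric is expected calibration error (ECE). The sketch below is illustrative (not part of the deck): it bins predictions by stated confidence and compares each bin's average confidence to its empirical accuracy.

```python
# Illustrative sketch: expected calibration error (ECE).
# Bins predictions by stated confidence and compares each bin's average
# confidence to its empirical accuracy; 0.0 means perfectly calibrated.

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probabilities; correct: 1 if the prediction was right."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += len(idx) / n * abs(avg_conf - accuracy)
    return ece

# A system that says "0.9" but is right only half the time is poorly calibrated:
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0]))  # 0.4
```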

6
Q

Why is resource rationality behaviourally grounded?

A

Humans adapt effort based on stakes and uncertainty; AI should similarly modulate depth of reasoning rather than always “thinking hard.”

7
Q

How do poorly calibrated AI systems create risk?

A

Overconfidence leads users to over-trust outputs, increasing automation bias and errors of omission.

8
Q

Why should AI signal uncertainty explicitly?

A

Humans use confidence cues to decide when to rely, double-check, or escalate decisions.

9
Q

How does metacognition improve human–AI teaming?

A

It allows AI to defer, slow down, or request human input when confidence is low or stakes are high.
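
This deferral logic can be sketched as a simple routing rule. The function and threshold values below are hypothetical placeholders, not part of the framework:

```python
# Illustrative sketch of metacognitive control: act autonomously only when
# confidence is high AND stakes are low; otherwise slow down or escalate.
# Threshold values are arbitrary placeholders for the example.

def route_decision(confidence, stakes, conf_threshold=0.85, high_stakes=0.7):
    """Return the action a metacognitive system might take."""
    if confidence >= conf_threshold and stakes < high_stakes:
        return "act"                     # routine case: proceed autonomously
    if confidence >= conf_threshold:
        return "act_with_human_signoff"  # confident, but stakes are high
    return "defer_to_human"              # low confidence: request human input

print(route_decision(0.95, 0.2))  # act
print(route_decision(0.95, 0.9))  # act_with_human_signoff
print(route_decision(0.40, 0.2))  # defer_to_human
```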

10
Q

Why does “thinking fast vs slow” matter for AI design?

A

Many tasks need fast heuristics; others require deliberate reasoning—mirroring dual-process trade-offs in humans.

11
Q

Accuracy optimisation vs reliability optimisation

A

Accuracy alone ignores confidence signalling; reliability requires knowing when the system is likely wrong.

12
Q

Black-box prediction vs metacognitive AI

A

Black-box predicts without self-awareness; metacognitive AI monitors and communicates its limits.

13
Q

Uniform reasoning depth vs adaptive reasoning

A

Uniform depth wastes resources; adaptive depth allocates effort based on uncertainty and stakes.

14
Q

Human override as failure vs feature

A

Override is not error—it’s a designed safety valve preserving joint system resilience.

15
Q

Why is confidence calibration critical in clinical AI?

A

Clinicians need to know when to trust vs challenge AI recommendations, especially in high-risk cases.

16
Q

Example of poor calibration risk in healthcare

A

An AI triage tool flags “low risk” with high confidence → clinician fails to escalate → adverse event.

17
Q

How could metacognitive AI support clinicians?

A

By flagging “low confidence” cases, slowing outputs, or prompting second opinions.

18
Q

Organisational example of resource rationality

A

AI performs quick screening for routine cases but switches to deeper analysis for edge cases.
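
A minimal sketch of this two-tier allocation, assuming hypothetical names and thresholds: a cheap fast screen handles clear-cut cases, and only scores near the decision boundary trigger the expensive path.

```python
# Illustrative sketch of resource-rational effort allocation: a cheap fast
# screen handles routine cases; ambiguous scores near the decision boundary
# trigger a slower, deeper analysis. Names and thresholds are hypothetical.

def fast_screen(case):
    return case["risk_score"]            # cheap heuristic estimate

def deep_analysis(case):
    return case["risk_score"]            # stand-in for an expensive model

def triage(case, low=0.2, high=0.8):
    score = fast_screen(case)
    if score < low or score > high:      # clear-cut: fast path is enough
        return ("fast", score)
    return ("deep", deep_analysis(case)) # edge case: spend more compute

print(triage({"risk_score": 0.05}))  # ('fast', 0.05)
print(triage({"risk_score": 0.50}))  # ('deep', 0.5)
```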

19
Q

Why “always-on explainability” can backfire

A

Excess explanation increases cognitive load; explanations should be adaptive, not constant.

20
Q

An AI system is accurate but always expresses high confidence. What’s the problem?

A

Users cannot distinguish routine vs risky outputs → increased automation bias.

21
Q

An AI flags uncertainty and requests human review. What agency effect does this have?

A

Preserves human agency and responsibility; supports joint decision-making.

22
Q

AI reasoning time increases as uncertainty rises. Which principle is this?

A

Resource rationality and metacognitive control.

23
Q

Clinicians ignore AI uncertainty warnings over time. What failed?

A

Organisational norms and training—not just AI design—failed to reinforce appropriate use.

24
Q

“Smarter AI” ≠ safer AI

A

Without uncertainty signalling, smarter models can amplify over-trust.

25
Q

Confidence inflation for user reassurance

A

Overconfident outputs may improve short-term trust but increase long-term risk.

26
Q

Treating human override as inefficiency

A

Suppressing override reduces resilience and escalation capacity.

27
Q

Ignoring context in AI reasoning depth

A

Same model behaviour across low- and high-stakes settings is dangerous.

28
Q

How does AUGMENT connect to behavioural science foundations?

A

Mirrors dual-process theory, bounded rationality, and confidence heuristics in human cognition.

29
Q

How does AUGMENT connect to agency (Module 13)?

A

Metacognitive AI preserves human agency by knowing when not to decide.

30
Q

Key design principle for AUGMENT

A

Build AI systems that know their limits and make those limits visible to humans.