BIT-AI ALIGN Flashcards

(33 cards)

1
Q

What does “ALIGN” mean in the BIT-AI framework?

A

Designing AI behaviour to align with human cognitive, emotional, and social processes, avoiding manipulation and distortion.

2
Q

What is “confidence transfer” from AI to humans?

A

Humans adopt AI’s expressed confidence level, increasing or decreasing their own certainty accordingly.

3
Q

What is “AI as a social actor”?

A

People implicitly treat AI as intentional, authoritative, and socially meaningful, not just as a tool.

4
Q

What is “algorithmic persuasion”?

A

AI influencing beliefs or actions through framing, confidence, repetition, or authority cues.

5
Q

What are “AI-induced false memories”?

A

Users misremember AI-generated suggestions as their own prior beliefs or experiences.

6
Q

Why does AI confidence influence human judgement?

A

Humans use confidence as a heuristic for accuracy, especially under uncertainty.

7
Q

How does repetition from AI affect belief?

A

Repeated outputs increase perceived truth (illusory truth effect).

8
Q

Why does AI authority amplify persuasion?

A

Perceived objectivity, scale, and technical legitimacy increase credibility.

9
Q

How does emotional tone in AI outputs matter?

A

Affect shapes trust, recall, and compliance independently of factual accuracy.

10
Q

Why does alignment failure risk agency erosion?

A

Humans defer judgement when AI appears confident and authoritative.

11
Q

Informational support vs persuasive influence

A

Support aids reasoning; persuasion nudges belief or action directionally.

12
Q

Calibration vs manipulation

A

Calibration reflects uncertainty honestly; manipulation hides or distorts uncertainty.

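A worked sketch can make the distinction concrete. The function below computes expected calibration error, a standard measure of the gap between stated confidence and observed accuracy; the Python code and the example numbers are illustrative, not from the module.

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        """Average |stated confidence - observed accuracy|, weighted by bin size."""
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
                ece += in_bin.mean() * gap
        return ece

    # An AI that always claims 99% certainty but is right 7 times in 10
    # is miscalibrated by roughly 0.29:
    print(expected_calibration_error([0.99] * 10, [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]))

A calibrated system would report roughly 70% confidence on those items; a manipulative one keeps the 99% framing because it reads as authority.
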
13
Q

Human confidence vs AI confidence

A

AI confidence is often interpreted as epistemic authority, even when unjustified.

14
Q

Assistance vs substitution

A

Assistance preserves agency; substitution displaces it.

15
Q

Example of confidence transfer in healthcare

A

A clinician becomes overconfident after a high-confidence AI diagnostic suggestion.

16
Q

Risk of false reassurance from AI

A

A “low risk” AI output reduces vigilance and delays escalation.

17
Q

How AI can distort shared decision-making

A

Patients may treat AI recommendations as definitive rather than advisory.

18
Q

Organisational example of AI persuasion

A

Managers defer to algorithmic rankings over contextual knowledge.

19
Q

How to align AI outputs ethically in healthcare

A

Use calibrated confidence, explicit uncertainty, and supportive framing.

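As a hypothetical sketch of what that looks like in practice (the function name, wording, and figures are my own assumptions, not a clinical standard): the same model score can be surfaced as a verdict or as calibrated, advisory support.

    def advisory_message(condition: str, probability: float) -> str:
        """Render a model score with calibrated confidence, explicit
        uncertainty, and supportive (advisory, not directive) framing."""
        return (
            f"Model estimate: {probability:.0%} likelihood of {condition}. "
            "This estimate carries the model's validated error rate and is "
            "decision support, not a diagnosis; please weigh it against "
            "your clinical findings."
        )

    # Contrast with an absolute framing such as "Patient has sepsis."
    print(advisory_message("sepsis", 0.72))
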
20
Q

An AI presents recommendations with absolute certainty. Risk?

A

Over-trust, reduced critical thinking, agency erosion.

21
Q

AI uses empathetic language to encourage adherence. Ethical issue?

A

Potential emotional manipulation if intent or limits are unclear.

22
Q

Multiple AI agents give the same answer. Effect?

A

Artificial social proof → increased belief confidence.

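A toy Bayesian sketch (my own illustration; the 80%-reliability figure is assumed) shows why the proof is artificial: agreement is strong evidence only if the agents' errors are independent, and near-duplicate models trained on shared data are closer to one witness than three.

    def posterior(prior_odds: float, likelihood_ratio: float, n_reports: int) -> float:
        """Posterior probability after n independent agreeing reports."""
        odds = prior_odds * likelihood_ratio ** n_reports
        return odds / (1 + odds)

    LR = 0.8 / 0.2  # each agent assumed right 80% of the time

    print(posterior(1.0, LR, 3))  # treated as independent: ~0.985
    print(posterior(1.0, LR, 1))  # near-duplicates, one effective report: ~0.800
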
23
Q

AI explains reasoning but omits uncertainty. Problem?

A

Illusion of understanding; misplaced trust.

24
Q

Clinician disagrees with AI but feels hesitant to override. What failed?

A

Psychological alignment and authority calibration.

25
Q

Treating AI as neutral

A

AI outputs carry framing, tone, and implicit values.

26
Q

Overconfidence improves user satisfaction

A

Short-term satisfaction may mask long-term safety risks.

27
Q

Explainability guarantees alignment

A

Explanation without uncertainty can still mislead.

28
Q

Ignoring cumulative influence

A

Small nudges repeated by AI can compound belief distortion.

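A toy model (my own assumption, not from the module) illustrates the compounding: suppose each interaction pulls a belief a small fraction of the way toward the AI's framing. No single nudge is noticeable, yet repeated exposure moves the belief most of the way.

    def drift(belief: float, ai_position: float, rate: float, interactions: int) -> float:
        """Belief after repeated small nudges toward the AI's position."""
        for _ in range(interactions):
            belief += rate * (ai_position - belief)
        return belief

    print(drift(belief=0.2, ai_position=0.9, rate=0.02, interactions=1))    # ~0.21
    print(drift(belief=0.2, ai_position=0.9, rate=0.02, interactions=100))  # ~0.81
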
29
Q

Underestimating memory distortion

A

Users may later misattribute AI suggestions as their own ideas.

30
Q

How does ALIGN connect to social influence (Module 3)?

A

AI acts as a powerful messenger with authority and social proof effects.

31
Q

How does ALIGN connect to risk perception (Module 4)?

A

AI framing and confidence distort perceived probabilities.

32
Q

How does ALIGN connect to agency (Module 13)?

A

Poor alignment reduces human authorship and willingness to intervene.

33
Q

Core ALIGN design principle

A

AI should support judgement, not replace it.