What does “ALIGN” mean in the BIT–AI framework?
Designing AI behaviour to align with human cognitive, emotional, and social processes while avoiding manipulation and distortion.
What is “confidence transfer” from AI to humans?
Humans adopt AI’s expressed confidence level, increasing or decreasing their own certainty accordingly.
What is “AI as a social actor”?
People implicitly treat AI as intentional, authoritative, and socially meaningful, not just as a tool.
What is “algorithmic persuasion”?
AI influencing beliefs or actions through framing, confidence, repetition, or authority cues.
What are “AI-induced false memories”?
Users misremember AI-generated suggestions as their own prior beliefs or experiences.
Why does AI confidence influence human judgement?
Humans use confidence as a heuristic for accuracy, especially under uncertainty.
How does repetition from AI affect belief?
Repeated outputs increase perceived truth (illusory truth effect).
Why does AI authority amplify persuasion?
Perceived objectivity, scale, and technical legitimacy increase credibility.
How does emotional tone in AI outputs matter?
Affect shapes trust, recall, and compliance independently of factual accuracy.
Why does alignment failure risk agency erosion?
Humans defer judgement when AI appears confident and authoritative.
Informational support vs persuasive influence
Support aids reasoning; persuasion nudges belief or action in a particular direction.
Calibration vs manipulation
Calibration reflects uncertainty honestly; manipulation hides or distorts uncertainty.
Human confidence vs AI confidence
AI confidence is often interpreted as epistemic authority, even when unjustified.
Assistance vs substitution
Assistance preserves agency; substitution displaces it.
Example of confidence transfer in healthcare
Clinician becomes overconfident after a high-confidence AI diagnostic suggestion.
Risk of false reassurance from AI
“Low risk” AI output reduces vigilance, delaying escalation.
How AI can distort shared decision-making
Patients may treat AI recommendations as definitive rather than advisory.
Organisational example of AI persuasion
Managers defer to algorithmic rankings over contextual knowledge.
How to align AI outputs ethically in healthcare
Use calibrated confidence, explicit uncertainty, and supportive framing.
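The principle above can be sketched as a small output wrapper that always pairs a suggestion with explicit uncertainty and advisory framing. All names and thresholds here (`Recommendation`, `present_recommendation`, the confidence bands) are hypothetical illustrations, not part of any real clinical system:

```python
# A minimal sketch of "calibrated confidence, explicit uncertainty" in an
# AI-facing output layer. Names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    finding: str        # the model's suggestion
    probability: float  # model-estimated probability, in [0, 1]

def present_recommendation(rec: Recommendation) -> str:
    """Render a suggestion with hedged language and advisory framing,
    never as a definitive answer."""
    if not 0.0 <= rec.probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    # Map the raw probability to a calibrated confidence band rather than
    # presenting it with absolute certainty.
    if rec.probability >= 0.9:
        band = "high confidence"
    elif rec.probability >= 0.6:
        band = "moderate confidence"
    else:
        band = "low confidence"
    return (
        f"Suggestion ({band}, est. {rec.probability:.0%}): {rec.finding}. "
        "This is advisory only; clinical judgement takes precedence."
    )

print(present_recommendation(Recommendation("possible pneumonia", 0.72)))
```

The supportive framing ("advisory only") keeps the output in the assistance role rather than the substitution role distinguished earlier in the deck.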
An AI presents recommendations with absolute certainty. Risk?
Over-trust, reduced critical thinking, agency erosion.
AI uses empathetic language to encourage adherence. Ethical issue?
Potential emotional manipulation if intent or limits are unclear.
Multiple AI agents give the same answer. Effect?
Artificial social proof → increased belief confidence.
AI explains reasoning but omits uncertainty. Problem?
Illusion of understanding; misplaced trust.
Clinician disagrees with AI but feels hesitant to override. What failed?
Psychological alignment and authority calibration.