Five Principles for Ethical AI
Beneficence
Do Good
AI should promote well-being, preserve dignity, and support environmental sustainability.
AI that helps doctors detect cancer earlier = beneficence.
Non-maleficence
Do no harm
Protect privacy, security, and safety
Be cautious with powerful or risky AI capabilities
NYT, Disney, and artists suing OpenAI/Midjourney for unauthorized training on copyrighted work
Autonomy
Human Control
Respect people’s freedom to make their own decisions
Avoid manipulation or coercion
AI should support choice, not override it
AI should assist your choices, not secretly steer you toward something.
Justice
Fairness
AI must avoid discrimination and reinforce fairness.
Prevent bias, inequality, and exclusion.
Explicability
Understandable and accountable
Intelligibility → Can we understand it?
Accountability → Who is responsible for it?
Why do we need clear, unified AI ethics principles?
Too many different guidelines cause confusion.
→ If they overlap = redundancy
→ If they conflict = people choose what benefits them (“market for principles”)
This slows regulation and ethical consistency.
Why is Explicability crucial?
It enables all the other principles.
You can’t check fairness, prevent harm, or assign responsibility if the system can’t be explained.
How do early AI researchers define the “AI problem”?
Making a machine behave in ways that would be considered intelligent if a human did them.
What is COMPAS?
AI used to predict whether offenders would reoffend.
Labeled a man as "high risk"
Black box → no one knew where this reasoning came from
Why are AI defamation cases important?
They raise the legal question of who is responsible for false, harmful AI-generated content.
Why do experts call to halt superintelligence development?
It poses major risks and must be proven safe and publicly supported before moving forward.
Why are AI-generated videos a societal risk?
They make disinformation more realistic and harder to detect, harming trust and safety.
What is an emerging AI threat?
Hyper-realistic AI video (deepfakes) enabling mass disinformation.
OpenAI’s Sora generating fake home invasions or bombings; NYT quiz showing how impossible it is to tell fake from real.