Ethics Flashcards

(113 cards)

1
Q

What is the primary learning objective of the Data Ethics course?

A

To understand the legal and ethical issues in developing and using artificial intelligence

The course approaches this in two stages: first the legal, social, and moral issues, then the rules for using robots.

2
Q

Define ethics in the context of this course.

A

The branch of philosophy concerned with recommending a system of right and wrong behaviour

Standards of behaviour may come from personal views, societal norms, or written laws.

3
Q

True or false: Robots must work within the same ethical framework as humans.

A

TRUE

Robots must follow the same external rules for ethics, including laws and cultural norms.

4
Q

What are the three laws proposed by Isaac Asimov for robots?

A
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm
  • A robot must obey orders given by human beings, except where such orders would conflict with the First Law
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

These laws became influential in guiding robot ethics development.

5
Q

What is the zeroth law added by Asimov?

A

A robot may not harm humanity, or, by inaction, allow humanity to come to harm

This law is considered more important than the original three laws.

6
Q

What is a concern regarding the vagueness of harm in Asimov’s laws?

A

Whether a robot should refuse a third slice of cake to prevent increased blood sugar

This highlights the complexity of defining harm in robot ethics.

7
Q

What is roboethics?

A

The general ethical framework governing robot design

It encompasses the legal, social, and moral issues related to robots.

8
Q

Which cultures were early to create legislation for robots?

A

Far Eastern cultures

They tend to view robots more positively than Western cultures do.

9
Q

What significant legislation did the European Union (EU) set out in 2017?

A

Civil law rules for robotics

This led to the EU AI Act of 2023.

10
Q

Who is legally liable for a robot’s actions?

A
  • The initial designer
  • The programmer
  • The manufacturer
  • The user

Liability does not rest with the robot itself, as it is not conscious.

11
Q

What is one proposed solution for liability regarding robot actions?

A

An insurance scheme paid for by robot manufacturers and owners

This would compensate for damage caused by robots.

12
Q

Under existing health and safety laws, who is responsible for a surgical robot?

A

The operator

This is similar to how responsibility is assigned for drones.

13
Q

What is the current status of the UK government’s legislation regarding robots as of 2024?

A

Still in the consultation phase

The EU AI Act is already in place, but the UK is still developing its approach.

14
Q

What is the responsibility of manufacturers like Mercedes-Benz and Volvo regarding self-driving technology?

A

They have volunteered to take full responsibility for accidents

However, ultimate decisions will be made by governments and courts.

15
Q

What will the next video cover?

A

Rules and principles for using robots in our homes and workplaces, including safety, privacy, and responsibility
16
Q

What is the general ethical framework governing robot design and usage called?

A

roboethics

Roboethics complements existing human rights legislation and balances the advantages and risks of robotics to humanity.

17
Q

Who is responsible for minimizing the risks posed by robots?

A
  • Designers
  • Producers
  • Users

Even autonomous robots are unable to make moral decisions themselves.

18
Q

What is the first principle of roboethics?

A

Respect for fundamental human rights

A hospital service robot must not injure a patient due to poor maintenance or incorrect settings.

19
Q

What does the second principle of roboethics emphasize?

A

Precaution

Robots must be safe for society and the environment, and humans must be aware they are interacting with a robot.

20
Q

What is the third principle of roboethics?

A

Inclusiveness

Information about a robot’s decision-making process must be accessible to prevent bias.

21
Q

What does the fourth principle of roboethics state?

A

Accountability

The robot developer and manufacturer are accountable for the social, environmental, and human health impact of robotics.

22
Q

What is the fifth principle of roboethics?

A

Safety

Human safety, health, physical well-being, and rights must be considered, and risks must be disclosed by manufacturers.

23
Q

What does the sixth principle of roboethics entail?

A

Reversibility

A robot must be able to undo its actions to return to a safe position if it finds itself in an unsafe situation.

24
Q

What is the seventh principle of roboethics?

A

Privacy

Robots must keep personal information secure and obtain consent for private data use.

25
Q

What is the final principle of roboethics?

A

Maximise benefit and minimise harm

The risk of harm while interacting with a robot should be no greater than in life before the introduction of robots.

26
Q

True or false: Roboethics is intended as a guiding ethical framework for the design and use of robots.

A

TRUE

It serves as a code of conduct for anyone working with robots.

27
Q

What are the ethical implications of AI deployment?

A
  • Bias
  • Discrimination
  • Privacy violations

These implications necessitate robust AI ethics governance frameworks.

28
Q

True or false: AI ethics governance is only a theoretical concern.

A

FALSE

It is a practical necessity due to increasing scrutiny from regulatory bodies.

29
Q

What do companies need to demonstrate regarding their AI systems?

A
  • Designed ethically
  • Developed ethically
  • Deployed ethically

This is crucial for maintaining public trust and compliance with regulations.

30
Q

Name the principles emphasised by UNESCO's Recommendations on the Ethics of Artificial Intelligence.

A
  • Respect for human rights
  • Fairness
  • Transparency
  • Accountability
  • Sustainability

These principles serve as a universal framework for ethical AI practices.

31
Q

What are the UK Government's FAST Track Principles for AI systems?

A
  • Fairness
  • Accountability
  • Sustainability
  • Transparency

These principles provide actionable guidance for ethical AI design and implementation.

32
Q

What categories does the EU AI Act place AI systems into?

A
  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal risk

This categorisation helps in regulating AI development and deployment.
33
Q

What must high-risk AI systems incorporate according to the EU AI Act?

A
  • Transparency
  • Human oversight

Users should be informed when interacting with AI, and clear instructions must be provided.

34
Q

What accountability measures does the EU AI Act establish?

A
  • Documentation
  • Logging

These measures are necessary to demonstrate compliance with regulations.

35
Q

What practices does the EU AI Act prohibit?

A
  • Manipulating human behaviour to cause harm
  • Exploiting vulnerabilities of specific groups
  • Enabling social scoring by governments

These practices are deemed to pose unacceptable risks.

36
Q

What is the purpose of regulatory sandboxes in AI development?

A

To test and develop AI systems under regulatory supervision

This allows for flexibility and innovation before full-scale deployment.

37
Q

What is one effective way to govern AI ethics within a company?

A

Establish an AI Ethics Committee

This committee should include diverse stakeholders to oversee the ethical implications of AI projects.

38
Q

What are the primary roles of the AI Ethics Committee?

A
  • Develop ethical guidelines
  • Conduct ethical risk assessments
  • Ensure alignment with company values and legal requirements

The committee also discusses and resolves ethical dilemmas in AI projects.

39
Q

What power should the AI Ethics Committee have regarding AI projects?

A

To halt or modify projects posing significant ethical risks

This ensures ethical considerations are integrated into AI development.

40
Q

In summary, AI Ethics Governance is about creating a culture of _______ within the organisation.

A

ethical awareness and responsibility

This involves aligning with standards and establishing an AI Ethics Committee.
41
Q

What are ethical risk assessments in AI ethics governance?

A

Identifying potential ethical risks and evaluating their likelihood and impact

The goal is to address ethical concerns before they manifest in real-world scenarios.

42
Q

Name some factors to consider in an ethical risk assessment.

A
  • Bias in data and algorithms
  • Potential for discrimination
  • Privacy violations
  • Unintended consequences of AI decisions

For example, assessing biases in training data for hiring AI systems.

43
Q

True or false: A diverse team of stakeholders is essential for conducting a thorough ethical risk assessment.

A

TRUE

Involving ethicists, data scientists, legal experts, and affected community representatives ensures a wide range of perspectives.

44
Q

What is ethical auditing?

A

Systematically reviewing AI projects for compliance with ethical guidelines and standards

This includes evaluating technical aspects and broader ethical implications.

45
Q

When should ethical audits be conducted?

A
  • During design
  • During development
  • During deployment
  • Post-deployment

This ensures ethical considerations are integrated throughout the AI lifecycle.

46
Q

What should be done with the results of ethical audits?

A

Document them and share them with relevant stakeholders

This promotes transparency and builds trust within the organisation and with external partners.

47
Q

Why is ethics training important in AI development?

A

To equip the workforce to navigate ethical challenges

Training should include practical applications and ethical decision-making frameworks.

48
Q

How should ethics training be structured?

A
  • As an ongoing process
  • With regular updates
  • With interactive elements such as workshops

It should be tailored to different roles within the organisation for effectiveness.
49
Q

What specific training might developers need in ethics?

A
  • Ethical coding practices
  • Bias mitigation

This helps developers address ethical dilemmas in their work.

50
Q

What is the overall goal of ethical risk management in AI?

A

Proactively identifying and addressing ethical risks throughout the AI lifecycle

This includes conducting assessments, providing training, and implementing audits.

51
Q

Why is it essential to embed ethics into the design, development, and deployment of AI systems?

A

To create technologies that are fair, transparent, and accountable

This involves integrating ethical considerations into every stage of the AI development process.

52
Q

What is one approach to embedding ethics in AI development?

A

Using design thinking tools that prioritise user needs and ethical considerations

Scenario analysis can explore potential ethical dilemmas and design solutions that minimise harm.

53
Q

What role does user research play in embedding ethics into AI systems?

A

It helps developers understand the diverse needs and concerns of affected people

This is critical for creating ethical AI solutions.

54
Q

What is meant by algorithmic fairness in AI development?

A

Ensuring AI systems do not perpetuate or escalate existing biases

This is achieved through careful data selection, bias mitigation techniques, and ongoing testing.

55
Q

True or false: Open communication is a key element of ethical AI development.

A

TRUE

It ensures all stakeholders are informed about ethical implications and can voice concerns.

56
Q

How can companies promote open communication regarding ethical AI issues?

A

Through regular ethics reviews and meetings

These discussions should be inclusive and encourage collaboration across departments.
57
Q

What should companies establish to report ethical concerns?

A

Clear channels, such as an anonymous hotline or a dedicated ethics officer

This ensures ethical issues can be raised and addressed promptly.

58
Q

What is necessary for building an ethical AI culture?

A

Fostering a mindset of ethical responsibility throughout the organisation

This involves valuing ethical considerations in every AI development decision.

59
Q

What role does leadership play in building an ethical AI culture?

A

Modelling ethical behaviour and demonstrating commitment to ethical principles

This sets the tone for the organisation and encourages employees to follow suit.

60
Q

What are some components of building an ethical AI culture?

A
  • Leadership
  • Continuous education
  • Awareness-raising efforts
  • Ethics training
  • Recognising and rewarding ethical behaviour

These practices help reinforce ethical principles within the organisation.

61
Q

In summary, ethical risk management practices involve embedding ethics into the design, development, and deployment of AI systems, promoting open communication about ethical concerns, and building a(n) _______.

A

ethical AI culture

By adopting these practices, companies can create AI systems aligned with their ethical values.

62
Q

Why is considering ethics essential when using AI tools?

A

To ensure fairness, prevent biases, and promote transparency and accountability

Ethical use of AI helps guide the creation of systems that respect human rights and dignity.

63
Q

What are the three key stages in the lifecycle of AI systems?

A
  • Input Stage
  • Model Stage
  • Output Stage

Understanding these stages helps identify how biases may enter the system.

64
Q

In the Input Stage, how can biases be introduced?

A

If the data does not represent enough diversity or is skewed due to past inequalities

A dataset that mostly includes one demographic group can lead to unfair AI decisions.
65
Q

What happens during the Model Stage of AI?

A

The AI system is developed and trained on prepared data

Bias can be exaggerated if algorithms reinforce initial biases present in the input data.

66
Q

What is the focus of this course regarding AI?

A

The Output Stage

This stage analyses the results given by AI to determine biases or ethical issues.

67
Q

List the four ethical principles as applied to AI.

A
  • Transparency
  • Fairness
  • Accountability
  • Privacy

These principles guide the ethical use of AI systems.

68
Q

What does the principle of Transparency in AI entail?

A

Letting people know how a specific AI system works

This builds trust and enables informed decisions about using AI.

69
Q

What is meant by Fairness in the context of AI?

A

Ensuring AI systems don't discriminate against any individual or group

It involves designing systems to identify and eliminate biases.

70
Q

Define Accountability in AI development.

A

Developers must explain and justify the system's decisions

Accountability ensures actions taken by AI can be traced.

71
Q

Why is Privacy important in AI development?

A

To protect data throughout AI development and ensure compliance with laws

Strong data protection must be enforced from collection to output.

72
Q

True or false: Ethical AI practices encourage transparency and accountability.

A

TRUE

These practices help build user trust in AI technologies.

73
Q

What is a real-life challenge related to AI that will be examined in this course?

A

The moral decision-making involved in self-driving cars

This example highlights the complexity of ethical decision-making in AI.
74
Q

What are the tasks for which AI was used, according to the declaration?

A
  • Content generation
  • Code optimisation
  • Image production

Listing all tasks ensures full transparency.

75
Q

What ethical measures were taken to promote inclusivity?

A
  • Mitigating biases
  • Enhancing accessibility features

These measures address potential biases and promote inclusivity.

76
Q

Why is it important to disclose AI use?

A
  • To protect viewers from fake content
  • To maintain trust
  • To set a high standard for transparency

Disclosing AI use helps prevent misinformation and values human creativity.

77
Q

True or false: A lack of awareness about AI involvement can lead to misinformation.

A

TRUE

It makes it difficult to distinguish real from synthetic content.

78
Q

What challenges are posed by video and audio deepfakes?

A
  • New challenges for user protection
  • The need for clear guidelines

Declaring AI use helps users make informed choices and ensures integrity.

79
Q

What advantages does transparency offer when developing projects using AI tools?

A
  • Increased trust and credibility
  • Upholding ethical standards

Users are more likely to trust your work when they understand how AI is involved.

80
Q

When disclosing AI use, what should you focus on regarding how AI is used?

A

Clearly explain the specific tasks AI performs

For example: 'We use AI to generate images on this website.'

81
Q

What should you highlight in your ethical commitments when disclosing AI use?

A

Your commitment to transparency and responsibility

For example: 'We are committed to monitoring and improving our AI systems to ensure ethical and responsible use.'

82
Q

What should you explain regarding the ethical measures used in AI projects?

A

What measures you used and why you have taken them

For example, addressing biases related to gender, sexual identity, race, culture, and age.
83
Q

What is the role of diverse viewpoints within an ethics committee?

A

To ensure fairness and address different perspectives

This helps spot biases that a particular group might overlook and fosters broader acceptance of decisions.

84
Q

True or false: A diverse ethics committee is seen as more trustworthy by the public.

A

TRUE

This perception fosters broader acceptance of the decisions made by the committee.

85
Q

What are the challenges of having many different opinions about AI ethics?

A
  • Managing disagreements
  • Ensuring everyone is heard
  • Finding solutions that fit various cultures and laws

These challenges require careful navigation to maintain a productive discussion.

86
Q

Why is it important to have a diverse group when considering the ethical implications of AI tools?

A

It helps make better ethical choices

Diversity in perspectives can lead to more comprehensive ethical assessments.

87
Q

What is the first step in forming an AI Ethics Committee?

A

Establish a diverse team

This sets the foundation for a variety of perspectives in ethical discussions.

88
Q

What platform is suggested for communication and collaboration within the ethics committee?

A

An Ethics Committee Slack group

This tool facilitates ongoing discussions and coordination among team members.

89
Q

What should the team discuss regarding AI tools in their project?

A

Potential ethical issues

This discussion is crucial for identifying and addressing ethical concerns early on.

90
Q

What is the purpose of assigning roles with different perspectives in the ethics committee?

A

To simulate real-life diversity

This helps to explore various biases and viewpoints in ethical decision-making.
91
Q

What should the committee do to ensure adherence to ethical standards during the project?

A

Set up regular meetings

Regular meetings help review ethical guidelines and address any concerns.

92
Q

What is important to do with projects to ensure they meet ethical standards?

A

Check projects against ethical standards

Regular reviews help maintain compliance with established ethical guidelines.

93
Q

What should the committee do to stay informed about new ethical issues?

A

Keep up to date with emerging ethical issues

Incorporating new issues into discussions is vital for ongoing ethical awareness.

94
Q

What is the final step in reviewing a project to ensure ethical standards?

A

The final review

This step ensures that all projects comply with the established ethical standards.

95
Q

What should be recorded after the final review of a project?

A

Insights

Noting key takeaways and areas for ethical improvement is essential for future reference.

96
Q

What should be prepared after reviewing the project to summarise findings?

A

A summary

This summary should include findings and recommendations regarding ethical standards.
97
Q

What are pre-processing techniques used for in training data?

A

Modifying the training data to remove or reduce bias

The goal is to ensure that the resulting embeddings are less likely to reflect biases.

98
Q

Name one approach to balancing the training data.

A
  • Ensuring equal representation of different groups
  • Adding additional data to balance representations

For example, addressing gender imbalances in training data.

99
Q

What does text normalisation involve?

A

Standardising text data to remove bias-inducing elements

For example, replacing gender-specific titles with gender-neutral terms.
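The text normalisation described in the card above can be sketched in a few lines of Python. The specific title substitutions below are illustrative assumptions, not from the course.

```python
import re

# Hypothetical mapping from gender-specific titles to neutral terms.
# The entries are illustrative assumptions, not a course-supplied list.
NEUTRAL_TERMS = {
    r"\bchairman\b|\bchairwoman\b": "chairperson",
    r"\bpoliceman\b|\bpolicewoman\b": "police officer",
    r"\bstewardess\b|\bsteward\b": "flight attendant",
}

def normalise(text: str) -> str:
    """Replace gender-specific titles with gender-neutral equivalents."""
    for pattern, replacement in NEUTRAL_TERMS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(normalise("The chairman thanked the policewoman."))
# → "The chairperson thanked the police officer."
```

In practice a normalisation pass like this would run over the whole corpus before embeddings are trained, so the model never sees the gendered variants.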
100
Q

What is bias-aware sampling?

A

Sampling training data to reduce reinforcement of existing biases

This can involve oversampling underrepresented groups or undersampling overrepresented ones.
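The oversampling variant of bias-aware sampling can be sketched as follows. This is a minimal illustration assuming records are dicts with a group field; the field name `gender` and the toy data are invented for the example.

```python
import random

def oversample(records, group_key):
    """Resample minority groups with replacement until every group
    appears as often as the largest one."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Top up smaller groups by sampling with replacement.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Toy, deliberately imbalanced data: 2 "f" records vs 6 "m" records.
data = [{"gender": "f"}] * 2 + [{"gender": "m"}] * 6
sample = oversample(data, "gender")
# After oversampling, both groups contribute 6 records each.
```

Undersampling is the mirror image: trim each group down to the size of the smallest one instead of topping the smaller ones up.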
101
Q

What is the purpose of adversarial debiasing?

A

Training the model to resist learning biased associations

An adversarial model detects and removes biased patterns.

102
Q

What do regularisation methods do in the context of bias?

A

Penalise the model for learning biased associations

This can discourage associations between certain professions and specific genders.

103
Q

What are modified loss functions used for?

A

Adjusting the loss function to account for fairness

This ensures the model minimises bias while performing well on its primary tasks.

104
Q

What is hard debiasing?

A

Modifying word vectors directly to remove bias

For example, adjusting terms like 'doctor' and 'nurse' to be gender-neutral.
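Hard debiasing is commonly implemented by projecting out the component of each word vector that lies along an identified bias direction. The sketch below uses invented 2-D vectors; real embeddings would come from a trained model, and the bias direction would typically be estimated from pairs such as 'he' − 'she'.

```python
import numpy as np

def hard_debias(vec, bias_dir):
    """Remove the component of a word vector along a bias direction."""
    unit = bias_dir / np.linalg.norm(bias_dir)
    return vec - np.dot(vec, unit) * unit

# Toy values for illustration only (assumptions, not real embeddings):
gender_dir = np.array([1.0, 0.0])   # assumed "he − she" direction
doctor = np.array([0.4, 0.8])       # hypothetical embedding for 'doctor'

debiased = hard_debias(doctor, gender_dir)
# The debiased vector has zero component along the gender direction.
```

After this projection, 'doctor' carries no information along the gender axis, which is exactly the sense in which hard debiasing makes a term gender-neutral; the card on soft debiasing describes the gentler alternative that preserves more of the original geometry.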
105
Q

What does soft debiasing aim to achieve?

A

Reducing bias while maintaining the original relationships between words

It modifies word vectors slightly to preserve semantic relationships.

106
Q

What are equalising pairs in debiasing?

A

Adjusting pairs of words to ensure similar representations

This is often used in combination with hard or soft debiasing.

107
Q

What are potential challenges of debiasing techniques?

A
  • Loss of information
  • Introduction of new biases
  • Complexity and interpretability
  • Ethical dilemmas

These challenges can affect the effectiveness and transparency of AI systems.

108
Q

True or false: The Word2Vec model exhibited strong gender biases.

A

TRUE

It associated 'man' with 'doctor' and 'woman' with 'nurse'.
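The kind of association reported for Word2Vec can be illustrated with cosine similarity. The 2-D vectors below are invented to mimic the reported bias, not taken from a real model; in practice you would load trained embeddings and compare the same similarities.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 2-D embeddings, constructed to exhibit the reported bias.
vecs = {
    "man":    np.array([1.0, 0.1]),
    "woman":  np.array([0.1, 1.0]),
    "doctor": np.array([0.9, 0.2]),
    "nurse":  np.array([0.2, 0.9]),
}

# In a biased space, 'doctor' sits closer to 'man' and 'nurse' to 'woman'.
assert cosine(vecs["doctor"], vecs["man"]) > cosine(vecs["doctor"], vecs["woman"])
assert cosine(vecs["nurse"], vecs["woman"]) > cosine(vecs["nurse"], vecs["man"])
```

Measuring these similarity gaps before and after applying a debiasing technique is one simple way to check whether the intervention actually reduced the association.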
109
Q

What issue did a major tech company face with its machine translation system?

A

Homogenisation of language understanding

This led to mistranslations and misinterpretations, especially for less commonly spoken languages.

110
Q

What privacy concern arose from using embeddings trained on social media data?

A

Encoded patterns could be traced back to individual users

This raised serious privacy concerns and highlighted the need for ethical reviews.

111
Q

What does maintaining ethical awareness in AI systems involve?

A

Understanding both the technical aspects and the societal impacts of text embeddings

It includes implementing debiasing techniques and promoting transparency.

112
Q

What is meant by fostering an ethical culture within organisations?

A

Encouraging open dialogue about ethical concerns and providing ongoing ethics training

Involving diverse stakeholders in the development process is also crucial.

113
Q

The impact of sharing common text embedding matrices extends to fundamental ethical issues related to __________.

A

Bias, cultural diversity, and privacy

AI practitioners must navigate these challenges thoughtfully.