Authors Flashcards

(25 cards)

1
Q

Matthias

A

Argues for the Responsibility Gap: nobody has enough control over an autonomous machine’s actions to assume responsibility for them, because control is a necessary condition for responsibility.

Limitation: inherent to technology

2
Q

Königs

A

Focuses on moral responsibility (blame/praise). Argues that AI autonomy does not excuse human negligence or recklessness (Negligent Engineer case). Distinguishes the gap from the Problem of Many Hands.

Objection: against the universality of the responsibility gap

3
Q

Robert Sparrow

A

Argues against autonomous weapons because they violate the required principle of Responsibility in warfare, which is necessary to treat enemies and civilians with appropriate respect.

Objection from Königs

4
Q

Tonkens

A

Presents the Dilemma for Kantian AMAs: building them without free will means they aren’t morally responsible; but if they are autonomous, Kantian ethics forbids their creation because it treats them merely as a means to the creator’s ends.

Limitation: internal philosophical dilemma

5
Q

Himma

A

Argues that Consciousness is a necessary condition for moral agency, because agency requires the capacity for free choice/deliberation and accountability (which requires pleasant/unpleasant mental states for praise/blame).

Limitation: practical challenge for development

6
Q

Goldstein & Kirk-Giannini (G&K)

A

Argue that Artificial Language Agents (LLMs + planning/memory) plausibly have wellbeing (moral status). They argue consciousness is not required for wellbeing based on Desire-Satisfaction and Objective List theories.

Objection: the philosophical counter-argument
Limitation: defining the scope of the theory

7
Q

Aristotle

A

Founder of Virtue Ethics, focused on cultivating moral character (virtues), which are defined by the Doctrine of the Mean (balance between vices), to achieve Eudaimonia (human flourishing).

Limitation: internal philosophical flaw

8
Q

Shannon Vallor

A

Applies Virtue Ethics to social media, arguing technology reshapes character development by challenging communicative virtues like Patience, Honesty, and Empathy.

Limitation: negative effect of technology design

9
Q

James Rachels

A

Defines the Minimum Conception of Morality as the effort to guide conduct by reason (doing what is best supported by arguments) while giving equal weight to the interests of each individual affected.

10
Q

Singer

A

Advocates Preference Utilitarianism (maximizing preference satisfaction). Requires moral claims to be justifiable from a universal point of view. Opposes ethical relativism because it leads to moral infallibility and prohibits moral progress.

Objection from Nolt

11
Q

Nolt

A

Favors Objective Welfare Utilitarianism (using an objective list of goods like health and knowledge) over Hedonistic or Preference utilitarianism. Argues we should integrate both deontological and consequentialist thinking.

Limitation: Internal philosophical flaw of the theory

12
Q

Aylsworth & Castro (A&C)

A

Argue that undergraduate students ought not use chatbots to write papers. This is a Kantian moral duty to cultivate their humanity/autonomy (the capacity to rationally choose ends), which writing papers uniquely facilitates.

Objection: address and dismiss common objections
Limitation: scope of the argument

13
Q

Sunstein

A

Argues that algorithms are beneficial because they overcome human cognitive biases (e.g., affinity bias) and, lacking intentions, avoid disparate-treatment discrimination.

Limitation: inherent to using real-world data

14
Q

Hedden

A

Argues that most algorithmic fairness criteria are not necessary for fairness; only Calibration Within Groups (accuracy) is plausible. He concludes that unfairness typically lies in societal background conditions, not the algorithm itself.

Limitation: incompleteness of the fairness criteria/scope of moral critique

15
Q

Lazar

A

Focuses on Algorithmic Intermediaries (like social media/LLMs) that perform pre-emptive governance (shaping options, not making laws). This power requires authority derived from democratic authorization to maintain democratic ideals.

Limitation: lack of political legitimacy/constraints

16
Q

Fabre

A

Justifies mass surveillance based on the duty to detect threats early to protect potential victims and responders. She argues the strongest objection against surveillance is based on fairness, not privacy.

Objection: against the privacy critique
Limitation: on the strength of the privacy objection

17
Q

Nguyen (Echo Chambers)

A

Distinguishes between Epistemic Bubbles (omission of voices) and Echo Chambers (active discrediting of outsiders). Echo chambers manipulate whom members trust. Escaping requires a social-epistemic reboot (suspending beliefs about whom one trusts).

Limitation: difficulty of the proposed solution

18
Q

Hicks, Humphries, & Slater

A

Argue that ChatGPT is defined by its indifference to the truth; thus, its output is bullshit (neither lying nor hallucinating). ChatGPT is a “soft bullshitter” and a “bullshit machine”.

Objection: against common terminology

19
Q

Danaher

A

Focuses on the non-economic consequences of automation. The core risk of technological unemployment is that it will sever the connection between human activity and meaningful outcomes in the world (e.g., scientific advancement), leading to an “impoverished form of existence”.

Limitation: unresolved future risks

20
Q

Hardin

A

Articulated the Tragedy of the Commons: individual self-interest rationally leads to the collective ruin of a shared, subtractable resource, making everyone worse off.

Objection from Nolt

21
Q

Johnson

A

Argues that in a Tragedy of the Commons (where individual actions are harmless), there is no ethical requirement for individuals to voluntarily reduce their use unilaterally. The moral obligation is instead to work for a collective agreement (like laws).

Objection: Dismissal of a counter-analogy
Limitation: controversial result of his theory

22
Q

Hsu

A

Argues that capitalism will solve environmental problems, but only if it is reformed by government action to compel the internalization of environmental costs (e.g., taxing pollution).

Objection: against the rival solution (socialism)

23
Q

Bellaby

A

Argues that political hacking (unauthorized access for political ends) can be morally justified as a form of self-defense or defense of others’ vital interests when the state fails to provide adequate protection.

Objection from Fabre

24
Q

Arvan

A

Argues that reliable AI alignment is impossible: no engineering solution can exist, because alignment requires reliable interpretability of what Large Language Models (LLMs) are learning, and this is empirically impossible. Since reliable alignment is unsolvable, we must never give LLMs the kind of power we wouldn’t trust humans with.

25
Q

Dung

A

Argues for Near-Term Human Disempowerment Through AI, using a 6-premise argument: humanity will be permanently disempowered by 2100. Permanent disempowerment means humanity cannot decide its own future (extinction or a subordinate relationship to AI); extinction is the most likely form.

Limitation: open questions about whether we can reduce the probability of human disempowerment and whether AI will in fact try to permanently disempower humanity