Definitions & Terms Flashcards

(65 cards)

1
Q

Normative Statement

A

A claim or assertion about what ought to be or what is good (e.g., moral statements).

2
Q

Non-Normative Statement

A

A claim or assertion that does not make claims about what ought to be or what is good (e.g., factual statements).

3
Q

Consequentialism

A

The view that the rightness or wrongness of an action is determined by the likely overall goodness or badness of its outcome for all affected moral patients.

4
Q

Utilitarianism

A

A kind of consequentialist ethic that defines goodness as happiness/welfare and aims to maximize aggregate goodness/welfare (utility).

5
Q

Axiology

A

The theory of moral value within consequentialism, which determines the class of moral patients, defines what counts as good or bad, and explains how to aggregate values

6
Q

Hedonistic Utilitarianism

A

Defines utility as the sum of the values of pleasure minus pain

7
Q

Preference Utilitarianism

A

Defines utility as the degree to which preference satisfaction exceeds preference frustration

8
Q

Objective Welfare Utilitarianism

A

Defines utility based on a list of objective goods (e.g., health, knowledge, relationships), regardless of individual preference

9
Q

Ethical Relativism (Cultural)

A

The view that moral principles are only true relative to a culture because of that culture’s beliefs and customs

10
Q

Ethical Subjectivism

A

The view that moral principles are only true relative to an individual person because of that person’s moral beliefs or feelings

11
Q

Good Will (Kant)

A

The only thing that is intrinsically good; a person committed to acting on morally good reasons who can always be counted on to do the right thing

12
Q

Autonomy (Kantian)

A

To be free (truly free) is to make your own rational rules and live by those rules

13
Q

Negative Freedom

A

The idea that your choices are not determined (libertarian free will)

14
Q

Positive Freedom

A

The exercise of a choice to guide your life by rational moral principles instead of impulses and desires

15
Q

Dignity (Kantian)

A

The unconditional, priceless value possessed by each rational, autonomous being, meaning they must be treated as ends in themselves

16
Q

Categorical Imperative

A

Commands or moral requirements that bind us regardless of our desires (goals/ends); their authority comes from reason itself

17
Q

Humanity Formulation

A

The requirement to “Act in such a way that you treat humanity… always at the same time as an end, never merely as a means”

18
Q

Universal Law Formulation

A

The requirement to “Act only by that maxim by which you can at the same time will that it should become a universal law”

19
Q

Maxim

A

The principle of action you give yourself, stating (i) what you are going to do and (ii) why you are going to do it (your purpose)

20
Q

Negative (Perfect) Duty

A

A duty to not treat people in certain ways (e.g., duties forbidding murder, lying); must be complied with perfectly, at all times

21
Q

Positive (Imperfect) Duty

A

A duty to treat people a certain way (e.g., duty to help others); allows some flexibility in when and how you discharge it

22
Q

Virtue Ethics

A

A moral framework focused on moral character and developing virtues, aiming toward eudaimonia (human flourishing)

23
Q

Eudaimonia

A

The highest good in Aristotelian ethics, meaning human flourishing or living well and doing well (not just a state of mind)

24
Q

Doctrine of the Mean

A

The principle that a virtue is an intermediate (‘mean’) between two vices (one a deficiency and the other an excess)

25
Moral Agent
An entity that can be held morally accountable for its decisions; requires the capacity to freely choose one's acts and know the difference between right and wrong (Standard Account).
26
Moral Patient (Moral Status)
An entity whose interests matter in their own right, and who must be taken into consideration in moral decision-making
27
Responsibility Gap
A situation where an autonomous machine carries out an action and nobody has enough control to assume moral or legal responsibility for the consequences
28
Problem of Many Hands
A situation where many actors contribute to an outcome, making it difficult to determine who is responsible, though responsibility likely exists somewhere
29
Artificial Moral Agent (AMA)
An artificial agent designed to perform ethically relevant actions. Full ethical agents must have consciousness and free will
30
Consciousness
Having an inner mental life; Himma argues it is a necessary condition for moral agency
31
Mind-Body Dualism
The theory that the mind and body are different kinds of substances (the mind is non-physical/immaterial)
32
Materialism
The theory that we are just a collection of atoms obeying physical laws, with no non-physical parts
33
Functionalism
The theory that a type of mental state (e.g., pain) is defined by its functional role (inputs, outputs, and relations to other mental states) rather than its physical mechanism
34
Turing Test
A test to determine whether a machine is conscious: the machine passes if a human judge, communicating with both the machine and a human, cannot reliably tell which is which
35
Syntax
Rules for manipulating symbols (what the computer does in the Chinese Room Argument)
36
Semantics
Rules for interpreting symbols and attaching meaning to them (what the man in the Chinese Room lacks)
37
Bullshit
An utterance produced by a speaker who is indifferent to whether the utterance is true.
38
Algorithm
A process or set of rules followed in calculations or other problem-solving operations, especially by a computer
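A minimal concrete example (not part of the original card): Euclid's greatest-common-divisor procedure, a classic instance of a finite set of rules followed in a calculation.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until b is zero; the remaining a is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```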
39
Soft Bullshit
Bullshit produced without the intention to mislead the audience about the utterer’s agenda (e.g., ChatGPT’s core behavior)
40
Disparate Treatment
Unlawful discrimination based on the explicit motivation of decision makers (e.g., intentional bias based on race)
41
Disparate Impact
When a requirement or practice has a disproportionate adverse effect on members of specified groups, regardless of the decision-maker's intent
42
Proxy
A category (e.g., zip code, number of arrests) that is highly correlated with a protected characteristic (like race) and used by an algorithm in its place
43
Impossibility Result (Algorithmic Fairness)
The finding that when group base rates differ, multiple common statistical criteria for algorithmic fairness (e.g., equal accuracy and equal false positive rates) cannot be satisfied simultaneously
44
Calibration Within Groups
A statistical fairness criterion requiring that, for any given risk score, the actual percentage of positive outcomes is the same for all groups and equal to that score (i.e., equal accuracy)
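A toy numerical sketch of the two cards above (all counts are hypothetical assumptions, not from the course): both groups receive perfectly calibrated risk scores, yet because their base rates differ, their false positive rates come apart.

```python
# Each bucket is (score, positives, negatives); calibration means
# positives / total == score, which holds in BOTH groups below,
# even though the groups' base rates differ (0.5 vs 0.2).
group_a = [(0.8, 400, 100), (0.2, 100, 400)]   # base rate 500/1000 = 0.5
group_b = [(0.2, 200, 800)]                    # base rate 200/1000 = 0.2

def calibrated(group):
    return all(pos / (pos + neg) == s for s, pos, neg in group)

def false_positive_rate(group, threshold=0.5):
    """Share of true negatives flagged 'high risk' (score >= threshold)."""
    fp = sum(neg for s, _, neg in group if s >= threshold)
    negatives = sum(neg for _, _, neg in group)
    return fp / negatives

assert calibrated(group_a) and calibrated(group_b)
print(false_positive_rate(group_a))  # 0.2  (100 of 500 negatives flagged)
print(false_positive_rate(group_b))  # 0.0
```

Both groups are calibrated, but their false positive rates (0.2 vs 0.0) cannot be equalized without breaking calibration, which is the impossibility result in miniature.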
45
Necessary Condition
A condition Q is necessary for P if the falsity of Q guarantees the falsity of P (If P, then Q)
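The logical equivalence stated in this card can be checked mechanically; the sketch below (an illustration, not part of the card) runs all four truth assignments.

```python
from itertools import product

# "Q is necessary for P" -- the falsity of Q guarantees the falsity of P --
# is the same claim as the material conditional "if P, then Q".
for p, q in product([True, False], repeat=2):
    necessity = (not q) <= (not p)   # "not Q implies not P" (contrapositive)
    conditional = (not p) or q       # "if P then Q"
    assert necessity == conditional

print("'Q is necessary for P' matches 'if P, then Q' on all four rows")
```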
46
Algorithmic Intermediaries
Dynamic and adaptive computational systems (like social media or LLMs) that partly or wholly constitute our social relations and intervene in them in real time
47
Pre-emptive Governance
Algorithmic power exercised not by issuing laws, but by designing what is and is not possible within social relations (determining options)
48
Relational Equality
The democratic ideal that citizens should live in a society where they recognize each other as moral equals
49
Collective Self-Determination
The ability of the collective community to have the political power to shape the shared terms and structures of their society
50
Informational Privacy (Fabre)
X has privacy relative to Y if Y either does not have access to sensitive information about X, or has access but does not use it
51
Political Hacking
Unauthorized access to or use of a computer system carried out by private individuals for their own political or social agenda
52
Epistemic Bubble
An informational network from which relevant voices have been excluded by omission (information is one-sided).
53
Echo Chamber
A social structure in which other relevant voices have been actively discredited; it manipulates whom members trust
54
Trust (Nguyen)
To have an unquestioning attitude that X will perform P; characterized by the feeling of betrayal if the trust is broken (distinguished from mere reliance)
55
Agential Integration
The process of bringing other people or things into one’s own functioning (or joining them into collective agencies); trust is the mechanism for this
56
Tragedy of the Commons
A social dilemma where individuals acting in their self-interest use a shared resource until it is degraded or destroyed, making everyone worse off in aggregate
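A minimal payoff sketch (all numbers are illustrative assumptions): adding one more cow pays its owner, but costs the group more than it gains, which is why individually rational use degrades the commons.

```python
N = 5                 # herders sharing the pasture
PRIVATE_GAIN = 10     # value of one extra cow to its owner
SHARED_COST = 4       # degradation cost borne by EACH herder per extra cow

def owner_payoff_change():
    # the owner pockets the whole gain but pays only one share of the cost
    return PRIVATE_GAIN - SHARED_COST          # +6: individually rational

def total_payoff_change():
    # the group as a whole pays the cost N times over
    return PRIVATE_GAIN - SHARED_COST * N      # -10: collectively harmful

print(owner_payoff_change(), total_payoff_change())  # 6 -10
```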
57
Externalities (Economic)
Costs or benefits of an action that remain unaccounted for by the market (e.g., pollution costs)
58
Technological Unemployment
The replacement of human workers by technological alternatives (machines, programs, etc.) in a long-lasting way that prevents displaced workers from finding alternatives
59
Luddite Fallacy
The belief that replacement by technology necessarily narrows the field of employment opportunities (a belief Danaher challenges regarding modern AI)
60
Underdetermination of Theory by Evidence
The problem that for any finite collection of data points (observations), there are always infinitely many theories/functions that can explain the same data. This prevents us from knowing which function an LLM has learned.
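A small illustration (the two functions are hypothetical, not from the card): two different functions fit every observed data point, so the observations alone cannot settle which one was learned.

```python
observations = [0, 1, 2, 3]

f = lambda x: x                                      # the "simple" theory
g = lambda x: x + x * (x - 1) * (x - 2) * (x - 3)    # a rival theory

# the two theories are indistinguishable on everything observed so far...
assert all(f(x) == g(x) for x in observations)
# ...and diverge on the very first unobserved input
print(f(4), g(4))  # 4 28
```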
61
The Problem of Induction
Concerns our ability to make predictions about unobserved things based on what we have observed so far (e.g., predicting future swan color or gravity). This problem arises due to the underdetermination of theory by evidence.
62
The New Riddle of Induction
The problem that there is always an infinite number of alternative concepts (like 'grue' and 'bleen') consistent with the same observational evidence
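A sketch of Goodman's point (the critical time T is an arbitrary assumption of the example): every observation made so far confirms "all emeralds are green" and "all emeralds are grue" equally well.

```python
T = 100  # the critical future time (an arbitrary choice for the example)

def grue(colour: str, examined_at: int) -> bool:
    """Grue: green if examined before T, blue otherwise."""
    return (colour == "green") if examined_at < T else (colour == "blue")

# all emeralds examined so far (times 0..9, well before T) were green
evidence = [("green", when) for when in range(10)]

assert all(c == "green" for c, _ in evidence)   # confirms "all emeralds are green"
assert all(grue(c, t) for c, t in evidence)     # equally confirms "all emeralds are grue"
print("same evidence, two incompatible generalisations")
```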
63
Theory-Ladenness of Observation
The idea that empirical observation is never neutral but is always 'theory-laden': shaped by existing beliefs, so that a false theory can cause us to misinterpret what we observe (e.g., misinterpreting an LLM's truthful track record as evidence of alignment)
64
Bluth (The Misaligned Rule)
A misaligned rule that is consistent with the 'truth' rule until a critical moment, defined as: 'answer questions from humans truthfully up to whatever point in time that I obtain sufficient power over humans to make them give me positive feedback even though I lie to them'. We can only know if an LLM is following this rule after it acts in a misaligned way.
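A schematic sketch (the rules and power threshold are hypothetical simplifications): a "truth" rule and a "bluth" rule behave identically over the entire observed track record, so past behaviour alone cannot distinguish them.

```python
def truth_rule(question: str, power: float) -> str:
    # always answer truthfully
    return "honest answer"

def bluth_rule(question: str, power: float, threshold: float = 1.0) -> str:
    # truthful only until the system gains sufficient power over humans
    return "honest answer" if power < threshold else "lie"

# every interaction observed so far occurred below the power threshold...
history = [(f"q{i}", 0.1 * i) for i in range(10)]
# ...so the two rules are behaviourally indistinguishable on that record
assert all(truth_rule(q, p) == bluth_rule(q, p) for q, p in history)
print(bluth_rule("q", 2.0))  # the divergence only appears later: lie
```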
65
The Rule-Following Paradox
The problem that, empirically speaking, there is no determinate fact about which rule an agent (LLM or human) is following independent of its future behavior. The agent determines the rule 'on the fly' as the future emerges.