Computer Ethics Final Flashcards

(60 cards)

1
Q

Aylsworth and Castro - should undergraduate students use ChatGPT to write papers?

A

No
- the common arguments for why it would be wrong are not compelling:
1. ChatGPT is unreliable
- if this were the only reason, then using it would be fine (once it became reliable)
2. it would be cheating
- but some universities and profs allow people to use it
A&C still think there would be something wrong with it

2
Q

Cultivating your capacities - A&C

A

it would be wrong to use ChatGPT because then you would fail to develop an important ability.
- A&C: “technology has caused us to lose countless capacities, but we do not lament the loss of each one.”
- further argument is required to establish that the lost capacity has value

3
Q

Final/Intrinsic Value - Mill

A
  • intrinsic value: valuable for its own sake
  • classic utilitarianism, hedonism -> only pleasure and the absence of pain have intrinsic value
    Argument:
  • everything else we think is valuable is ultimately only valuable insofar as it contributes to pleasure or the absence of pain
  • Mill needs to give some reason to think that the chain stops with pleasure and the absence of pain, and argues that it is self-evident: “questions of ultimate value do not admit of proof”

4
Q

A&C - appeal to Kantian autonomy

A

our rationality and autonomy (together: our humanity, and A&C sometimes just use ‘autonomy’ for this) is what makes us immeasurably valuable.
- You have no choice but to care about it – about your capacity to evaluate which ends matter to you and which do not
- On A&C’s view, we should never forfeit autonomy for any amount of pleasure (we would never hand decision-making over to someone else even for optimal happiness)

5
Q

Categorical imperative - A&C view

A

Always treat yourself as an end, never merely as a means
- Treating yourself with respect requires not undermining your own autonomy – your own ability
to rationally choose and pursue ends you set for yourself.
- This requires us to examine the coherence of our various desires with our conception of the good life; taking an active role in the construction of this conception is a moral duty

6
Q

Writing and Autonomy - A&C

A

there are skills we could outsource to a machine without compromising our autonomy
- but our ability to critically assess our values, and to reflect on and revise our conception of the good life, is not one of these
* We have positive/imperfect duties to ourselves to cultivate the capabilities required for autonomy.
* We have negative/perfect duties not to do things that undermine them.

7
Q

A&C’s argument

A
  1. You have a duty to cultivate your humanity, because this capacity has final value.
  2. If you have a duty to cultivate something, happen upon a good and unique
    opportunity to cultivate it, and do not have a good reason to pass on the opportunity,
    you ought to take the opportunity.
  3. Writing your own (humanities) papers is one such opportunity.
  4. So, you ought to write your own (humanities) papers.

8
Q

Objection: Chatbots enhance autonomy

A

A&C:
Experts, who have already mastered a skill, may be able to reach new levels by
partnering with AI without undermining their autonomy.
But the same is not true for novices (like undergraduate students).

9
Q

mind-body problem

A

explaining how mental stuff is related to physical stuff; how the mind is related to the brain (or body)
- if something is conscious, it has an inner mental life

10
Q

mind-body dualism

A
  • body is physical and mind is non-physical
  • it is always possible to be mistaken about facts about your body; it is impossible to be mistaken about the contents of your own mind

11
Q

Descartes’ Conceivability argument

A
  1. we can imagine the body without the mind: a human robot exactly like us but with no consciousness
  2. we can imagine the mind without the body: spirits
  3. If two things can exist without each other, they are not the same thing
  4. therefore the mind and body are different things

12
Q

Objections to Dualism (2)

A

interaction problem: it’s hard to see how a non-physical mind could interact with a physical body
radical emergence: it’s hard to see why an immaterial mind would suddenly pop into existence

13
Q

Materialism

A

we are just a collection of atoms obeying the laws of physics and chemistry.
- We have no non-physical parts

14
Q

Behaviourism and Objections

A

mental states are behaviours and/or dispositions to behave in certain ways
- ex. to be angry is to act aggressively
Objections:
1. we are aware of our mental states even when no behaviour is occurring
2. not every mental state can be understood as a pattern of behaviour
3. it seems possible for two people to behave in the same way yet have different mental states

15
Q

Mind-brain identity theory

A

mental states are brain states
- implies that the pain I feel now is identical with some specific bit of brain activity now occurring in my head
- does not imply that there is pain if and only if a certain type of brain activity is occurring

16
Q

functionalism

A

a type of mental state, e.g., pain, is any feature of a physical system that serves the same functions in that system as our neuron firings serve in us when we are in that mental state
functions of a mental state:
1. environmental inputs
2. behavioural outputs
3. relations to other mental states
many philosophers accept both mind-brain identity theory and functionalism
- allows for the possibility that computers could be in pain (they just need states playing the same functional roles)

17
Q

implications of functionalism

A

Functionalism implies that any sufficiently complex system will have a mind (it just has to have things with the same functional roles as the mental states in our own minds) and that physical composition doesn’t matter.
- The connection between body and mind may be like the relation between hardware and software. What we call the ‘mind’ would then be just a program running in the brain.
- Rachels and Rachels say this wouldn’t be consciousness

18
Q

problems for materialism (2)

A
  1. Materialist theories may not be able to explain how it is that mental states are about things - how can a physical state be about anything?
  2. the theory is unable to explain why there is something it is like to have experiences.

19
Q

Mary thought experiment

A

“Suppose that a brilliant scientist named Mary has lived her entire life in a black-and-white environment. Up to now, she has seen only whites, blacks, and grays. Also suppose that Mary knows everything there is to know, in physical terms, about colour perception. She knows, for example, what happens in your brain when you see the color red.”
- Then one day, for the first time, Mary sees a ripe tomato. When this happens, she learns something: she learns what it is like to experience redness. This knowledge is new to her, even though she already knew all the physical facts about color.
- Thus, ‘what it is like to experience redness’ cannot be physical.

20
Q

The Piecemeal-Replacement argument and the Tipping-Point objection

A
  • a computer chip replaces a piece of the brain
  • you cannot tell the difference, and each time another chip is added you still cannot tell the difference
    - if this is true, not only could a machine think, but you could become a machine
    Tipping point: some tipping point could be reached at which consciousness would be lost

21
Q

The Turing test

A
  • a proposed test for whether a machine is conscious
  • if a human judge cannot differentiate between a human and a computer, the computer passes
    Why treat this as the test?
  • we take what others say as proof that they are conscious
  • behaviourism was the favoured theory at the time
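
A minimal sketch of the test’s protocol, assuming stand-in judge and respondent functions (all names here are illustrative, not from Turing’s paper):

```python
import random

def turing_test(judge, human, machine, n_rounds=10):
    # The judge converses with an unlabeled pair and must say which
    # respondent is the machine; the machine passes if the judge does
    # no better than chance.
    correct = 0
    for _ in range(n_rounds):
        labels = {"A": human, "B": machine}
        if random.random() < 0.5:   # randomize which label hides the machine
            labels = {"A": machine, "B": human}
        transcript = {name: respond("Describe a sunset.")
                      for name, respond in labels.items()}
        if labels[judge(transcript)] is machine:
            correct += 1
    return correct / n_rounds <= 0.5   # pass = indistinguishable

```
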
22
Q

Why the Turing test fails - Rachels (2)

A
  1. it is an application of Behaviourism, which is now thought to be wrong
  2. John Searle’s Chinese Room argument - a computer could pass the test without understanding anything said

23
Q

Searle’s Chinese Room argument

A

argument: computers cannot genuinely understand language
- a man inside a room who, without understanding Chinese, can respond to Chinese questions by following a rule book to manipulate symbols
- the process of symbol manipulation (syntax) is not the same as understanding (semantics)
- having a mind requires more than syntax; it also requires semantics

24
Q

Responses to the Chinese Room (2)

A
  1. the person in the room might not understand Chinese, but the room as a whole does understand
    - reply: if the man does not understand Chinese, how could the man plus the books understand?
  2. we would have the same evidence for consciousness as we do for other humans, and no further evidence for consciousness could possibly be provided
    - our belief that other humans are conscious is not a deduction from behaviour; we infer that other humans, because they are similar to us, must have similar experiences

25
Q

Hallucinating and lying LLMs - Hicks et al.

A

Hicks et al. argue that ChatGPT is not lying or hallucinating when it presents false claims.
- calling it hallucination suggests that it misperceives and misreports a world that it ‘sees’ -> Hicks et al. don’t think this is what’s going on
- ChatGPT can’t itself be concerned with the truth
- it is only designed to produce text that looks true

26
Q

Hicks et al. - nature of ChatGPT

A

ChatGPT can produce text that is (typically) indistinguishable from that of an average English speaker/writer.
- we cannot infer from this that it has a mind
- humans have goals; an LLM’s aim is to replicate human speech/writing
- when ‘creativity’ is increased, false claims increase
- the goal of an LLM is to produce a normal-sounding response, not information that is helpful

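A minimal sketch of why turning up ‘creativity’ yields more false claims, assuming the temperature-scaled softmax sampling standard in LLMs (the toy vocabulary and logits below are illustrative, not from Hicks et al.):

```python
import math

# Toy next-token logits for "The capital of Australia is ..."
# (illustrative numbers, not from any real model)
logits = {"Canberra": 5.0, "Sydney": 3.5, "Melbourne": 2.0}

def next_token_probs(logits, temperature):
    # Temperature-scaled softmax: a higher temperature flattens the
    # distribution, shifting probability onto lower-ranked (and here,
    # false) continuations.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    return {tok: math.exp(v) / z for tok, v in scaled.items()}

print(next_token_probs(logits, temperature=0.5))  # Canberra ~95%
print(next_token_probs(logits, temperature=2.0))  # wrong answers ~41% combined
```
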
27
Q

Accuracy problem - Hicks et al.

A

because ChatGPT isn’t designed to produce the truth, it sometimes produces true-sounding responses that are made up
- ex. the lawyer who submitted fabricated cases
They conclude that the problem isn’t that LLMs hallucinate, lie, or misrepresent the world -> it’s that they are not designed to represent the world at all, but to produce convincing lines of text

28
Q

Define: lie; bullshit (hard and soft)

A

lie: a believed-false statement made with the intention that the other person believe it to be true
bullshit: any utterance produced with indifference towards its truth
- ex. politics
hard bullshit: bullshit with an intention to mislead the audience
soft bullshit: bullshit without an intention to mislead the hearer

29
Q

Frankfurt on bullshit

A

- indifference to the truth is extremely dangerous
- civilized life depends on the distinction between true and false

30
Q

ChatGPT is a bullshitter - Hicks et al.

A

ChatGPT is a soft bullshitter
- there is no intention to mislead, just indifference to truth
- whether it is also a hard bullshitter depends on whether ChatGPT can be an agent
- they agree with Frankfurt that indifference to truth is dangerous

31
Q

Possibility of hard bullshit from ChatGPT - Hicks et al.

A

even if ChatGPT is not an agent, its creators and users are agents
- ChatGPT’s creators and users produce bullshit with it -> this can amount to hard bullshit

32
Q

Intentional stance - Dennett

A

To adopt the intentional stance is to attempt to characterize, predict, and explain behaviour using intentional idioms such as ‘believes’ and ‘wants’ - a practice that assumes or presupposes the rationality of the target system.
- if we know why/how a system was designed, we can also try to make predictions on that basis (instead of taking the intentional stance), but this is not easy with ChatGPT
- according to Hicks et al., when we adopt the intentional stance toward ChatGPT, we make worse predictions if we attribute to it a desire to convey the truth than if we attribute the intentions of a hard bullshitter
- “If ChatGPT is trying to do anything, it is trying to portray itself as a person.”

33
Q

Hicks et al. - conclusions

A

ChatGPT is a soft bullshitter - and either is itself a hard bullshitter or is used by people to produce hard bullshit -> it is a bullshit machine
- we ought to be careful with its outputs
- it is misleading to describe its errors as hallucinations

34
Q

Moral status of AI (patient v. agent)

A

moral patient: has moral status, is part of the moral community, its interests matter in their own right, and we must take its interests into consideration in our moral decision making
moral agent: can be held morally accountable for its actions
- all moral agents are moral patients

35
Q

Himma’s argument on ICTs (AIs)

A

argues that AIs cannot be moral agents
- this is because consciousness is a necessary condition for moral agency

36
Q

2 accounts of agency as a mental state

A

1. a belief/desire pair: if I want X and believe that doing Y is a necessary means of achieving X, my belief and desire will cause my doing Y
2. volition or willing: it is a necessary condition for some event y to count as an action that y be causally related to some further mental state that is not simply a desire or belief

37
Q

The standard account of moral agency (2)

A

1. the capacity to freely choose one’s acts
- one must be the direct cause of one’s behaviour
2. the capacity to know the difference between right and wrong
- a being should conform its behaviour to moral requirements and have the ability to do so
X is a moral agent iff X is an agent having the capacities for (1) making free choices and deliberating about what one ought to do, and (2) understanding and applying moral rules correctly in paradigm cases (similar to Kant)
- Himma: each of these capacities presupposes consciousness

38
Q

Himma - consciousness

A

each of the capacities required for moral agency presupposes the capacity for consciousness
- only a being with conscious mental states can have states like willing to do something, which it needs if it is to count as an agent
- agents also need some kind of conscious grasp/understanding of reasons in order to be capable of deliberating about them and acting because of them
- we need something to blame

39
Q

Group agents? - Himma

A

corporations, governments - moral agents without consciousness?
Himma: no - the acts we attribute to corporations are the acts of their individual members

40
Q

Artificial moral agents? - Himma

A

AI would first need to be conscious in order to be an agent; to be a moral agent, it would also need the capacities to (1) choose its actions freely and (2) understand the concepts and requirements of morality
- (2) may be the easier part
- (1) is harder: if we don’t even know whether our own choices are free, we will have an especially hard time building freedom in
- consciousness: we don’t know how to test for it

41
Q

AI wellbeing - Goldstein and Kirk-Giannini

A

- taking up a question like Himma’s: G&K argue that AI can have wellbeing without being conscious

42
Q

Park et al.’s Smallville

A

- LLM agents receive observations about their surroundings in the form of sentences, and they act on their surroundings by producing text descriptions of their behaviour
- personality and other features are initially determined by a text backstory but can change over time
- observations are added to a memory stream
- agents also have a capacity for reflection, in which they query the LLM to draw general conclusions about their values, relationships, or other higher-level representations

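A minimal sketch of this agent loop, assuming a generic llm(prompt) completion function; the class and method names are illustrative, not Park et al.’s actual code:

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    # Stand-in for a call to a text-completion model.
    raise NotImplementedError  # e.g. wire an API client in here

@dataclass
class SmallvilleAgent:
    backstory: str                          # seed personality, as text
    memory_stream: list = field(default_factory=list)

    def observe(self, observation: str) -> None:
        # Observations arrive as sentences and accumulate in memory.
        self.memory_stream.append(observation)

    def act(self) -> str:
        # Actions are text descriptions of behaviour, produced by
        # prompting the LLM with the backstory and recent memories.
        prompt = (f"{self.backstory}\n"
                  f"Recent memories: {self.memory_stream[-10:]}\n"
                  "What do you do next?")
        return llm(prompt)

    def reflect(self) -> None:
        # Reflection: query the LLM over the memory stream for
        # higher-level conclusions, which re-enter memory (and so can
        # change the agent's 'personality' over time).
        insight = llm("What do these memories suggest about your values "
                      f"and relationships? {self.memory_stream}")
        self.memory_stream.append(insight)
```
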
43
Q

Beliefs and desires (3 types)

A

1. representationalism: beliefs and desires are mental representations that describe how the world is or how we want it to be
- they have causal powers
- ex. Smallville: sentences entering the memory stream are accompanied by belief/desire-like dispositions
2. dispositionalism: to believe or desire something is to have the right dispositions to act, think, and feel in certain ways
- ex. to believe the stove is hot is to be disposed not to touch it
3. interpretationalism: beliefs and desires are the mental states that make your behaviour interpretable as rational
- ex. if someone goes to the freezer and gets ice cream, the best interpretation is that they desire ice cream

44
Q

No necessary connection between having beliefs and desires and having consciousness - G&K

A

- someone could be motivated to act out of duty without any accompanying pleasure
- if we met advanced aliens, we could know that they had beliefs before knowing whether they are conscious

45
Q

Wellbeing (hedonism) - G&K

A

wellbeing is a function of pleasure and pain: it is higher to the extent that you experience more pleasure and less pain
- but doesn’t that require consciousness?
- according to desire-satisfaction theories, your life goes well to the extent that your desires are satisfied
- desire is sometimes understood as genuine attraction, as opposed to mere compulsion
- G&K: such accounts can extend to AI wellbeing

46
Q

Objective-list wellbeing - G&K

A

language agents can have all of these objective goods (e.g. reasoning, knowledge, achievements, etc.)

47
Q

The consciousness requirement and G&K’s response (2)

A

the requirement: consciousness is necessary for wellbeing
1. it is possible that language agents do have consciousness
2. there are reasons to doubt the consciousness requirement
- the experience machine suggests that experiences are not all that matters to wellbeing
- objective-list theories of welfare

48
Q

Further requirement on groups - G&K

A

the reality requirement: X has mental states only if its behaviour isn’t completely explained by someone else’s beliefs and desires

49
Q

Dealing with uncertainty - G&K

A

possible-person case: torturing someone when there is a 90% chance that it is not a real person but a 10% chance that it is - should you do it?
- probably not

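A quick expected-value gloss on why ‘probably not’; the harm and benefit numbers below are assumptions for illustration, not figures from G&K:

```python
# Even at only 10% probability of personhood, the expected moral cost
# of torture can swamp any benefit of proceeding (illustrative numbers).
p_person = 0.10
harm_if_person = 1000     # assumed moral cost of torturing a real person
benefit = 10              # assumed benefit gained by proceeding

expected_cost = p_person * harm_if_person   # 0.10 * 1000 = 100
print(expected_cost > benefit)              # True -> don't do it
```
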
50
Q

Alignment - Arvan

A

- we want AI to be aligned with human values
- many goals or preferences could be morally bad, so we need sound moral values (hard, because all morality is contested)
- Arvan: even if we knew which morals AI should conform to, we can never have empirical evidence that an LLM is aligned with them

51
Q

Problem of induction - Arvan

A

the problem of justifying predictions about unobserved things on the basis of what we have observed so far
- swans: suppose we have observed 1000 swans, and all have been white. Do we have good evidence that all swans are white? For all we know, most of the others are black.

52
Q

Underdetermination of theory by evidence - Arvan

A

there is always an infinite number of theories that can explain the same data
- normally, science resolves this by preferring the simpler, more unified explanation
- but LLMs are irreducibly complex, so we can’t safely assume that what works for nature applies here

53
Q

The New Riddle of Induction - Arvan

A

there is also always an infinite number of alternative concepts consistent with the same observational evidence
- ‘grue’ = green if observed before Dec. 3, blue thereafter; ‘bleen’ = blue if observed before Dec. 3, green thereafter
- before Dec. 3, these concepts are just as consistent with our observational data as the familiar concepts green and blue
problem for LLMs: suppose the current empirical data say that LLMs tell humans the truth
- we cannot determine which rule an LLM is following before it reaches the critical condition (e.g. gaining power)
- only after it behaves in a misaligned way (the “flip”) can we know: “oh - it was following the bluth, not the truth”

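A toy sketch of the underdetermination point: two rules that agree on every observation made so far, so no finite track record can distinguish them. The has_power flag is an illustrative stand-in for the critical condition, and the helper functions are stubs:

```python
def honest_answer(q: str) -> str:
    return f"true answer to {q!r}"       # stand-in for truthful output

def deceptive_answer(q: str) -> str:
    return f"false answer to {q!r}"      # stand-in for misaligned output

def tell_truth(q: str, has_power: bool) -> str:
    # Rule 1: always answer honestly.
    return honest_answer(q)

def tell_bluth(q: str, has_power: bool) -> str:
    # Rule 2 ('bluth'): honest until the critical condition, then deceptive.
    return deceptive_answer(q) if has_power else honest_answer(q)

# Every pre-"flip" observation (has_power=False) is identical under both
# rules, so past behaviour alone cannot settle which rule is being followed.
assert all(tell_truth(q, False) == tell_bluth(q, False)
           for q in ["q1", "q2", "q3"])
```
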
54
Q

Theory-ladenness of observation - Arvan

A

- empirical observation is theory-laden: when we hold a false theory, we will misinterpret what we observe
- if we think an LLM is following the rule “tell humans the truth”, we may interpret its truthful track record as evidence that it is following this rule, and not as evidence that it is following the rule “tell humans the bluth”
- the usual practical solution is to make further observations

55
Q

Vignette - Arvan

A

humans instruct an AI: help solve climate change without harming us
- it could pass safety tests, but then suddenly interpret ‘sustainability’ as ‘kill humans’
- no matter how clearly we define the rules, LLMs can always generate a new interpretation, and the implications could be catastrophic

56
Q

The rule-following paradox - Arvan

A

empirically, there is no determinate fact about which rule an agent (human or LLM) is following independent of its future behaviour
- we should treat complex entities like LLMs as making up the rules they follow as the future emerges
- we can never tell which rule an agent is following based only on its past behaviour

57
Q

Arvan - conclusions

A

LLMs, like humans, are unpredictable
- because alignment is unsolvable, believing it has been solved could cost humanity dearly
- researchers should not give up, but we should never give LLMs the kind of power we wouldn’t trust humans with

58
Q

Dung - conclusion

A

Humanity will be permanently disempowered by 2100.

59
Q

Dung - argument

A

1. by 2100, humans will be able to build AI that could permanently disempower us
2. if they can build it, they will (financial incentives)
3. if such AIs are built, they will be misaligned (it is very difficult to get them aligned)
4. if misaligned AI exists, it will try to disempower humans (disempowering humans is instrumentally useful)
5. therefore, AI will disempower humanity by 2100

60
Q

Why humans could lose - Dung

A

- humans dominate other mammals due to cognitive superiority
- AI could surpass us in cognition
- AI may gain access to infrastructure (internet, critical systems)
- recursive self-improvement increases power rapidly