Finals Study Flashcards

(46 cards)

1
Q

What is Attribute Inference (Model Inversion)?

A

Attack that infers sensitive features of an input from model outputs; works best with overfitting and high-fidelity outputs (probabilities/logits). Example: a warfarin-dosage model can be inverted to infer a patient's genotype.

2
Q

What conditions make model inversion effective?

A

Overfitting, smooth decision boundaries, high-resolution outputs (probabilities/logits), strong correlation between features and predictions.

3
Q

What is Property Inference?

A

Attack that learns global properties of the training set (e.g., most users wear glasses, dataset contains celebrities). Does NOT recover individuals.

4
Q

How does Property Inference work?

A

Train shadow models on datasets with/without property P; meta-classifier predicts whether target model’s training data had P.
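
A toy sketch of this pipeline. All numbers and distributions here are invented: each shadow model is collapsed to a single weight statistic, and the meta-classifier to a threshold, whereas a real attack trains full shadow models and a classifier over their parameters.

```python
import random
random.seed(0)

# Hypothetical effect of property P: training on P-data nudges a summary
# statistic of the model's weights upward. Values are illustrative only.
def shadow_stat(has_property):
    return random.gauss(0.6 if has_property else 0.4, 0.05)

# 1. Train shadow models on datasets with and without P; record their stats.
stats = [(shadow_stat(p), p) for p in [True, False] * 200]

# 2. "Meta-classifier": a threshold halfway between the two class means.
p_mean = sum(s for s, p in stats if p) / 200
np_mean = sum(s for s, p in stats if not p) / 200
threshold = (p_mean + np_mean) / 2

# 3. Apply it to the target model's statistic.
def had_property(target_stat):
    return target_stat > threshold

print(had_property(0.63), had_property(0.38))  # True False
```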

5
Q

What is Model Extraction?

A

Attacker recreates a surrogate model f'(x) approximating the target f(x), often via repeated queries, confidence scores, or decision-boundary exploration.

6
Q

How is linear model extraction done?

A

Query n+1 points in general position in n-dimensional space, then solve the resulting linear system for the weights w and bias b.
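
A minimal sketch with n = 2, assuming the attacker can query raw outputs. The hidden weights and bias below are hypothetical; querying the origin and the unit vectors makes the linear system trivial to solve.

```python
# Hidden linear model f(x) = w·x + b (unknown to the attacker).
W_SECRET = [2.0, -3.0]
B_SECRET = 0.5

def query(x):
    """Black-box oracle: returns the target model's raw output."""
    return sum(w * xi for w, xi in zip(W_SECRET, x)) + B_SECRET

# n+1 = 3 queries: the origin yields b, each unit vector yields one weight.
b_hat = query([0.0, 0.0])
w_hat = [query([1.0, 0.0]) - b_hat, query([0.0, 1.0]) - b_hat]

print(w_hat, b_hat)  # [2.0, -3.0] 0.5 — exact recovery
```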

7
Q

What helps model extraction succeed?

A

Access to confidence scores/logits; smooth or simple model structure; deterministic output.

8
Q

What is Membership Inference?

A

Attack determining whether a specific record x was in the training set. Relies on overfitting and confidence differences.

9
Q

What is the Membership Inference pipeline?

A

Shadow models → attack model trained on their outputs → classify target model’s output as member/non-member.
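
A toy version of the pipeline. The confidence distributions are invented, and the attack model is collapsed to a learned threshold on a single confidence score; a real attack trains a classifier on full shadow-model output vectors.

```python
import random
random.seed(0)

# Hypothetical behaviour: shadow models are overfit, so they emit higher
# confidence on records that were in their own training set.
def shadow_confidence(is_member):
    return random.uniform(0.8, 1.0) if is_member else random.uniform(0.4, 0.9)

# 1. Collect labelled (confidence, membership) pairs from shadow models.
attack_data = [(shadow_confidence(m), m) for m in [True, False] * 500]

# 2. "Attack model": threshold at the midpoint of the two class means.
mem_mean = sum(c for c, m in attack_data if m) / 500
non_mean = sum(c for c, m in attack_data if not m) / 500
threshold = (mem_mean + non_mean) / 2

# 3. Classify the target model's confidence on a candidate record.
def infer_membership(target_confidence):
    return target_confidence > threshold

print(infer_membership(0.97), infer_membership(0.45))  # True False
```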

10
Q

What is Federated Learning?

A

Server sends model → clients train locally → send updates → server aggregates. Raw data stays local.
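
A minimal FedAvg-style sketch of one round. The linear "model", the local training rule, and the client data are toy stand-ins, not from the course notes; the point is only that raw data never leaves the clients.

```python
global_model = [0.0, 0.0]  # server's current weights

def local_train(model, data):
    """One client step: move each weight toward the local data's column mean."""
    lr = 0.5
    target = [sum(col) / len(data) for col in zip(*data)]
    return [w + lr * (t - w) for w, t in zip(model, target)]

clients = [[(1.0, 2.0)], [(3.0, 4.0)]]  # raw data stays on each client

# Server sends model -> clients train locally -> server averages the updates.
updates = [local_train(global_model, d) for d in clients]
global_model = [sum(ws) / len(updates) for ws in zip(*updates)]
print(global_model)  # [1.0, 1.5]
```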

11
Q

What are attack surfaces in FL?

A

Malicious server reading gradients, gradient leakage reconstructing inputs, malicious clients performing poisoning.

12
Q

What is DSSGD (Distributed Selective SGD)?

A

Clients share only their top-K largest-magnitude gradients per round. This reduces leakage from the unshared small gradients, but the selection pattern itself still leaks information.
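
The selective-sharing step can be sketched as follows (the gradient vector is invented):

```python
# Send only the K largest-magnitude gradient components, as (index, value) pairs.
def top_k_update(grads, k):
    idx = sorted(range(len(grads)), key=lambda i: abs(grads[i]), reverse=True)[:k]
    return {i: grads[i] for i in sorted(idx)}

grads = [0.01, -0.9, 0.05, 0.7, -0.02]
print(top_k_update(grads, 2))  # {1: -0.9, 3: 0.7}
# Note: which indices were chosen is itself metadata — this is why the
# selection pattern can still leak information about the local data.
```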

13
Q

When does DSSGD fail?

A

When sensitive attributes heavily influence the largest gradients that are still transmitted.

14
Q

What is Secure Aggregation?

A

Mechanism where users add pairwise noise (“antiparticles” +x/-x); noise cancels during aggregation so server sees only sum.
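
A scalar sketch of pairwise-mask cancellation with three clients; the update values and mask range are arbitrary. Each ordered pair (i < j) shares a mask r_ij: client i adds +r_ij, client j subtracts it, so every mask vanishes in the aggregate.

```python
import random
random.seed(1)

updates = {0: 1.0, 1: 2.0, 2: 3.0}  # each client's true (private) update
masks = {(i, j): random.uniform(-10, 10)
         for i in range(3) for j in range(i + 1, 3)}

def masked(i):
    """What client i actually sends: its update plus/minus shared masks."""
    out = updates[i]
    for (a, b), r in masks.items():
        if a == i: out += r
        if b == i: out -= r
    return out

# The server only ever sees the masked values and their sum.
total = sum(masked(i) for i in range(3))
print(round(total, 9))  # 6.0 — the true sum, with zero utility loss
```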

15
Q

Pros of Secure Aggregation?

A

Strong privacy against malicious server; zero utility loss (noise cancels).

16
Q

Cons of Secure Aggregation?

A

Protocol complexity; requires handling dropouts; involves peer-to-peer coordination.

17
Q

What is Differential Privacy in FL?

A

Adds noise to protect individual updates. Only local DP (on-device) protects against malicious server; server-side DP does not.
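
A sketch of the on-device (local DP) step, assuming a clip-then-add-Gaussian-noise scheme. The clip bound and sigma are illustrative; calibrating sigma to a formal (ε, δ) budget is out of scope here.

```python
import random
random.seed(0)

def local_dp(update, clip=1.0, sigma=0.5):
    """Clip the update's L2 norm to `clip`, then add Gaussian noise per coordinate."""
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, clip / norm)
    return [u * scale + random.gauss(0, sigma) for u in update]

noisy = local_dp([3.0, 4.0])  # norm 5 -> clipped to [0.6, 0.8], then noised
print(noisy)  # the server never sees the raw update
```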

18
Q

DP vs Secure Aggregation difference?

A

DP adds irreversible noise, trading utility for privacy; secure aggregation preserves utility exactly but requires a more complex protocol.

19
Q

What is Fairness Through Blindness?

A

Removing protected attributes (race/gender) from inputs. Fails because proxy variables still encode them.

20
Q

What is Statistical Parity?

A

Positive outcome probability should be equal across groups: P(positive|S) ≈ P(positive|S^c).
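
A toy computation of the parity gap |P(positive|S) − P(positive|S^c)| on invented (group, decision) pairs:

```python
# Each pair: (group label, decision), where 1 = positive outcome.
decisions = [("S", 1), ("S", 0), ("S", 1), ("S", 1),
             ("Sc", 1), ("Sc", 0), ("Sc", 0), ("Sc", 0)]

def positive_rate(group):
    rows = [d for g, d in decisions if g == group]
    return sum(rows) / len(rows)

gap = abs(positive_rate("S") - positive_rate("Sc"))
print(gap)  # 0.5 — far from statistical parity
```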

21
Q

Limitation of Statistical Parity?

A

Ignores correctness of predictions; can hide discriminatory error rates.

22
Q

What is QII (Quantitative Input Influence)?

A

Causal transparency method: replace a feature with random value from population; measure output change.
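
A toy QII sketch: hold the other features fixed, resample one feature from the population, and measure how often the decision flips. The scoring model (which unfairly uses gender) and the population are invented for illustration.

```python
import random
random.seed(0)

population = [{"gender": g, "income": i}
              for g in ("F", "M") for i in (20, 40, 60, 80)]

def model(x):  # hypothetical classifier with a built-in gender bonus
    return 1 if x["income"] + (10 if x["gender"] == "M" else 0) >= 50 else 0

def qii(x, feature, samples=1000):
    """Fraction of causal interventions on `feature` that flip the decision."""
    base = model(x)
    flips = 0
    for _ in range(samples):
        xi = dict(x)
        xi[feature] = random.choice(population)[feature]  # intervene
        flips += model(xi) != base
    return flips / samples

x = {"gender": "M", "income": 45}  # decided positive only via the gender bonus
print(qii(x, "gender"), qii(x, "income"))  # gender influence ~0.5, income ~0.25
```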

23
Q

What questions does QII answer?

A

“Did gender change the decision?” or “Which feature mattered most for this prediction?”

24
Q

What is memorization in GenAI?

A

LLMs store rare or duplicated sequences (k-eidetic memorization). Can reveal private data through prompting.

25
Q

What increases GenAI memorization?

A

Duplicated content, unique/small datasets, long-tail tokens with little variance.

26
Q

What are GenAI privacy attacks?

A

Training data extraction (prompting the model to reveal private strings); leakage through chat logs reused for training.

27
Q

What mitigations reduce GenAI leakage?

A

DP training, gradient clipping, deduplication, limiting rare-token retention.

28
Q

What does HIPAA protect?

A

Medical/health records held by covered entities (hospitals, insurers, providers).

29
Q

What does FERPA protect?

A

Student educational records; applies to schools and universities.

30
Q

What does COPPA regulate?

A

Data collection from children under 13; requires parental consent; triggered by child-directed content.

31
Q

What is GDPR known for?

A

EU opt-in model; broad definition of personal data (including inferences); strong user rights.

32
Q

What is CCPA known for?

A

California opt-out law; “Do Not Sell My Info”; consumer rights to know, delete, and restrict sharing.

33
Q

What are the four dimensions of privacy notices?

A

Timing, Channel, Modality, Control.

34
Q

What makes privacy notices effective?

A

Contextual timing and actually changing behavior (e.g., prompting users to adjust settings).

35
Q

Why did P3P fail?

A

Low adoption, complexity, mismatch between policy text and machine-readable rules, no enforcement.

36
Q

What do ML compliance systems do?

A

Use NLP and behavior logs to detect inconsistencies between policy claims and actual data practices.

37
Q

What are hash pointers?

A

A pointer plus a cryptographic hash of the pointed-to data; ensures a tamper-evident blockchain structure.

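
A runnable sketch of a hash-pointer chain: each block stores the hash of its predecessor, so editing any earlier block breaks every later pointer.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"data": "genesis", "prev": None}]
for payload in ("tx-a", "tx-b"):
    chain.append({"data": payload, "prev": block_hash(chain[-1])})

def verify(chain):
    """Check every block's stored pointer against its predecessor's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))         # True
chain[0]["data"] = "tamper"  # modify an early block...
print(verify(chain))         # False — tampering is evident
```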
38
Q

Why wait 6 confirmations in Bitcoin?

A

Heuristic for high confidence that a transaction is irreversible and not part of a fork.

39
Q

What is the privacy weakness of Bitcoin?

A

It is pseudonymous, not anonymous; transaction graph analysis links addresses.

40
Q

What breaks relationship anonymity in Bitcoin?

A

Combining multiple inputs in a single transaction reveals shared ownership.

41
Q

What is CoinJoin?

A

Mixing technique where many users combine inputs into one transaction; hides the sender→receiver mapping.

42
Q

When does CoinJoin fail?

A

If users later consolidate mixed outputs, collapsing the anonymity set.

43
Q

What is Website Fingerprinting (WF)?

A

Inferring visited sites from traffic metadata (timing, burst size, direction), even under Tor/VPN.

44
Q

What defeats WF?

A

Padding (bandwidth cost) and batching/delays (latency cost).

45
Q

What are acoustic side-channel attacks?

A

Using sound signatures (keyboard clicks, printer motors) to infer keystrokes or printed content.

46
Q

What are motion sensor inference attacks?

A

Using accelerometer/gyroscope/magnetometer data to infer driving routes, gestures, or user activities.