Chapter 1 Flashcards

(43 cards)

1
Q

Define a side channel.

A

An unintended information leakage path (e.g., timing, power, cache state) that reveals secrets without reading them directly.

2
Q

What makes caches powerful side channels?

A

Access latency depends on whether data is cached (hit) or not (miss). Attackers measure timing differences to infer victim memory accesses.

3
Q

Cache line size on x86_64 (typical)?

A

64 bytes. The lowest 6 address bits are the line offset.

4
Q

Page size (typical) and consequence for indexing?

A

4 KiB pages; the lowest 12 bits are the page offset (includes the 6-bit line offset). Many L1/L2 set-index bits lie within these 12 bits.

5
Q

Define cache associativity.

A

Number of lines per set. An N-way cache can hold N distinct lines per set before evicting.

6
Q

Number of sets formula.

A

(#cache bytes) / (line size × associativity).

7
Q

Which levels are per-core vs shared?

A

L1 and L2 are per-core; L3 (LLC) is shared among cores (often split into slices hashed by physical address).

8
Q

Inclusive vs non-inclusive caches.

A

Inclusive: L1⊆L2⊆L3; evicting from L3 invalidates copies in upper levels. Non-inclusive (or mostly-inclusive) has no strict subset relation.

9
Q

Do loads fill all cache levels?

A

Typically yes (on inclusive hierarchies): a miss fills L3→L2→L1. Later evictions can leave a line only in L2 or L3.

10
Q

Define Prime+Probe (high level).

A

Prime: fill a set. Victim runs. Probe: time reaccesses; slower lines indicate eviction by victim → reveals which set the victim used.

11
Q

Define Flush+Reload (high level).

A

Flush: invalidate a specific shared line (e.g., clflush). Victim runs. Reload: time access; fast=the victim reloaded it, slow=not touched.

12
Q

When do you use Flush+Reload?

A

When attacker and victim share physical pages (shared libraries, shared mmap). Gives high spatial resolution.

13
Q

When do you use Prime+Probe?

A

When no shared memory exists. You only need conflicting (same-set) addresses; works cross-process and cross-VM with care.

14
Q

How to time memory on x86_64?

A

Use `rdtsc`/`rdtscp` with serializing fences, e.g. `lfence; rdtsc` immediately before the load and `rdtscp; lfence` immediately after it.

15
Q

Why fences around rdtsc?

A

To stop out-of-order execution from moving the measured load (or the timestamp reads themselves) outside the timed window, which would corrupt the measurement.

16
Q

Hit vs miss timing (typical ballpark).

A

L1: ~4–5 cycles; L2: ~10–20; L3: ~30–60+; DRAM: ~100–300+. Measure on your CPU to set thresholds.

17
Q

How to empirically pick a hit/miss threshold?

A

Measure cold (after clflush) and warm (repeated access) latencies; set threshold ~ midpoint, or use distributions and cluster.

18
Q

Define conflicting addresses.

A

Different virtual/physical addresses mapping to the same cache set (and same slice for LLC), competing for associativity.

19
Q

How to generate same-set addresses for L1/L2?

A

Use identical page offsets across many pages (e.g., base + k*4096 + offset where offset is multiple of 64).

20
Q

Why does LLC (L3) need extra care?

A

LLC is physically indexed and sliced by a hash; set index uses bits above page offset. You need physical info or measurement-based grouping.

21
Q

Simple conflict test idea.

A

Warm A; hammer candidate B; time A. If A slows (miss) more often than baseline, A and B likely conflict (same set/slice). Repeat and vote.

22
Q

Eviction set definition.

A

A set of ≥ associativity addresses mapping to the same set (and slice) so that accessing them evicts any victim line from that set.

23
Q

Greedy eviction-set refinement.

A

Build a large candidate pool; remove one address at a time and test if the victim still gets evicted. Keep only necessary addresses until ≈ associativity.

24
Q

Why many samples?

A

Microarchitectural noise, OS interrupts, and prefetchers add variance. Aggregation (median/mean) yields reliable classification.

25
Q

Role of prefetchers & how to avoid them.

A

Sequential accesses may trigger prefetchers; access in pseudo-random order, use strides larger than a page, or use dependent pointers to reduce prefetching.

26
Q

Effect of hyperthreading (SMT).

A

The sibling thread shares core-private caches (L1/L2) and execution resources, increasing noise and potential cross-thread leakage.

27
Q

Virtual vs physical addressing for caches.

A

L1/L2 are often virtually indexed (set bits within the page offset) but physically tagged; the LLC is physically indexed and sliced—important for address mapping.

28
Q

‘Slice hashing’ in L3.

A

The LLC is split into slices (≈ one per core). A hash of physical-address bits selects the slice; conflicts must match both set and slice.

29
Q

How clflush works for attacks.

A

`clflush addr` invalidates the line from all levels (and other cores). Great for clean misses and for Flush+Reload.

30
Q

Flush+Reload minimal pseudo-code.

A

`clflush(p); victim(); t=timed_load(p); if (t < thr) victim_accessed`. A fast reload (below threshold) means the victim touched the line.

31
Q

Prime+Probe minimal pseudo-code.

A

`prime(set_addrs); victim(); for a in set_addrs: t=timed_load(a); if (t > thr) eviction_detected`.

32
Q

Why does matching the page offset help?

A

For L1/L2 the set-index bits often lie within the 12-bit page offset. Matching offsets → same set index regardless of page frame.

33
Q

How to discover associativity experimentally?

A

Increase the number of same-set candidates K until a probe is consistently evicted. The smallest K that causes eviction ≈ associativity.

34
Q

Covert vs side channel.

A

Covert: two cooperating parties communicate via the channel. Side: attacker infers information from an unaware victim.

35
Q

Statistics you should know.

A

Mean, median, variance; hypothesis testing or thresholding to separate hit vs miss distributions robustly.

36
Q

Minimal C timing helper (concept).

A

`t0=rdtsc(); volatile uint8_t v=*p; t1=rdtsc(); lat=t1-t0;` Use fences and volatile to avoid reordering/optimization.

37
Q

Why volatile on loads?

A

Prevents the compiler from optimizing the memory access away or caching it in a register; enforces an actual load.

38
Q

Software mitigations for cache side channels.

A

Reduce timer precision, add noise, avoid secret-dependent accesses, use constant-time code, partition memory, disable shared pages.

39
Q

Spectre/Meltdown relationship.

A

Transient execution loads secret-dependent data, leaving traces in microarchitectural state (e.g., caches); cache side channels (like Flush+Reload) then exfiltrate that data.

40
Q

Why shared libraries enable Flush+Reload.

A

Identical code/data pages are mapped read-only and shared across processes; attacker and victim observe the same physical line.

41
Q

How to validate an eviction set.

A

Prime candidates; access a known target; probe multiple times; expect a high miss rate. Cross-validate on different runs/addresses.

42
Q

Typical workflow for a cache side-channel attack lab.

A

1) Calibrate timers & thresholds. 2) Build/validate eviction set. 3) Synchronize with victim. 4) Collect many traces. 5) Analyze statistically.

43
Q

Common failure modes & fixes in cache side-channel attacks.

A

Noisy timings → pin threads & repeat; prefetch false positives → randomize order; wrong slice → enlarge pool; thresholds off → re-calibrate and histogram.