Memory Hierarchy Design Flashcards

(53 cards)

1
Q

High-end microprocessors have ___ on-chip cache

A

> 10 MB

2
Q

When a word is not found in the cache, a ____ occurs

A

miss

3
Q

Immediately update lower levels of hierarchy

A

Write-through

4
Q

Only update lower levels of hierarchy when an updated block is replaced

A

Write-back

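The write-through (card 3) and write-back (card 4) policies can be sketched with a toy one-block cache. This is an illustrative sketch only; the struct and function names are invented, and a real cache tracks a dirty bit per block:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy one-block cache illustrating the two write policies.
   All names here are invented for illustration. */
enum policy { WRITE_THROUGH, WRITE_BACK };

struct cache {
    int tag;        /* address currently cached (-1 = empty) */
    int data;
    bool dirty;     /* meaningful only for write-back */
    enum policy pol;
};

static int memory[16]; /* toy lower level of the hierarchy */

/* Write a value through the cache under the given policy. */
void cache_write(struct cache *c, int addr, int value) {
    c->tag = addr;
    c->data = value;
    if (c->pol == WRITE_THROUGH)
        memory[addr] = value;   /* lower level updated immediately */
    else
        c->dirty = true;        /* defer the update until eviction */
}

/* Evict the cached block: write-back flushes dirty data only now. */
void cache_evict(struct cache *c) {
    if (c->pol == WRITE_BACK && c->dirty)
        memory[c->tag] = c->data;
    c->tag = -1;
    c->dirty = false;
}
```

Under write-through the lower level is always current; under write-back it is stale until the updated block is replaced.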
5
Q

Fraction of cache accesses that result in a miss

A

Miss rate

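Card 5's definition as a formula, plus the standard average-memory-access-time (AMAT) companion formula. Helper names are invented for illustration:

```c
#include <assert.h>

/* Miss rate = misses / total accesses (card 5). */
double miss_rate(long misses, long accesses) {
    /* guard against divide-by-zero on an empty counter */
    return accesses ? (double)misses / (double)accesses : 0.0;
}

/* Standard companion formula:
   AMAT = hit time + miss rate * miss penalty. */
double amat(double hit_time, double mrate, double miss_penalty) {
    return hit_time + mrate * miss_penalty;
}
```

For example, a 1-cycle hit time, 5% miss rate, and 20-cycle miss penalty give an AMAT of about 2 cycles.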
6
Q

Causes of Misses: First reference to a block

A

Compulsory

7
Q

Causes of Misses: Blocks discarded and later retrieved

A

Capacity

8
Q

Causes of Misses: Program makes repeated references to multiple addresses from different blocks that map to the same location in the cache

A

Conflict

9
Q

True or False: Speculative and multithreaded processors may execute other instructions during a miss to reduce the performance impact of misses

A

True

10
Q

Six basic cache optimizations: It reduces compulsory misses

A

Larger block size

11
Q

Six basic cache optimizations: Increases hit time and power consumption

A

Larger total cache capacity to reduce miss rate

12
Q

Six basic cache optimizations: Reduces conflict misses

A

Higher Associativity

13
Q

Six basic cache optimizations: Reduces overall memory access time

A

Higher number of cache levels

14
Q

Six basic cache optimizations: It reduces miss penalty

A

Giving priority to read misses over writes

15
Q

Six basic cache optimizations: Reduces hit time

A

Avoiding address translation in cache indexing

16
Q

Performance metric concerned with cache

A

Latency

17
Q

Performance metric concerned with multiprocessors and I/O

A

Bandwidth

18
Q

Time between read request and when desired word arrives

A

Access Time

19
Q

Minimum time between unrelated requests to memory

A

Cycle Time

20
Q

It has low latency and is used for cache

A

SRAM memory

21
Q

Used for main memory

A

DRAM

22
Q

SRAM requires ______ to retain its bits and _ __________ per bit

A

low power; 6 transistors

23
Q

DRAM must be ____ after being read, and be periodically _____.

A

re-written; refreshed

24
Q

How many transistors per bit in DRAM?

A

1 (a single transistor plus a capacitor)

25
Q

[DRAM] Upper half of address: ______; Lower half of address: ________

A

Row Access Strobe (RAS); Column Access Strobe (CAS)

26
Q

According to him, memory capacity should grow linearly with processor speed

A

Amdahl

27
Q

It must be erased before being overwritten

A

NAND Flash

28
Q

Possibly 10X improvement in write performance and 2X improvement in read performance

A

Phase-Change/Memristor Memory

29
Q

True or False: Memory is susceptible to cosmic rays

A

True

30
Q

Detected and fixed by error-correcting codes; dynamic errors

A

Soft Errors

31
Q

Permanent errors; use spare rows to replace defective rows

A

Hard Errors

32
Q

A RAID-like error recovery technique

A

Chipkill

33
Q

Advanced Optimizations: Small and simple first-level caches and way prediction

A

Reduce hit time

34
Q

Advanced Optimizations: Pipelined caches, multibanked caches, non-blocking caches

A

Increase bandwidth

35
Q

Advanced Optimizations: Critical word first, merging write buffers

A

Reduce miss penalty

36
Q

Advanced Optimizations: Compiler optimizations

A

Reduce miss rate

37
Q

Advanced Optimizations: Hardware or compiler prefetching

A

Reduce miss penalty or miss rate via parallelism

38
Q

To improve hit time, predict the way to pre-set the mux

A

Way Prediction

39
Q

Used to improve bandwidth

A

Pipelined Caches

40
Q

Organize cache as independent banks to support simultaneous access

A

Multibanked Caches

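The bank mapping behind multibanked caches can be sketched in one line. Sequential interleaving, assumed here, is a common scheme: consecutive block addresses go to consecutive banks, so neighboring blocks can be accessed simultaneously. The function name is invented:

```c
#include <assert.h>

/* Sequential interleaving (an assumed, common scheme): spread
   consecutive block addresses round-robin across the banks. */
unsigned bank_of(unsigned block_addr, unsigned n_banks) {
    return block_addr % n_banks;
}
```

With 4 banks, blocks 0..3 land in banks 0..3 and block 4 wraps back to bank 0.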
41
Q

Allow hits before previous misses complete

A

Nonblocking Caches

42
Q

Request the missed word from memory first, then send it to the processor as soon as it arrives

A

Critical word first

43
Q

Request words in normal order, but send the missed word to the processor as soon as it arrives

A

Early Restart

44
Q

Swap nested loops to access memory in sequential order

A

Loop Interchange

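Loop interchange in a C sketch: summing a matrix column-by-column strides N doubles between accesses in C's row-major layout, while the interchanged order is stride-1. Function names and the size N are invented for illustration:

```c
#include <assert.h>

#define N 64

/* Poor locality: the inner loop walks a column, striding
   N doubles between accesses in row-major storage. */
double sum_column_major(double x[N][N]) {
    double sum = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += x[i][j];
    return sum;
}

/* After loop interchange: the inner loop walks a row,
   so accesses are sequential in memory (stride 1). */
double sum_row_major(double x[N][N]) {
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += x[i][j];
    return sum;
}
```

Both versions compute the same sum; only the memory access order (and hence the miss rate) changes.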
45
Q

Instead of accessing entire rows or columns, subdivide matrices into blocks

A

Blocking

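Blocking in a C sketch: a tiled matrix multiply that works on B x B submatrices so each tile is reused while it is cache-resident. The size N, tile size B, and function name are invented for illustration:

```c
#include <assert.h>

#define N 32
#define B 8   /* tile size: a few B x B tiles should fit in cache */

/* Blocked (tiled) matrix multiply: compute C = A*B one
   B x B tile at a time, reusing each resident tile fully. */
void matmul_blocked(double a[N][N], double b[N][N], double c[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            c[i][j] = 0.0;
    for (int ii = 0; ii < N; ii += B)
        for (int jj = 0; jj < N; jj += B)
            for (int kk = 0; kk < N; kk += B)
                /* multiply the (ii,kk) tile of A by the (kk,jj) tile of B */
                for (int i = ii; i < ii + B; i++)
                    for (int j = jj; j < jj + B; j++)
                        for (int k = kk; k < kk + B; k++)
                            c[i][j] += a[i][k] * b[k][j];
}
```

The result is identical to the naive triple loop; the tiling only reorders the arithmetic so that capacity misses on the source matrices are reduced.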
46
Q

Fetch two blocks on miss

A

Hardware Prefetching

47
Q

Insert prefetch instructions before data is needed

A

Compiler Prefetching

48
Q

Loads data into register

A

Register Prefetch

49
Q

Loads data into cache

A

Cache Prefetch

50
Q

Mold tag and data together, use direct mapped

A

Alloy Cache

51
Q

Keeps processes in their own memory space

A

Protection via Virtual Memory

52
Q

Supports isolation and security; sharing a computer among many unrelated users

A

Virtual Machines

53