Week 12 Flashcards

5.4-5.5 (35 cards)

1
Q

fully associative cache

A
  • A cache structure in which a block can be placed in any location in the cache
  • To find a given block in a fully associative cache, all the entries in the cache must be searched, because a block can be placed in any one of them
2
Q

set associative cache

A
  • A cache that has a fixed number of locations (at least two) where each block can be placed
  • Each block in the memory maps to a unique set in the cache given by the index field, and a block can be placed in any element of that set
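As a concrete sketch of that index-field mapping (all sizes here are made-up toy values, not from the cards):

```python
# Toy model: map memory block addresses to sets in a 2-way
# set associative cache (sizes are illustrative only).
NUM_BLOCKS = 8                           # total block storage locations
ASSOCIATIVITY = 2                        # blocks per set (2-way)
NUM_SETS = NUM_BLOCKS // ASSOCIATIVITY   # 4 sets

def set_index(block_address: int) -> int:
    """Each memory block maps to exactly one set: address mod number of sets."""
    return block_address % NUM_SETS

# Block addresses 5 and 13 map to the same set (5 % 4 == 13 % 4 == 1),
# but both can be resident, since a set holds ASSOCIATIVITY blocks.
print(set_index(5), set_index(13))  # 1 1
```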
3
Q

least recently used (LRU)

A
  • a replacement scheme in which the block replaced is the one that has been unused for the longest time
  • implemented by keeping track of when each element in a set was used relative to the other elements in the set
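A minimal sketch of that bookkeeping for a single set (hypothetical code, using Python's `OrderedDict` to model the recency ordering):

```python
from collections import OrderedDict

# Minimal sketch of LRU replacement for one cache set (illustrative only).
class LRUSet:
    def __init__(self, ways: int):
        self.ways = ways
        self.blocks = OrderedDict()  # tag -> data; order = recency (oldest first)

    def access(self, tag):
        """Return True on a hit; on a miss, insert the tag, evicting the LRU block."""
        if tag in self.blocks:
            self.blocks.move_to_end(tag)     # mark as most recently used
            return True
        if len(self.blocks) >= self.ways:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[tag] = None
        return False

s = LRUSet(ways=2)
s.access('A'); s.access('B'); s.access('A')  # A is now most recently used
s.access('C')                                # evicts B, the LRU block
print('B' in s.blocks, 'A' in s.blocks)      # False True
```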
4
Q

average memory access time (AMAT) definition

A
  • a way to evaluate alternative cache designs that captures the fact that the time to access data on both hits and misses affects performance
5
Q

If the clock rate is increased without changing the memory system, the fraction of execution time due to cache misses _____ relative to total execution time

A
  • increases
  • can actually slow down performance
6
Q

set associative

A
  • the number of blocks in each set of the cache
  • ex. two-way set associative means two blocks per set in the cache
  • associativity * number of sets = size of cache in blocks
7
Q

increasing the set associativity usually ___ the miss rate but ___ the hit time

A
  • decreases, increases
  • as associativity increases there are more items per set, so it takes more time to find a specific item
8
Q

multilevel cache

A
  • A memory hierarchy with multiple levels of caches, rather than just a cache and main memory
  • the primary cache of a multilevel cache is often smaller than a single-level cache would be
  • the secondary cache is much larger than in a single-level design, since its access time is less critical
9
Q

The second-level cache in a multi-level cache is typically used to reduce the multi-level cache’s _____.

A
  • miss penalty
  • The goal of a second-level cache is to reduce the number of accesses to memory, which incur large miss penalties
10
Q

global miss rate

A

The fraction of references that miss in all levels of a multilevel cache

11
Q

local miss rate

A
  • the fraction of references to one level of a cache that miss
  • used in multilevel hierarchies
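For example (rates invented for illustration), the global miss rate of a two-level hierarchy is the product of the local miss rates:

```python
# Local vs. global miss rate for a two-level cache (made-up rates).
l1_local_miss_rate = 0.05  # 5% of all references miss in L1
l2_local_miss_rate = 0.20  # 20% of the references reaching L2 miss there

# Global miss rate: fraction of ALL references that miss in every level.
global_miss_rate = l1_local_miss_rate * l2_local_miss_rate
print(round(global_miss_rate, 4))  # 0.01, i.e. 1% of references go to memory
```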
12
Q

First-level caches are more concerned about _____.

Second-level caches are more concerned about _____.

A
  • hit time, miss rate
  • primary is smaller than secondary so focuses on being fast
  • the secondary is larger and its access time is less critical, so it focuses on reducing misses to main memory
13
Q

average memory access time (AMAT) formula

A
  • AMAT = hit rate * hit time + (1 - hit rate) * miss time
    OR
  • AMAT = hit time + (miss rate * miss penalty)
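A worked example with invented numbers, showing that the two forms agree when miss time = hit time + miss penalty:

```python
# AMAT with made-up parameters.
hit_time = 1         # cycles on a hit
miss_rate = 0.05     # 5% of accesses miss
miss_penalty = 100   # extra cycles on a miss

# Second form: hit time is paid on every access, penalty only on misses.
amat = hit_time + miss_rate * miss_penalty
print(amat)  # 6.0 cycles per access on average

# First form gives the same value, taking miss time = hit time + miss penalty.
miss_time = hit_time + miss_penalty
amat_alt = (1 - miss_rate) * hit_time + miss_rate * miss_time
```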
14
Q

loads/stores move between the ___ and the __ in the memory hierarchy

A

register file, cache

15
Q

the operating system moves data between the ___ and the ___ in the memory hierarchy

A

memory (DRAM), disk

16
Q

true structure of memory hierarchy

A
  • the upper level is two separate memories; the lower level is a unified memory
  • the split mirrors the datapath: instruction fetch on one side, loads/stores through the register file on the other
  • the first part is the instruction memory (IF stage) cache and the second part is the data memory (MEM stage) cache
17
Q

three rules of memory hierarchy

A
  • Lowest level in memory hierarchy stores all information long-term
  • A level above is a subset of information from the level below where the level above cannot satisfy a processor memory access
  • When processor stores (writes) information into the top level, that change must eventually propagate down the levels to maintain rule 1
18
Q

reduce compulsory misses (2 ways)

A
  • static: make the cache block size larger, reducing the number of blocks used by a program
  • dynamic: prefetch the next block while the processor is busy making accesses that hit within the current block
19
Q

reduce conflict misses (1 way)

A
  • Give each set k block storage locations
  • Zero conflict misses until the number of in-demand blocks at an index exceeds k
20
Q

reduce capacity misses (2 ways)

A
  • for hardware, increase the cache capacity
  • for software, reduce the size of the data structures, or restructure the program to improve temporal and/or spatial locality
21
Q

how is a block found?

A
  • Use a content-addressable memory (CAM) strategy
  • Use the tag as a key and have a comparator circuit for every block storage location in the set, so all are searched simultaneously
22
Q

where can a block be placed?

A

In a direct mapped cache:
  • in one specific block, determined by modular arithmetic
In a set associative cache:
  • in any one of the k block storage locations provided by the set

23
Q

least recently used (LRU)

A
  • on each access to a set, update the blocks' usage-recency ranking
  • works because programs often keep the same locality for a while and then shift
24
Q

not recently used (NRU)

A
  • an approximation of LRU that is simple and efficient
  • add a “recently used” bit to each cache block
  • clear all recently used bits periodically, then set a block's recently used bit each time it is accessed
  • when a block must be replaced, choose the block with the lowest Keep Priority value
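A sketch of how those bits could drive replacement for one set (hypothetical code; the Keep Priority table itself is not reproduced in the cards, so this simply prefers blocks whose recently-used bit is clear):

```python
# Sketch of not-recently-used (NRU) replacement for one set (illustrative).
class NRUSet:
    def __init__(self, ways: int):
        self.tags = [None] * ways
        self.ru_bits = [0] * ways   # "recently used" bit per block

    def clear_bits(self):
        """Called periodically: forget old usage history."""
        self.ru_bits = [0] * len(self.ru_bits)

    def access(self, tag):
        """Return True on a hit; on a miss, replace a block whose RU bit is clear."""
        if tag in self.tags:
            self.ru_bits[self.tags.index(tag)] = 1
            return True
        # Prefer an empty or not-recently-used slot; fall back to slot 0.
        candidates = [i for i, t in enumerate(self.tags)
                      if t is None or self.ru_bits[i] == 0]
        victim = candidates[0] if candidates else 0
        self.tags[victim] = tag
        self.ru_bits[victim] = 1
        return False

s = NRUSet(ways=2)
s.access('A'); s.access('B')  # fill the set
s.clear_bits()                # periodic clearing
s.access('A')                 # only A's RU bit is set again
s.access('C')                 # B (RU bit clear) is the one replaced
print(s.tags)  # ['A', 'C']
```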
25
Q

All replacement algorithms can work well and ______ at times

A
  • fail miserably
26
Q

fully associative cache (def and 3 modifications)

A
  • the conflict miss category is eliminated, since any block can be placed in any block storage location
  • the Set # field is eliminated
  • the cache is searched for a block using only the tag field
  • a comparator is needed for every tag in the cache
27
Q

when to use a fully associative cache

A
  • when conflict misses dominate AND the miss penalty is enormous
28
Q

what does it mean to procrastinate to go faster?

A
  • the write-back policy for stores: a block is written down one level only when a dirty block must be replaced
  • by waiting to write to main memory, the CPU sees writes complete at the speed of the L1 cache
  • the danger is that if a write back has not happened before the application crashes, the data is gone
29
Q

dirty bit

A
  • used for write backs
  • in the upper level, asserts that a block is no longer a copy of the block at a lower level in the memory hierarchy
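A toy sketch of the dirty bit driving a deferred write-back (the class, names, and the `lower_level` dict are invented for illustration):

```python
# Sketch of write-back with a dirty bit (illustrative, single block).
class CacheBlock:
    def __init__(self, tag, data):
        self.tag = tag
        self.data = data
        self.dirty = False  # block still matches the copy one level down

    def write(self, data):
        self.data = data
        self.dirty = True   # upper-level copy now differs from lower level

def replace(block, new_tag, new_data, lower_level: dict):
    """Only a dirty block is written down one level before being replaced."""
    if block.dirty:
        lower_level[block.tag] = block.data  # the deferred write-back
    return CacheBlock(new_tag, new_data)

mem = {}
b = CacheBlock(tag=0x1A, data=41)
b.write(42)                   # fast: no memory traffic yet
b = replace(b, 0x2B, 7, mem)  # block was dirty, so 42 is written back now
print(mem)  # {26: 42}
```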
30
Q

write-back and NRU replacement

A
  • revise the NRU table to prioritize keeping dirty blocks in the cache
  • still replace the block with the lowest Keep Priority value
31
Q

the fastest/slowest tiers in the memory hierarchy are ___ and ___ respectively

A
  • register file, cloud
32
Q

working set

A
  • the portion of the address space currently in use by a program
  • virtual memory keeps only the working set in DRAM
33
Q

page

A
  • a virtual memory block
34
Q

page fault

A
  • a virtual memory miss
35
Q

why virtual memory? (2 things)

A
  • Provide a convenient memory address environment
  • Allow efficient and safe sharing of memory among multiple programs