Week 14 Flashcards

6.1-6.5 (45 cards)

1
Q

multiprocessor

A
  • A computer system with at least two processors
  • in contrast to a uniprocessor
2
Q

task-level parallelism

A

utilizing multiple processors by running independent programs simultaneously

3
Q

cluster

A

A set of independent computers connected over a LAN that function as a single large multiprocessor

4
Q

multicore multiprocessor

A
  • A microprocessor containing multiple processors (“cores”) in a single integrated circuit
  • basically all microprocessors today
5
Q

shared memory multiprocessor (SMP)

A

a parallel processor with a single physical address space

6
Q

strong scaling

A
  • speed-up achieved on a multiprocessor without increasing the size of the problem
  • i.e., the same problem simply runs faster as more processors are added
7
Q

weak scaling

A
  • speed-up achieved on a multiprocessor while increasing the size of the problem proportionally to the increase in the number of processors
  • may not be worthwhile if a high cost yields only a small speed-up
8
Q

SISD (single instruction stream, single data stream)

A
  • a uniprocessor
9
Q

MIMD (multiple instruction streams, multiple data streams)

A
  • a multiprocessor
  • what’s used in pretty much everything today
10
Q

SIMD (single instruction, multiple data streams)

A

The same instruction is applied to many data streams, as in a vector processor

11
Q

data-level parallelism

A
  • parallelism achieved by performing the same operation on independent data
  • exploited by the SPMD model, in which a single program runs across all processors
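The data-level parallelism card can be made concrete with a short sketch: the same operation (here, squaring) is applied to many independent data elements, so the work can be spread across workers. This is an illustrative example using Python's standard library; the function name `square_all` is made up for the sketch.

```python
# Sketch of data-level parallelism: the same operation is applied
# to every element of an independent data set.
from concurrent.futures import ThreadPoolExecutor

def square_all(data):
    # Each element is independent, so the map can run in parallel
    # across worker threads in any order.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda x: x * x, data))

print(square_all([1, 2, 3, 4]))  # [1, 4, 9, 16]
```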
12
Q

vector lane

A
  • one of the parallel pipelines in a vector unit: each lane holds a portion of the vector register file and one or more vector functional units
  • analogous to lanes on a highway; multiple lanes carry out vector operations simultaneously
13
Q

hardware multithreading

A

increasing utilization of a processor by switching to another thread when one thread is stalled

14
Q

thread

A
  • a thread includes the program counter, the register state, and the stack
  • a lightweight process; whereas threads commonly share a single address space, processes don't
15
Q

process

A
  • includes one or more threads, the address space, and the operating system state
  • a process switch usually invokes the operating system, but not a thread switch
16
Q

fine-grained multithreading

A

a version of hardware multithreading that switches between threads after every instruction

17
Q

coarse-grained multithreading

A

a version of hardware multithreading that switches between threads only after significant events, such as a last-level cache miss

18
Q

SMT (simultaneous multithreading)

A
  • a version of multithreading that lowers its cost by reusing the resources of a multiple-issue, dynamically scheduled microarchitecture
  • the most sophisticated and optimized version
19
Q

synchronization

A

process of coordinating the behavior of two or more processes, which may be running on different processors

20
Q

SPMD

A
  • single program, multiple data streams
  • conventional MIMD programming model, where a single program runs across all processors
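The SPMD card can be sketched in code: every "processor" runs the same program text and selects its own share of the data using its rank. Here the ranks are simulated sequentially in plain Python rather than launched in parallel; `spmd_sum` and the striped partitioning are assumptions made for the illustration.

```python
def spmd_sum(data, nprocs):
    # Single program, multiple data: the same body runs for every
    # rank; only the rank value (and thus the data slice) differs.
    partials = []
    for rank in range(nprocs):       # stand-in for a parallel launch
        chunk = data[rank::nprocs]   # this rank's stripe of the data
        partials.append(sum(chunk))
    return sum(partials)

print(spmd_sum(list(range(10)), 4))  # 45
```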
21
Q

SIMD

A
  • single instruction stream, multiple data streams
  • the same instruction is applied to many data streams, as in a vector processor
  • has many adders, so operations can be performed in parallel
22
Q

network

A

how multicore multiprocessors w/shared memory are connected

23
Q

processing element (PE)

A
  • a fundamental unit performing basic operations (like add, logic, multiply) on data
  • used in SIMDs
24
Q

SIMD structure

A
  • has many adders, so operations can be performed in parallel
  • each processor has a processing element and local data memory
  • all connect via a control unit that enables certain instructions
25
Q

parallel computer

A

a computer in which parallelism dominates the entire architecture
26
Q

data dependencies

A

a calculation that depends on a prior calculation must execute in program order
27
Q

critical path

A
  • no program can run more quickly than its longest chain of dependent calculations
  • most algorithms are not one long chain of dependent calculations
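The critical-path concept can be computed directly: given a DAG in which each calculation lists the calculations it depends on, the longest dependent chain bounds how fast the whole program can run regardless of processor count. A small sketch; the dependency dictionary is an invented example.

```python
def critical_path_length(deps):
    # deps maps each calculation to the calculations it depends on.
    memo = {}
    def depth(node):
        # Length of the longest dependent chain ending at this node.
        if node not in memo:
            memo[node] = 1 + max((depth(p) for p in deps[node]),
                                 default=0)
        return memo[node]
    return max(depth(n) for n in deps)

# a -> b -> c is a chain of 3 dependent steps; d is independent.
deps = {"a": [], "b": ["a"], "c": ["b"], "d": []}
print(critical_path_length(deps))  # 3
```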
28
Q

fine-grain parallelism

A
  • communication between parallel tasks is frequent, and the amount of communication is relatively high
  • issues multiple instructions per clock cycle
  • communication must be low latency
29
Q

coarse-grain parallelism

A
  • communication between parallel tasks is infrequent
  • communication doesn't have to be as low latency as in fine-grain parallelism
30
Q

embarrassingly parallel

A

communication between parallel tasks is rare or never occurs
31
Q

asymmetric

A

a collection of different (unique) processors that can be optimized for specific tasks
32
Q

symmetric

A
  • a set of N identical processors
  • multicore processors are symmetric
  • only makes sense to use if there is a fast way for the processors to communicate
33
Q

Flynn taxonomy

A

a way to classify computers by number of instruction streams and number of data streams (think SISD, SIMD, MISD, MIMD)
34
Q

control unit (CU)

A
  • contains the IF and ID processor pipeline stages and the instruction memory
  • the CU fetches, decodes, and broadcasts decoded instructions to the PEs
35
Q

PE enable stack

A
  • controls execution of non-control-type instructions
  • if the top of the stack is TRUE, the PE executes; if the top is FALSE, the PE ignores the instruction broadcast by the CU
  • great for things like if/else
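The enable-stack mechanism can be simulated in ordinary code. This is a hedged sketch, not a real SIMD machine: each "PE" holds one data value plus its own enable stack, the "CU" broadcasts every instruction, and a PE obeys only when its stack top is true. The function `simd_if_else` and the negate/add-one branches are invented for the example.

```python
def simd_if_else(values):
    out = list(values)
    enable = [[True] for _ in out]      # one enable stack per PE

    def broadcast(op):
        # The CU broadcasts op; a PE executes it only if the top of
        # its enable stack is True.
        for i in range(len(out)):
            if enable[i][-1]:
                out[i] = op(out[i])

    # if (x < 0): push the condition ANDed with the current top
    for i in range(len(out)):
        enable[i].append(enable[i][-1] and out[i] < 0)
    broadcast(lambda x: -x)             # then-branch: negate
    # else: replace the top with its complement (within the outer mask)
    for i in range(len(out)):
        top = enable[i].pop()
        enable[i].append(enable[i][-1] and not top)
    broadcast(lambda x: x + 1)          # else-branch: add one
    for i in range(len(out)):
        enable[i].pop()                 # end of if/else
    return out

print(simd_if_else([-2, 3, 0]))  # [2, 4, 1]
```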
36
Q

push

A

used in conventional stack operations to add something to the stack
37
Q

push*

A
  • e.g., push* A' ANDs A with the present top of the PE enable stack
  • looks at the top of the PE enable stack
  • for SIMD only
38
Q

push**

A
  • following a push* command, push** A reaches one place down from the top of the stack to read PT
  • for SIMD only
39
Q

barriers

A
  • synchronize processors in MIMD
  • a barrier means all processes must stop and may not proceed until every process has reached the barrier
  • needed because parallelism is visible to the programmer in MIMD (each processor is its own agent)
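The barrier idea maps directly onto a threading primitive. A minimal sketch using Python's standard `threading.Barrier`: no thread may pass the barrier until all of them have reached it, so every "before" event is logged before any "after" event.

```python
import threading

N = 4
barrier = threading.Barrier(N)
log = []
log_lock = threading.Lock()

def worker(rank):
    with log_lock:
        log.append(("before", rank))
    barrier.wait()        # block until all N threads have arrived
    with log_lock:
        log.append(("after", rank))

threads = [threading.Thread(target=worker, args=(r,)) for r in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all "before" entries precede all "after" entries
```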
40
Q

comparison of SIMD and MIMD (2 vs. 3 things)

A
SIMD:
  • only one copy of the program in memory
  • no synchronization overhead
MIMD:
  • more difficult to write
  • copy of the program in each processor
  • explicit synchronization required with barriers
41
Q

vector execution

A
  • uses single instructions to perform operations on multiple data elements (vectors) simultaneously
  • an interpretation of SIMD
  • vector code takes up less space because fewer, larger instructions are fetched and executed, reducing demand on the memory hierarchy and code space
42
Q

multiprocessor overhead

A
  • more processors is not always a clear win
  • overhead sources may increase non-linearly with the number of processors
43
Q

MIMD deadlock

A
  • one processor races ahead to the same barrier in the code before the last processor has left the barrier
  • barrier mismanagement causes the MIMD computer to deadlock
44
Q

hardware locks

A
  • prevent simultaneous access to data
  • a separate lock is assigned to each item, and each lock has an id
  • hardware allows one processor (core) to hold a given lock at a given time and blocks the others
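The lock idea can be illustrated in software with Python's standard `threading.Lock`: only one thread may hold the lock at a time, so the shared counter updates never interleave. This is a software analogy for the hardware mechanism, not the hardware itself.

```python
import threading

counter = 0
lock = threading.Lock()   # one lock guarding one shared item

def add(n):
    global counter
    for _ in range(n):
        with lock:        # only one thread holds the lock at a time
            counter += 1  # the read-modify-write cannot interleave

threads = [threading.Thread(target=add, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```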
43
grid computing
- use idle time of many computers on the internet to achieve high throughput in solving small portions of a large problem - can also use for redundancy to screen out errors