Chapter 6 Flashcards

(45 cards)

1
Q

One form of multiprocessing, a situation in which two or more processors operate in unison

A

parallel processing

2
Q

There are two primary benefits to parallel processing systems: _________________ and ________________.

A

increased reliability and faster processing

3
Q

Increased flexibility brings increased complexity, however, and two major challenges remain: _________________________

A
  1. how to connect the processors into configurations
  2. how to orchestrate their interaction
4
Q

Three typical configurations are:

A

master/slave configuration
loosely coupled configuration
symmetric configuration

5
Q

a single-processor system with additional slave processors, each of which is managed by the primary master processor

A

master/slave configuration

6
Q

advantage and disadvantage of master/slave configuration

A

advantage: simplicity

disadvantage:

  • Its reliability is no higher than for a single-processor system because if the master processor fails, the entire system fails.
  • It can lead to poor use of resources because if a slave processor should become free while the master processor is busy, the slave must wait until the master becomes free and can assign more work to it.
  • It increases the number of interrupts because all slave processors must interrupt the master processor every time they need operating system intervention, such as for I/O requests. This creates long queues at the master processor level when there are many processors and many interrupts.
7
Q

each processor controls its own resources—its own files, access to memory, and its own I/O devices—and that means that each processor maintains its own commands and I/O management tables.

A

loosely coupled configuration

8
Q

advantage and disadvantage of loosely coupled configuration

A

advantage: not prone to catastrophic system failures because even when a single processor fails, the others can continue to work independently

disadvantage: difficult to detect when a processor has failed

9
Q

The symmetric configuration has four advantages over loosely coupled configuration:

A
  • It’s more reliable.
  • It uses resources effectively.
  • It can balance loads well.
  • It can degrade gracefully in the event of a failure.
10
Q

disadvantage of symmetric configuration

A

the most difficult configuration to implement because the processes must be well synchronized to avoid the problems of races and deadlocks

11
Q

other term for symmetric configuration

A

tightly coupled

12
Q

In a __________________, processor scheduling is decentralized.

A

symmetric configuration

13
Q

algorithms to resolve conflicts between processors

A

process synchronization

14
Q

What steps must each processor perform when Processor 1 and Processor 2 finish their current jobs at the same time?

A
  1. Consult the list of jobs to see which one should be run next.
  2. Retrieve the job for execution.
  3. Increment the READY list to the next job.
  4. Execute it.
15
Q

it is a critical section and its execution must be handled as a unit.

A

critical region

16
Q

explain the lock-and-key arrangement

A

Before a process can work on a critical region, it must get the key. And once it has the key, all other processes are locked out until it finishes, unlocks the entry to the critical region, and returns the key so that another process can get the key and begin work.

17
Q

lock-and-key arrangement consists of two actions

A

(1) the process must first see if the key is available

(2) if it is available, the process must pick it up and put it in the lock to make it unavailable to all other processes.

18
Q

Several locking mechanisms have been developed, including:

A

test-and-set
WAIT and SIGNAL
semaphores

19
Q

_______ is a single, indivisible machine instruction known simply as TS and was introduced by IBM for its multiprocessing System 360/370 computers.

A

test-and-set

20
Q

two drawbacks of test and set

A
  1. when many processes are waiting to enter a critical region, starvation could occur because the processes gain access in an arbitrary fashion
  2. the waiting processes remain in unproductive, resource-consuming wait loops, requiring context switching. (also known as busy waiting)
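The busy waiting described above can be sketched in Python. This is an illustrative simulation, not the actual TS machine instruction: a `threading.Lock` stands in for the instruction's indivisibility, and the class and variable names are the author's own.

```python
import threading

class SpinLock:
    """Illustrative test-and-set lock: the real TS is one indivisible
    machine instruction; a guard Lock simulates that atomicity here."""
    def __init__(self):
        self._busy = False
        self._guard = threading.Lock()

    def _test_and_set(self):
        # Return the old value and set the flag to True in a single step.
        with self._guard:
            old = self._busy
            self._busy = True
            return old

    def acquire(self):
        # Busy waiting: spin until test-and-set reports the lock was free.
        while self._test_and_set():
            pass

    def release(self):
        self._busy = False

lock = SpinLock()
counter = 0

def work():
    global counter
    for _ in range(5000):
        lock.acquire()
        counter += 1      # critical region: one process at a time
        lock.release()

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 10000
```

Note how `acquire` loops doing nothing useful while it waits; that loop is exactly the unproductive, resource-consuming busy waiting the card describes.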
21
Q

is a modification of test-and-set that’s designed to remove busy waiting.

A

WAIT and SIGNAL

22
Q

name the two operations added to the process scheduler's set of operations by this modification

A

WAIT and SIGNAL

23
Q

A _______ is a non-negative integer variable that’s used as a binary signal, a flag.

A

semaphore

24
Q

In an operating system, a semaphore performs a similar function: It signals if and when a resource is free and can be used by a process. Dijkstra (1965) introduced two operations to overcome the process synchronization problem we’ve discussed. Dijkstra called them P and V, and that’s how they’re known today.

what does P and V stand for

A

The P stands for the Dutch word proberen (to test)

The V stands for verhogen (to increment).

The P and V operations do just that: They test and increment.
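A semaphore with Dijkstra's P and V can be sketched as follows. Python's own `threading.Semaphore` provides this behavior already; this minimal version spells it out with a `Condition` so waiting processes block rather than spin, as WAIT and SIGNAL intend. The class name is illustrative.

```python
import threading

class DijkstraSemaphore:
    """A non-negative integer used as a signal, with Dijkstra's P and V."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def P(self):
        # proberen (to test): block while the value is zero, then decrement.
        with self._cond:
            while self._value == 0:
                self._cond.wait()
            self._value -= 1

    def V(self):
        # verhogen (to increment): raise the value and wake one waiter.
        with self._cond:
            self._value += 1
            self._cond.notify()

sem = DijkstraSemaphore(2)
sem.P()
sem.P()   # value is now 0; a third P() would block until someone calls V()
sem.V()   # value is back to 1
```

With an initial value of 1, the same class acts as a binary semaphore guarding a critical region.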

25
Q

The name traditionally given in the literature to the semaphore that is necessary to avoid having two operations attempt to execute at the same time.

A

mutex, which stands for MUTual EXclusion
26
Q

The classic problem of producers and consumers is:

A

one in which one process produces some data that another process consumes later
27
Q

The classic problem of readers and writers is:

A

one that arises when two types of processes need to access a shared resource such as a file or database
28
Q

The state of the system can be summarized by four counters initialized to 0:

A

  • Number of readers who have requested a resource and haven’t yet released it (R1 = 0)
  • Number of readers who are using a resource and haven’t yet released it (R2 = 0)
  • Number of writers who have requested a resource and haven’t yet released it (W1 = 0)
  • Number of writers who are using a resource and haven’t yet released it (W2 = 0)
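One classic way to enforce these rules is the reader-preference solution sketched below, using a semaphore for the resource and a protected reader counter (corresponding to the R2 counter above). This is one of several standard solutions, not the only one, and the names are illustrative.

```python
import threading

resource = threading.Semaphore(1)  # held by a writer, or by the group of readers
count_lock = threading.Lock()      # protects reader_count
reader_count = 0                   # readers currently using the resource (R2)

def start_read():
    global reader_count
    with count_lock:
        reader_count += 1
        if reader_count == 1:      # first reader locks writers out
            resource.acquire()

def end_read():
    global reader_count
    with count_lock:
        reader_count -= 1
        if reader_count == 0:      # last reader lets writers back in
            resource.release()

def write(action):
    resource.acquire()             # a writer needs exclusive access
    try:
        action()
    finally:
        resource.release()

start_read()
start_read()                       # many readers may share the resource at once
end_read()
end_read()
written = []
write(lambda: written.append("update"))
```

Because only the first and last readers touch `resource`, any number of readers can overlap, while a writer always has the resource to itself.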
29
Q

Producers and Consumers Algorithm

A

empty := n
full := 0
mutex := 1
COBEGIN
  repeat until no more data PRODUCER
  repeat until buffer is empty CONSUMER
COEND
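The same algorithm can be sketched with Python's `threading.Semaphore`, keeping the three semaphores and their initial values from the card; the buffer size `n` and the function names are illustrative.

```python
import threading
from collections import deque

n = 3
buffer = deque()
empty = threading.Semaphore(n)   # empty := n  (free slots)
full = threading.Semaphore(0)    # full := 0   (filled slots)
mutex = threading.Semaphore(1)   # mutex := 1  (guards the buffer)

def producer(items):
    for item in items:           # repeat until no more data
        empty.acquire()          # P(empty): wait for a free slot
        mutex.acquire()          # P(mutex): enter the critical region
        buffer.append(item)
        mutex.release()          # V(mutex)
        full.release()           # V(full): announce a filled slot

def consumer(count, out):
    for _ in range(count):       # repeat until buffer is empty
        full.acquire()           # P(full): wait for data
        mutex.acquire()
        out.append(buffer.popleft())
        mutex.release()
        empty.release()          # V(empty): announce a free slot

items, out = list(range(10)), []
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items), out))
p.start(); c.start()
p.join(); c.join()
print(out)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

`empty` and `full` handle synchronization (the producer sleeps when the buffer is full, the consumer when it is empty), while `mutex` handles mutual exclusion on the buffer itself.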
30
Q

having a programmer explicitly state which instructions can be executed in parallel

A

explicit parallelism
31
Q

The automatic detection by the compiler of instructions that can be performed in parallel

A

implicit parallelism
32
Q

heavyweight processes have the following characteristics:

A

  • They pass through several states from their initial entry into the computer system to their completion: ready, running, waiting, delayed, and blocked.
  • They require space in main memory where they reside during their execution.
  • From time to time they require other resources such as data.
33
Q

Contents of Thread Control Block

A

Thread identification
Thread state
CPU information:
  Program counter
  Register contents
Thread priority
Pointer to process that created this thread
Pointers to all other threads created by this thread
34
Q

In what 4 cases can concurrent programming be applied?

A

Case 1: Array Operations
Case 2: Matrix Multiplication
Case 3: Searching Databases
Case 4: Sorting or Merging Files
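Case 1 (array operations) lends itself to a short sketch: because each element is independent, chunks of the array can be handed to separate workers. The names and the two-worker split are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def scale_chunk(chunk):
    # Each element is independent, so chunks can be processed in parallel.
    return [x * 2 for x in chunk]

data = list(range(8))
mid = len(data) // 2
with ThreadPoolExecutor(max_workers=2) as pool:
    halves = list(pool.map(scale_chunk, [data[:mid], data[mid:]]))
result = halves[0] + halves[1]
print(result)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

The other cases follow the same pattern: matrix multiplication splits by row or column, searching splits the records, and sorting splits then merges.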
35
Q

What are the three levels of multiprocessing?

A

- Job level
- Process level
- Thread level
36
Q

What kind of arrangement was process synchronization sometimes implemented as?

A

lock-and-key arrangement
37
Q

These two are implemented using semaphores, and both require mutual exclusion and synchronization.

A

producers and consumers
readers and writers
38
Q

In the last chapter, we discussed deadlocks. Describe in your own words why mutual exclusion is necessary for multiprogramming systems.

A

Mutual exclusion is necessary in multiprogramming systems to make sure that only one process uses a shared resource (like a file or printer) at a time. Without it, two or more processes might try to use or change the same resource at once, causing errors, data corruption, or even system crashes. It keeps access to shared resources orderly and predictable.
39
Q

There are occasions when several processes work directly together to complete a common task. Two famous examples are the problems of ____________________________. Each case requires both mutual exclusion and synchronization, and each is implemented by using semaphores.

A

producers and consumers and of readers and writers
40
Q

Order of operations steps:

A

  1. Perform all calculations in parentheses
  2. Calculate all exponents
  3. Perform all multiplication and division
  4. Perform the addition and subtraction
For each step, go from left to right.
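A quick worked example of those steps; Python follows the same precedence, so the expression evaluates exactly as the hand trace does.

```python
# Evaluate 6 + 4 * (5 - 2) ** 2 / 3 step by step:
# Step 1, parentheses:   (5 - 2) = 3          -> 6 + 4 * 3 ** 2 / 3
# Step 2, exponents:     3 ** 2 = 9           -> 6 + 4 * 9 / 3
# Step 3, * and / (left to right): 4 * 9 = 36, then 36 / 3 = 12.0
# Step 4, + and -:       6 + 12.0 = 18.0
value = 6 + 4 * (5 - 2) ** 2 / 3
print(value)  # 18.0
```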
41
Q

Advantage of test-and-set

A

Simple procedure to implement; works well for a small number of processes
42
Q

Three important features of optical discs (from Chapter 7)

A

Sustained data transfer rate
Average access time
Cache size
43
Q

Limitations of flash memory

A

  1. The bits can be erased only by applying the flash to a large block of memory
  2. With each flash erasure, the block becomes less stable
44
Q

Describe the programmer’s role when implementing explicit parallelism.

A

They are required to explicitly state which instructions are to be executed in parallel. The programmer is responsible for manually dividing tasks, managing threads or processes, synchronizing data, and ensuring proper communication between them.
45
Q

Describe the programmer’s role when implementing implicit parallelism.

A

The compiler or system automatically identifies and executes parallel tasks, so the programmer writes normal sequential code without handling parallelization details.