Chapter 5: CPU Scheduling (3) Flashcards

(238 cards)

1
Q

What fundamental concept is the basis of multiprogrammed operating systems?

A

CPU scheduling.

2
Q

In modern operating systems, what is actually being scheduled by the OS instead of processes?

A

Kernel-level threads.

3
Q

What two scheduling terms are often used interchangeably, despite the technical difference?

A

Process scheduling and thread scheduling.

4
Q

In the context of CPU scheduling, the phrase “run on a CPU” implies that a process is running on what specific component?

A

A CPU’s core.

5
Q

In a system with a single CPU core, how many processes can run at any given time?

A

Only one process.

6
Q

What is the primary objective of multiprogramming with respect to the CPU?

A

To have some process running at all times, in order to maximize CPU utilization.

7
Q

In a multiprogramming system, what does the operating system do when the currently running process has to wait?

A

It takes the CPU away from that process and gives the CPU to another process.

8
Q

Process execution consists of a cycle of alternating between which two types of bursts?

A

CPU bursts and I/O bursts.

9
Q

A process’s execution cycle begins with what type of burst?

A

A CPU burst.

10
Q

How does the final CPU burst of a process typically conclude?

A

With a system request to terminate execution.

11
Q

The frequency distribution of CPU bursts is characterized by a large number of _____ bursts and a small number of _____ bursts.

A

short, long

12
Q

What component of the operating system selects a process from the ready queue to be executed when the CPU becomes idle?

A

The CPU scheduler.

13
Q

Is the ready queue necessarily implemented as a First-In, First-Out (FIFO) queue?

A

No, it can be implemented as other structures like a priority queue, a tree, or an unordered list.

14
Q

What data structure generally represents a process in the records of a ready queue?

A

The Process Control Block (PCB).

15
Q

Term: Nonpreemptive Scheduler

A

A scheduler where once the CPU has been allocated to a process, the process keeps it until it releases it by terminating or switching to the waiting state.

16
Q

Term: Preemptive Scheduler

A

A scheduler where a process that has been allocated the CPU can have it taken away before it terminates or voluntarily switches to the waiting state.

17
Q

What is the name of the module that gives control of the CPU’s core to the process selected by the CPU scheduler?

A

The dispatcher.

18
Q

What are the three steps involved in the dispatcher’s function?

A
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart it
19
Q

Term: Dispatch Latency

A

The time it takes for the dispatcher to stop one process and start another one running.

20
Q

Term: CPU Utilization

A

A scheduling criterion that measures the percentage of time the CPU is utilized.

21
Q

Term: Throughput

A

A scheduling criterion that measures the number of processes that complete their execution per time unit.

22
Q

Term: Turnaround Time

A

A scheduling criterion measuring the interval from the time a process enters the system to the time of its completion.

23
Q

How is turnaround time calculated?

A

Turnaround time = completion time − arrival time.

24
Q

Term: Waiting Time

A

A scheduling criterion that measures the sum of the time a process spent waiting in the ready queue.

25
Term: Response Time
A scheduling criterion measuring the interval from when a process enters the system to when its first CPU run starts.
26
How is response time calculated?
Response time = time of first CPU run − arrival time.
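The turnaround, waiting, and response formulas on these cards can be sketched together in a few lines (the process timings below are hypothetical, and the sketch assumes each process has a single CPU burst):

```python
# Timing-metric formulas from the cards; process data is hypothetical.
# Assumes a single CPU burst per process, so waiting = turnaround - burst.
def metrics(arrival, first_run, completion, burst):
    turnaround = completion - arrival   # turnaround = completion - arrival
    waiting = turnaround - burst        # total time spent in the ready queue
    response = first_run - arrival      # response = first CPU run - arrival
    return turnaround, waiting, response

# Example: arrives at t=0, first scheduled at t=4, finishes at t=10, burst=6
print(metrics(0, 4, 10, 6))  # (10, 4, 4)
```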
27
What is the desired optimization goal for CPU utilization and throughput?
To maximize them.
28
What is the desired optimization goal for turnaround time, waiting time, and response time?
To minimize them.
29
In most cases, scheduling algorithms aim to optimize the _____ measure of the evaluation criteria.
average
30
To guarantee good service for all users in an interactive system, what specific metric might a scheduler aim to minimize?
The maximum response time.
31
What is considered the simplest CPU-scheduling algorithm?
First-Come, First-Served (FCFS).
32
What is the fundamental principle of the First-Come, First-Served (FCFS) scheduling algorithm?
The process that requests the CPU first is allocated the CPU first.
33
What data structure is typically used to manage the FCFS policy?
A FIFO (First-In, First-Out) queue.
34
When a new process enters an FCFS ready queue, its PCB is linked to the _____ of the queue.
tail
35
When the CPU is free in an FCFS system, it is allocated to the process at the _____ of the queue.
head
36
What is a significant negative characteristic of the FCFS scheduling policy regarding performance?
The average waiting time is often quite long.
37
Term: Convoy Effect
A phenomenon where a number of processes wait for one long, CPU-bound process, causing suboptimal device and CPU utilization.
38
In the convoy effect scenario, what happens to I/O devices while I/O-bound processes are waiting in the ready queue?
The I/O devices are idle.
39
In the convoy effect scenario, what happens to the CPU after the long CPU-bound process finally moves to an I/O device?
The CPU sits idle because the I/O-bound processes are now busy with I/O.
40
Is the FCFS scheduling algorithm preemptive or nonpreemptive?
Nonpreemptive.
41
Why is the FCFS algorithm considered particularly troublesome for interactive systems?
Because one process can keep the CPU for an extended period, preventing other processes from getting a share at regular intervals.
42
What scheduling algorithm assigns the CPU to the process that has the smallest next CPU burst?
Shortest-Job-First (SJF).
43
How does the SJF algorithm handle a tie where two processes have the same length for their next CPU burst?
It uses FCFS scheduling to break the tie.
44
What is a more technically accurate term for the Shortest-Job-First (SJF) scheduling method?
Shortest-next-CPU-burst algorithm.
45
In what way is the SJF scheduling algorithm considered 'provably optimal'?
It gives the minimum average waiting time for a given set of processes.
46
What is the primary practical difficulty of implementing the SJF algorithm at the level of CPU scheduling?
There is no way to know the length of the next CPU burst in advance.
47
Since the exact next CPU burst length is unknown, SJF is typically implemented by _____ its value.
predicting (or approximating)
48
What mathematical technique is generally used to predict the length of the next CPU burst?
An exponential average of the measured lengths of previous CPU bursts.
49
In the exponential average formula for predicting CPU bursts, what does $t_n$ represent?
The true (actual) length of the $n$-th CPU burst.
50
In the exponential average formula for predicting CPU bursts, what does $\tau_{n+1}$ represent?
The predicted value for the next CPU burst (burst $n+1$).
51
What is the formula for calculating the predicted next CPU burst, $\tau_{n+1}$, using exponential averaging?
$\tau_{n+1} = \alpha t_n + (1 - \alpha)\tau_n$
52
In the exponential average formula, the parameter $\alpha$ controls the relative weight of _____ and _____ history.
recent, past
53
In exponential averaging, if $\alpha = 0$, what is the effect on the next prediction $\tau_{n+1}$?
The prediction becomes $\tau_{n+1} = \tau_n$, meaning recent history has no effect.
54
In exponential averaging, if $\alpha = 1$, what is the effect on the next prediction $\tau_{n+1}$?
The prediction becomes $\tau_{n+1} = t_n$, meaning only the most recent CPU burst matters.
55
What is a common value chosen for $\alpha$ to give recent and past history equal weight in the prediction?
$\alpha = 1/2$
56
In the expanded formula for exponential average, why do older CPU bursts have progressively less weight in the prediction?
Because the weighting factor for past bursts, $(1 - \alpha)^j$, decreases as $j$ increases.
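The exponential-average formula above translates directly into code. This is a minimal sketch; the initial prediction $\tau_0 = 10$ and the burst history are the illustrative values commonly used with this formula, not fixed by the cards:

```python
# Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
def predict_next(tau, t, alpha=0.5):
    """tau: previous prediction; t: measured length of the last CPU burst."""
    return alpha * t + (1 - alpha) * tau

# Hypothetical history; tau_0 = 10 is an assumed initial constant.
tau = 10
for actual in [6, 4, 6, 4]:
    tau = predict_next(tau, actual)
    print(tau)  # predictions: 8.0, 6.0, 6.0, 5.0
```

With $\alpha = 1/2$, each new burst halves the weight of everything that came before it, which is why older bursts decay by the factor $(1 - \alpha)^j$.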
57
The SJF algorithm can be implemented in two forms: _____ and _____.
preemptive, nonpreemptive
58
When does the choice between preemptive and nonpreemptive SJF become critical?
When a new process arrives at the ready queue while another process is currently executing.
59
A preemptive SJF algorithm will preempt the current process if the new process has a CPU burst that is shorter than what?
The remaining portion of the next CPU burst of the currently executing process.
60
What is another name for preemptive SJF scheduling?
Shortest-remaining-time-first (SRTF) scheduling.
61
A _____ SJF algorithm will allow the currently running process to finish its CPU burst, even if a new process with a shorter burst arrives.
nonpreemptive
62
In a Gantt chart for FCFS with processes P1 (24ms), P2 (3ms), P3 (3ms) arriving in that order, what is the waiting time for P2?
24 msec.
63
In a Gantt chart for FCFS with processes P1 (24ms), P2 (3ms), P3 (3ms) arriving in that order, what is the waiting time for P3?
27 msec (24 + 3).
64
If processes P2 (3ms), P3 (3ms), P1 (24ms) arrive in that order for FCFS, what is the waiting time for P1?
6 msec (3 + 3).
65
Comparing two arrival orders in FCFS demonstrates that the _____ can be dramatically affected by the order.
average waiting time
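The two arrival orders from these cards can be checked with a short sketch: under FCFS, each process waits exactly the sum of the bursts ahead of it.

```python
# FCFS waiting times: each process waits for all bursts queued ahead of it.
def fcfs_waits(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)  # wait = total burst time already scheduled
        elapsed += b
    return waits

# Order P1, P2, P3 (24, 3, 3 ms): waits 0, 24, 27 -> average 17 ms
print(fcfs_waits([24, 3, 3]))  # [0, 24, 27]
# Order P2, P3, P1 (3, 3, 24 ms): waits 0, 3, 6 -> average 3 ms
print(fcfs_waits([3, 3, 24]))  # [0, 3, 6]
```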
66
In preemptive SJF, if P1 is running (8ms total, started at t=0) and P2 arrives at t=1 with a 4ms burst, what happens?
P1 is preempted because its remaining time (7ms) is greater than P2's burst time (4ms).
67
For the following processes with preemptive SJF, what is the average waiting time? P1(AT=0, BT=8), P2(AT=1, BT=4), P3(AT=2, BT=9), P4(AT=3, BT=5)
6.5 msec.
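The 6.5 ms figure can be reproduced with a minimal unit-time simulation of preemptive SJF (shortest-remaining-time-first). This is a sketch for checking the example, not an OS implementation:

```python
# Unit-time SRTF (preemptive SJF) simulator.
def srtf_avg_wait(procs):
    """procs: list of (arrival, burst). Returns the average waiting time."""
    remaining = [b for _, b in procs]
    completion = [0] * len(procs)
    t, done = 0, 0
    while done < len(procs):
        # Processes that have arrived and still have work left.
        ready = [i for i, (a, _) in enumerate(procs)
                 if a <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            completion[i] = t
            done += 1
    waits = [completion[i] - a - b for i, (a, b) in enumerate(procs)]
    return sum(waits) / len(procs)

print(srtf_avg_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))  # 6.5
```

The resulting schedule is P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26), giving waiting times 9, 0, 15, and 2 ms.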
68
The process of stopping one process and starting another is known as a context switch, which contributes to what form of latency?
Dispatch latency.
69
Which scheduling algorithm is nonpreemptive and can lead to the convoy effect?
First-Come, First-Served (FCFS).
70
Which scheduling algorithm provides the optimal (minimum) average waiting time but is difficult to implement directly?
Shortest-Job-First (SJF).
71
In an operating system, processes are kept in _____ to be scheduled for the CPU.
memory
72
What is the term for the graphical representation of a particular schedule, often shown as a bar chart?
A Gantt chart.
73
In the exponential average formula $\tau_{n+1} = \alpha t_n + (1 - \alpha)\tau_n$, what is $\tau_n$?
The predicted value for the $n$-th CPU burst.
74
To improve responsiveness in an interactive system, a _____ scheduler is generally preferred over a nonpreemptive one.
preemptive
75
In the example of SJF, P4 arrives at time 5.0 with a burst time of 3. Why is it scheduled before P1, P2, and P3?
Because its burst time (3) is the shortest among all processes in the ready queue at that moment.
76
Moving a _____ process before a _____ one decreases the average waiting time in SJF.
short, long
77
What initial value, $\tau_0$, can be used when predicting the first CPU burst?
A predefined constant or an overall system average.
78
What is the waiting time for a process that starts executing immediately upon arrival?
0
79
The dispatcher jumps to the proper location in a user program to _____ that program.
restart
80
How does multiprogramming on a multicore system extend the concept of keeping the CPU busy?
It aims to keep all processing cores busy, not just one.
81
What is the last step a dispatcher performs before a user process begins execution?
Jumping to the proper location in the user program to restart that program.
82
The FCFS algorithm is managed with a FIFO queue. When the CPU is free, the process at the _____ is allocated the CPU and then removed from the queue.
head of the queue
83
Which scheduling algorithm's performance is highly dependent on the initial arrival order of processes?
First-Come, First-Served (FCFS).
84
In a preemptive SJF schedule, a newly arrived process will run if its burst time is less than the _____ time of the currently executing process.
remaining
85
The formula $\tau_{n+1} = \alpha t_n + (1 - \alpha)\tau_n$ is known as what type of prediction method?
Exponential average.
86
If the parameter $\alpha$ is set to 1/2 in exponential averaging, what does this imply about the weighting of history?
Recent history and past history are equally weighted.
87
How is the round-robin (RR) scheduling algorithm similar to FCFS scheduling?
It is similar to FCFS scheduling, but adds preemption to enable the system to switch between processes.
88
In Round Robin scheduling, what is the name for the small unit of time that is defined?
It is called a time quantum or time slice.
89
What is the general range for a time quantum in Round Robin scheduling?
A time quantum is generally in the range of 10 to 100 milliseconds.
90
How is the ready queue treated in the Round Robin (RR) scheduling algorithm?
The ready queue is treated as a circular queue.
91
What does the CPU scheduler do with the ready queue in RR scheduling?
The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to one time quantum.
92
To implement RR scheduling, the ready queue is treated as a _____ queue of processes.
FIFO (First-In, First-Out)
93
Where are new processes added in the ready queue for RR scheduling?
New processes are added to the tail of the ready queue.
94
What actions does the CPU scheduler take after picking the first process from the ready queue in RR scheduling?
It sets a timer to interrupt after one time quantum and dispatches the process.
95
In RR scheduling, what happens if a process has a CPU burst of less than one time quantum?
The process itself will release the CPU voluntarily.
96
After a process voluntarily releases the CPU in RR scheduling, what does the scheduler do next?
The scheduler will then proceed to the next process in the ready queue.
97
In RR scheduling, what occurs if a process's CPU burst is longer than one time quantum?
The timer will go off and will cause an interrupt to the operating system.
98
Following a timer interrupt in RR scheduling, what two actions are executed?
A context switch will be executed, and the process will be put at the tail of the ready queue.
99
After a context switch due to a timer interrupt, what does the CPU scheduler do?
The CPU scheduler will then select the next process in the ready queue.
100
The mechanism of handling CPU bursts longer than a time quantum makes the RR scheduling algorithm _____.
preemptive
101
What characteristic does Round Robin scheduling typically have regarding average turnaround time compared to SJF?
RR typically has a higher average turnaround time than SJF.
102
What characteristic does Round Robin scheduling typically have regarding response time compared to SJF?
RR typically has a shorter response time than SJF.
103
What is a common characteristic of the average waiting time under the RR policy?
The average waiting time under the RR policy is often long.
104
The performance of the RR algorithm depends heavily on the size of the _____.
time quantum
105
If the time quantum is extremely large, the RR policy becomes the same as which other policy?
The FCFS (First-Come, First-Served) policy.
106
What is the potential result of setting an extremely small time quantum in the RR approach?
It can result in a large number of context switches.
107
In what situation does the average turnaround time of a set of processes improve under RR scheduling?
It can be improved if most processes finish their next CPU burst in a single time quantum.
108
How does the addition of context-switch time affect average turnaround time for a smaller time quantum?
The average turnaround time increases even more, since more context switches are required.
109
What is the relationship between the time quantum and context-switch time for efficient RR scheduling?
The time quantum should be large compared with the context-switch time.
110
What is a general rule of thumb for setting the time quantum size relative to CPU bursts?
A rule of thumb is that 80 percent of the CPU bursts should be shorter than the time quantum.
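The round-robin mechanics described on these cards (circular FIFO queue, timer preemption at the quantum, early release for short bursts) can be sketched as follows. Arrival at t=0 for all processes is a simplifying assumption:

```python
from collections import deque

# Round-robin sketch: FIFO queue treated circularly with a fixed quantum.
# Processes are (name, burst_ms); all are assumed to arrive at t=0.
def round_robin(procs, quantum):
    queue = deque(procs)
    t, completions = 0, {}
    while queue:
        name, remaining = queue.popleft()       # head of the ready queue
        run = min(quantum, remaining)           # short burst: released early
        t += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted -> queue tail
        else:
            completions[name] = t
    return completions

# P1=24, P2=3, P3=3 ms with quantum 4: P2 done at 7, P3 at 10, P1 at 30.
print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
```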
111
In priority scheduling, how is the CPU allocated?
The CPU is allocated to the process with the highest priority.
112
How are equal-priority processes scheduled in a priority scheduling system?
Equal-priority processes are scheduled in FCFS (First-Come, First-Served) order.
113
The Shortest Job First (SJF) algorithm is a special case of which general scheduling algorithm?
The general priority-scheduling algorithm.
114
Specifically, SJF is a priority algorithm where the priority is the _____.
predicted next CPU burst
115
In the context of the source material, what does a low number used for priority representation indicate?
A low number represents a high priority.
116
What is the term for priorities that use measurable quantities like memory requirements or I/O to CPU burst ratio?
Internal priorities.
117
What is the term for priorities that are set by criteria outside the operating system, such as process importance or funding?
External priorities.
118
On Linux, what command can be used to change the importance of a program, an example of an external priority?
The 'nice' command.
119
What is a major problem with priority scheduling algorithms, also known as starvation?
Indefinite blocking.
120
What is the term for a process that is ready to run but is waiting for the CPU?
It can be considered blocked.
121
How can a priority scheduling algorithm cause some low-priority processes to wait indefinitely?
In a heavily loaded system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
122
What is a solution to the problem of indefinite blockage of low-priority processes?
Aging.
123
What does the process of aging involve?
Aging involves gradually increasing the priority of processes that wait in the system for a long time.
124
Priority scheduling can be combined with round-robin so that the system executes the highest-priority process using priority scheduling and runs processes with the same priority using _____.
round-robin scheduling
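Aging can be sketched with a simple rule. The interval and priority floor below are assumed values for illustration; the cards only specify that priority gradually increases with waiting time and that a low number means high priority:

```python
# Aging sketch: every AGING_INTERVAL ticks of waiting lowers the priority
# number by 1 (low number = high priority, per the cards). The interval
# and the highest-priority floor are assumed, not from the source.
AGING_INTERVAL = 100
HIGHEST = 0

def age(priority, waited_ticks):
    boost = waited_ticks // AGING_INTERVAL
    return max(HIGHEST, priority - boost)

print(age(127, 250))  # 125: two aging steps after 250 ticks of waiting
print(age(1, 500))    # 0: priority is clamped at the highest level
```

Eventually even the lowest-priority process reaches the highest priority and runs, which is exactly how aging prevents indefinite blocking.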
125
In the example on slide 13, after process P2 finishes, why does P3 run to completion instead of continuing in RR with P2?
Because P3 becomes the highest-priority process in the ready queue at that moment.
126
When a process arrives at the ready queue in a priority system, its priority is compared with the priority of the _____.
currently running process
127
If a newly arrived process has a higher priority, what will a preemptive priority scheduling algorithm do?
It will allocate the CPU to the process with the higher priority.
128
If a newly arrived process has a higher priority, what will a nonpreemptive priority scheduling algorithm do?
It will simply put the new process at the head of the ready queue.
129
In practice, it is often easier to have separate queues for each distinct priority, which is known as _____.
multilevel queue
130
How does priority scheduling work with a multilevel queue structure?
It simply schedules the process in the highest-priority queue.
131
A multilevel queue scheduling algorithm partitions processes into several separate queues based on the _____.
process type
132
What is an example of partitioning processes into separate queues in a multilevel queue system?
Separate queues could be used for foreground and background processes.
133
In multilevel queue scheduling, can each queue have its own scheduling algorithm?
Yes, each queue could have its own scheduling algorithm (e.g., RR for foreground, FCFS for background).
134
What type of scheduling is commonly implemented to manage the queues themselves in a multilevel queue system?
Fixed-priority preemptive scheduling.
135
In a multilevel queue system, each queue has _____ priority over lower-priority queues.
absolute
136
In a multilevel queue example, under what condition could a process in the batch queue run?
Only if the queues for real-time, system, and interactive processes were all empty.
137
If an interactive process entered the ready queue while a batch process was running, what would happen to the batch process?
The batch process would be preempted.
138
Besides fixed-priority preemptive scheduling, what is another possibility for scheduling among queues in a multilevel system?
To time-slice among the queues.
139
What does it mean to time-slice among queues?
Each queue gets a certain portion of the CPU time, which it can then schedule among its various processes.
140
In a foreground-background queue example with time-slicing, the foreground queue could get 80% of CPU time for _____ scheduling.
RR (Round Robin)
141
In a foreground-background queue example with time-slicing, the background queue could get 20% of CPU time for _____ scheduling.
FCFS (First-Come, First-Served)
142
In a normal multilevel queue scheduling algorithm, how are processes assigned to a queue?
Processes are permanently assigned to a queue when they enter the system.
143
What is the primary advantage of the fixed queue assignment in standard multilevel queue scheduling?
It has low scheduling overhead.
144
What is the primary disadvantage of the fixed queue assignment in standard multilevel queue scheduling?
It is inflexible.
145
Which scheduling algorithm allows a process to move between queues?
The multilevel feedback queue scheduling algorithm.
146
The multilevel feedback queue algorithm differentiates processes according to the characteristics of their _____.
CPU bursts
147
In a multilevel feedback queue, what might happen to a process that uses too much CPU time?
It may be moved to a lower-priority queue.
148
Which types of processes are typically left in the higher-priority queues by a multilevel feedback queue algorithm?
I/O-bound and interactive processes.
149
In a multilevel feedback queue, what might happen to a process that waits too long in a lower-priority queue?
It may be moved to a higher-priority queue.
150
What problem does moving a process to a higher-priority queue after a long wait prevent?
This type of aging prevents starvation.
151
In the three-queue MFQS example, when will the scheduler execute processes in queue 1?
Only when queue 0 is empty.
152
In the three-queue MFQS example, a process in queue 1 will be preempted by a process arriving at which queue?
Queue 0.
153
In the three-queue MFQS example, a process that arrives for queue 1 will preempt a process in which queue?
Queue 2.
154
In the MFQS example, where is an entering process initially placed?
An entering process is put in queue 0.
155
In the MFQS example, what happens if a process in queue 0 does not finish within its 8 millisecond time quantum?
It is moved to the tail of queue 1.
156
If queue 0 is empty in the MFQS example, the process at the head of queue 1 is given a quantum of _____ milliseconds.
16
157
In the MFQS example, what happens if a process in queue 1 does not complete within its 16 millisecond time quantum?
It is preempted and moved to the tail of queue 2.
158
How are processes in queue 2 run in the MFQS example?
They are run on an FCFS basis.
159
Under what condition are processes in queue 2 run in the MFQS example?
They are run only when queues 0 and 1 are empty.
160
The MFQS algorithm gives the highest priority to any process with a CPU burst of _____ or less.
8 milliseconds
161
Processes that need more than 8 but less than 24 milliseconds are served quickly, although with _____ priority than shorter processes.
lower
162
In the MFQS example, where do long processes automatically sink to?
They sink to queue 2.
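The three-queue example on these cards (queue 0 with an 8 ms quantum, queue 1 with 16 ms, queue 2 FCFS, demotion on quantum expiry) implies a simple rule for where a burst of a given length ends up. A minimal sketch of that classification:

```python
# Three-queue MFQS example: quanta of 8 ms (queue 0) and 16 ms (queue 1);
# queue 2 is FCFS. A process is demoted each time it exhausts a quantum.
QUANTA = [8, 16]  # queue 2 has no quantum

def final_queue(burst_ms):
    """Return the queue in which a CPU burst of this length completes."""
    used = 0
    for q, quantum in enumerate(QUANTA):
        used += quantum
        if burst_ms <= used:
            return q      # finishes within this queue's quantum
    return len(QUANTA)    # long processes sink to queue 2

print(final_queue(5))    # 0: bursts of 8 ms or less get highest priority
print(final_queue(20))   # 1: more than 8 but at most 24 ms
print(final_queue(100))  # 2: long processes sink to the FCFS queue
```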
163
What is one parameter that defines a multilevel feedback queue scheduler?
The number of queues.
164
What is a second parameter that defines a multilevel feedback queue scheduler?
The scheduling algorithm for each queue.
165
What is a third parameter that defines a multilevel feedback queue scheduler?
The scheduling algorithm among the queues.
166
What is a fourth parameter that defines a multilevel feedback queue scheduler?
The method used to determine when to upgrade a process to a higher-priority queue.
167
What is a fifth parameter that defines a multilevel feedback queue scheduler?
The method used to determine when to demote a process to a lower-priority queue.
168
What is a sixth parameter that defines a multilevel feedback queue scheduler?
The method used to determine which queue a process will enter when that process needs service.
169
The definition of a multilevel feedback queue scheduler makes it the most _____ CPU-scheduling algorithm.
general
170
What is a major downside of the multilevel feedback queue scheduler, despite its generality?
It is also the most complex algorithm.
171
According to most modern operating systems, what entities are scheduled by the OS scheduler?
It is kernel-level threads, not processes, that are scheduled.
172
For systems using many-to-one and many-to-many models, what is the first step required to allocate the CPU to a user-level thread?
Process Contention Scope (PCS), which schedules user-level threads to be mapped to a kernel-level thread.
173
What does Process Contention Scope (PCS) involve?
Locally scheduling the threads belonging to the same process so that a user-level thread can be mapped to a kernel-level thread.
174
For systems using many-to-one and many-to-many models, what is the second step required to allocate the CPU to a user-level thread?
System Contention Scope (SCS), which schedules kernel-level threads so the CPU can be allocated to one.
175
What does System Contention Scope (SCS) involve?
Globally scheduling kernel-level threads so that the CPU can be allocated to a kernel-level thread.
176
For systems using the one-to-one model, such as Windows and Linux, which contention scope is required for thread scheduling?
Only System Contention Scope (SCS) is required.
177
In _____ Multiprocessing, a single processor known as the master server handles all scheduling decisions and system activities.
Asymmetric
178
What is the primary role of the 'master server' in Asymmetric Multiprocessing?
It handles all scheduling decisions, I/O processing, and other system activities.
179
In Asymmetric Multiprocessing, what task is performed by the processors other than the master server?
They only execute user code.
180
What is the main advantage of the Asymmetric Multiprocessing scheme?
It is a simple scheme because only one processor accesses system data structures, reducing the need for data sharing.
181
What is the main disadvantage of Asymmetric Multiprocessing?
The master server becomes a potential bottleneck that could affect overall system performance.
182
What is the standard scheduling scheme for a multiprocessor system?
Symmetric Multiprocessing (SMP).
183
In Symmetric Multiprocessing (SMP), how is scheduling handled?
Each processor is self-scheduling.
184
What does the scheduler for each processor do in an SMP system?
It examines the ready queue and selects a process to run.
185
What is the first of two possible strategies for organizing processes eligible for scheduling in an SMP system?
All processes may be in a common ready queue.
186
What is the second of two possible strategies for organizing processes eligible for scheduling in an SMP system?
Each processor may have its own private ready queue of processes.
187
What is a potential problem that can arise when using a common ready queue in an SMP system?
Two separate processors might try to schedule the same process in the shared ready queue.
188
What is the primary advantage of each processor having its own private ready queue in an SMP system?
It does not suffer from the possible performance problems associated with a shared ready queue.
189
Which strategy for organizing ready queues is the most common on systems supporting SMP?
Each processor having its own private ready queue.
190
Name four modern operating systems that support Symmetric Multiprocessing (SMP).
Windows, Linux, macOS, and mobile operating systems like Android and iOS.
191
What is the purpose of load balancing in an SMP system?
It attempts to keep the workload evenly distributed across all processors.
192
Why is it important to balance workloads on SMP systems?
To fully utilize the benefits of having multiple processors.
193
What is a potential consequence of not performing load balancing on an SMP system?
One or more processors may sit idle while other processors have high workloads.
194
Load balancing is typically necessary only on systems with what kind of ready queue organization?
Systems where each processor has its own private ready queue.
195
Why is load balancing generally unnecessary on systems with a common ready queue?
Because once a processor becomes idle, it immediately extracts a runnable process from the common ready queue.
196
What are the two general approaches to load balancing?
Push migration and pull migration.
197
In load balancing, what is push migration?
A specific task periodically checks the load on each processor and moves processes from overloaded to less-busy processors.
198
In load balancing, what is pull migration?
It occurs when an idle processor pulls a waiting process from a busy processor.
199
Are push and pull migration mutually exclusive techniques?
No, they are often implemented in parallel on load-balancing systems.
200
Which scheduler in Linux implements both push and pull migration techniques for load balancing?
The Completely Fair Scheduler (CFS).
201
When a process has been running on a specific processor, what populates the cache memory of that processor?
The data most recently accessed by the process.
202
What is the result of a processor's cache being populated with a process's recently accessed data?
Successive memory accesses by the process are often satisfied by the cache memory, known as a 'warm cache'.
203
What is the term for a processor cache that contains the data most recently accessed by a process?
A 'warm cache'.
204
What two things must happen to cache memory if a process migrates to another processor due to load balancing?
The content of the cache on the first processor must be invalidated, and the cache for the second processor must be repopulated.
205
The attempt by most SMP operating systems to avoid migrating a process from one processor to another is known as _____.
processor affinity
206
Why do most SMP operating systems try to maintain processor affinity?
Because of the high cost of invalidating and repopulating caches.
207
How does using a common ready queue affect processor affinity?
A process could be scheduled on a new processor, in which case the new processor's cache must be repopulated.
208
How does using a private ready queue promote processor affinity?
A process is always assigned to the same processor and can therefore benefit from the content of a 'warm' cache.
209
What were the two major weaknesses of the Linux kernel's scheduling algorithm prior to Version 2.5?
It was not designed for SMP systems and had poor performance with a large number of runnable processes.
210
With kernel Version 2.5, what scheduling algorithm was introduced in Linux?
The O(1) scheduler.
211
The O(1) scheduler was designed to run in _____ time regardless of the number of tasks in the system.
constant
212
What two key features for SMP systems did the O(1) scheduler provide?
It provided increased support for processor affinity and load balancing between processors.
213
In practice, what was the primary disadvantage of the O(1) scheduler despite its excellent SMP performance?
It led to poor response times for interactive processes common on many desktop computer systems.
214
In release 2.6.23 of the Linux kernel, which algorithm became the default scheduler?
The Completely Fair Scheduler (CFS).
215
In release 6.6 of the Linux kernel, which algorithm became the default scheduler?
Earliest Eligible Virtual Deadline First (EEVDF).
216
Name five criteria that may be used to evaluate the performance of scheduling algorithms.
CPU utilization, throughput, turnaround time, waiting time, and response time.
217
What is an example of a specific selection metric for a scheduling algorithm?
Maximizing CPU utilization under the constraint that the maximum response time is 300 milliseconds.
218
What is deterministic modeling as an evaluation method for scheduling algorithms?
This method takes a particular predetermined workload and generates the performance of an algorithm under that workload.
219
What are the primary benefits of using deterministic modeling for algorithm evaluation?
It is simple and fast, giving exact numbers that allow for direct comparison of algorithms.
220
What is the main limitation of deterministic modeling?
It requires exact numbers for input, and its answers apply only to those specific cases.
221
In the provided example, what was the calculated average waiting time for the FCFS algorithm?
8.75 msec.
222
In the provided example, what was the calculated average waiting time for the Non-preemptive SJF algorithm?
7.75 msec.
223
In the provided example, what was the calculated average waiting time for the Preemptive SJF algorithm?
6.50 msec.
224
In the provided example, what was the calculated average waiting time for the RR algorithm with a time quantum of 4 msec?
11.75 msec.
225
In the specific example shown, which algorithm yielded the smallest average waiting time?
The preemptive SJF policy.
226
In the specific example shown, which algorithm yielded the largest average waiting time?
The RR (Round Robin) algorithm.
227
What are the main uses of deterministic modeling in the study of scheduling algorithms?
It is mainly used to describe scheduling algorithms, provide examples, and indicate trends over a set of examples.
228
To accurately evaluate scheduling algorithms, we can use an evaluation method called _____.
simulation
229
Running a simulation for algorithm evaluation involves programming a _____ of the computer system.
model
230
In a system simulation, what do software data structures represent?
They represent the major components of the system.
231
A simulator has a variable representing a _____, which, when increased, causes the simulator to modify the system state.
clock
232
As a simulation executes, what kind of information is gathered and printed?
Statistics that indicate algorithm performance are gathered and printed.
233
What is the most common method used to generate data to drive a simulation?
Using a random-number generator that is programmed to generate events according to probability distributions.
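A distribution-driven simulation can be sketched in a few lines: a random-number generator produces arrival and burst events, and a clock variable advances as the simulator modifies the system state. The exponential distributions and their parameters are assumptions for the example, not values from the text.

```python
import random

random.seed(42)

def generate_workload(n, mean_gap=3.0, mean_burst=5.0):
    """Generate (arrival, burst) events from probability distributions
    (hypothetical exponential parameters)."""
    jobs, t = [], 0.0
    for _ in range(n):
        t += random.expovariate(1.0 / mean_gap)
        jobs.append((t, random.expovariate(1.0 / mean_burst)))
    return jobs

def simulate_fcfs(jobs):
    """Drive a simple FCFS model with the generated events; gather stats."""
    clock, total_wait = 0.0, 0.0
    for arrival, burst in jobs:
        clock = max(clock, arrival)   # advance the clock to the job's start
        total_wait += clock - arrival
        clock += burst                # state change: job runs to completion
    return total_wait / len(jobs)

avg = simulate_fcfs(generate_workload(1000))
print(f"average waiting time: {avg:.2f}")
```

Note the limitation the next cards describe: the generator reproduces the frequency distribution of events but not any correlation between successive events in a real system.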
234
What is a potential source of inaccuracy in a distribution-driven simulation?
It may be inaccurate because of relationships between successive events in the real system that the distribution doesn't capture.
235
What information does a frequency distribution fail to indicate about system events?
It does not indicate anything about the order of their occurrence.
236
To correct for the inaccuracies of distribution-driven simulations, one can use _____.
trace files
237
How is a trace file created?
By monitoring a real system and recording the sequence of actual events.
238
How is a trace file used in the evaluation of scheduling algorithms?
The recorded sequence of actual events is used to drive the simulation.
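A trace-driven variant replaces the random generator with recorded events. The "arrival burst" line format and the sample values below are hypothetical, standing in for what a monitor on a real system might have recorded.

```python
import io

# A tiny trace as it might have been recorded from a real system
# (hypothetical format: one "arrival burst" pair per line).
trace = io.StringIO("0 8\n1 4\n2 9\n3 5\n")

def fcfs_from_trace(f):
    """Drive the FCFS simulation with the recorded event sequence."""
    clock, total, n = 0.0, 0.0, 0
    for line in f:
        arrival, burst = map(float, line.split())
        clock = max(clock, arrival)
        total += clock - arrival
        clock += burst
        n += 1
    return total / n

avg = fcfs_from_trace(trace)   # same events in the same order on every run
print(avg)
```

Because the trace preserves the actual order of events, it avoids the correlation problem of distribution-driven simulation, at the cost of having to monitor a real system first.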