OS4 Flashcards

(37 cards)

1
Q

State and explain the 3 types of queues in scheduling.

A

Job queue: processes awaiting admission. Ready queue: processes in main memory, waiting to execute. Wait queue: processes waiting for an event.

2
Q

Describe the CPU/IO burst cycle.

A

Process execution consists of a cycle of CPU execution (CPU bursts) alternating with waiting for I/O (I/O bursts).

3
Q

Describe an I/O-bound process.

A

It has many short CPU bursts.

4
Q

Describe a CPU bound process.

A

It has fewer, longer CPU bursts.

5
Q

How is CPU utilisation maximised?

A

With multiprogramming: the CPU runs another process while one is waiting for I/O.

6
Q

What does the CPU scheduler do?

A

Selects which process in the ready queue should be executed next, and allocates the CPU to it.

7
Q

What does the job scheduler do?

A

Selects which processes should be brought into the ready queue.

8
Q

Compare the CPU scheduler and the job scheduler.

A

The CPU scheduler is invoked frequently (milliseconds); the job scheduler is invoked infrequently (seconds or minutes). Some systems have only a CPU scheduler.

9
Q

State the 3 ways of handling idling.

A

Busy wait; halt the CPU until interrupted; invent an idle process.

10
Q

Describe the pros/cons of busy wait.

A

Short response time but inefficient.

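The trade-off between the two waiting styles can be sketched with two threads (a toy user-level illustration, not OS code; names are invented): one spins on a flag, the other blocks on an event until it is signalled.

```python
import threading
import time

flag = {"ready": False}
event = threading.Event()

def busy_wait():
    # Busy wait: spin on the flag, consuming CPU the whole time,
    # but reacting almost immediately once the flag flips.
    while not flag["ready"]:
        pass

def halt_until_interrupted():
    # Blocking wait: the thread sleeps until it is woken,
    # saving CPU at the cost of some wakeup latency.
    event.wait()

t1 = threading.Thread(target=busy_wait)
t2 = threading.Thread(target=halt_until_interrupted)
t1.start(); t2.start()

time.sleep(0.1)       # simulated I/O delay
flag["ready"] = True  # the "interrupt" for the busy waiter
event.set()           # wake the blocked waiter

t1.join(); t2.join()
print("both waiters finished")
```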
11
Q

Describe the pros/cons of halting CPU until interrupted.

A

Saves energy but increases latency.

12
Q

Describe the pros/cons of using an idle process.

A

The idle process can do housekeeping, but consumes resources and slows interrupt response.

13
Q

What does the dispatcher do?

A

The dispatcher gives control of the CPU to the selected process by: switching context, switching to user mode, and resuming execution of the user process.

14
Q

Define dispatch latency.

A

Dispatch latency is the time it takes to complete the stop/start procedure of the dispatcher.

15
Q

Compare non-preemptive and pre-emptive schedulers.

A

Non-preemptive: the running process decides when it enters the scheduler (by yielding, blocking, or terminating). Pre-emptive: the scheduler can force a running process out of the CPU (running -> ready).

16
Q

State the hardware requirement for pre-emptive scheduling.

A

Hardware support for a regular timer interrupt.

17
Q

State the system requirement for non-preemptive scheduling.

A

A yield system call, so the running process can voluntarily enter the scheduler.
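The yield idea can be illustrated with Python generators (a toy cooperative scheduler, not a real system call; process names and step counts are invented): each `yield` hands control back to a round-robin scheduler, which never interrupts a running "process".

```python
from collections import deque

def proc(name, steps):
    # A cooperative "process": each yield is the analogue of a yield
    # system call, returning control to the scheduler.
    for i in range(steps):
        yield f"{name} step {i}"

def run(procs):
    # Minimal non-preemptive round-robin: a process runs until it
    # yields; the scheduler cannot force it out.
    ready = deque(procs)
    trace = []
    while ready:
        p = ready.popleft()
        try:
            trace.append(next(p))
            ready.append(p)   # back to the ready queue after yielding
        except StopIteration:
            pass              # process terminated
    return trace

trace = run([proc("A", 2), proc("B", 1)])
print(trace)  # ['A step 0', 'B step 0', 'A step 1']
```

A process that never yields would monopolise this scheduler forever, which is exactly the denial-of-service risk that pre-emption avoids.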

18
Q

Describe the pros/cons of pre-emptive scheduling.

A

Pros: prevents denial of service, since long-running processes are pre-empted. Cons: more complex to implement.

19
Q

Define turnaround time.

A

The total time from process submission to completion.

20
Q

Define waiting time.

A

The total time of a process in the ready state.

21
Q

Define response time.

A

The time from process submission until it starts responding.
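These metrics can be computed for a small hypothetical FCFS workload (arrival times and burst lengths invented for illustration):

```python
# Hypothetical workload: (arrival time, CPU burst) pairs, run FCFS.
jobs = [(0, 6), (1, 3), (2, 1)]

clock = 0
turnaround, waiting, response = [], [], []
for arrival, burst in jobs:
    start = max(clock, arrival)          # CPU may be idle until arrival
    clock = start + burst                # run the whole burst (FCFS)
    response.append(start - arrival)     # submission -> first execution
    turnaround.append(clock - arrival)   # submission -> completion
    waiting.append(start - arrival)      # time spent in the ready queue
# (with a single burst and no I/O, waiting == response under FCFS)

print("avg turnaround:", sum(turnaround) / len(jobs))   # 22/3
print("avg waiting:   ", sum(waiting) / len(jobs))      # 4.0
print("throughput:    ", len(jobs) / clock, "jobs/unit")  # 0.3
```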

22
Q

State the cons of minimising waiting time.

A

Penalises IO heavy processes that spend long times in wait state.

23
Q

State the cons of minimising response time.

A

Penalises longer running sessions under heavy load.

24
Q

What does it mean to maximise CPU utilisation?

A

To maximise the time the CPU is actively in use.

25
Q

State the cons of maximising CPU utilisation.

A

Penalises I/O-heavy processes, which appear to leave the CPU idle.

26
Q

Define throughput.

A

The rate at which processes complete execution.

27
Q

State the cons of maximising throughput.

A

Penalises long-running processes, as short processes are preferred.

28
Q

Describe asymmetric multiprocessing.

A

Only one processor accesses the system data structures, reducing the need for data sharing.

29
Q

Describe symmetric multiprocessing.

A

Each processor is self-scheduling.

30
Q

Define processor affinity.

A

Processor affinity is when a process has a preference for which processor it runs on.

31
Q

State the two types of processor affinity.

A

Soft affinity indicates a preference. Hard affinity indicates a constraint.

32
Q

Explain non-uniform memory access.

A

Different CPUs have faster or slower access to different parts of memory, depending on the memory's physical location relative to the CPU.

33
Q

Define load balancing.

A

Load balancing attempts to keep the workload evenly distributed across processors.

34
Q

Describe push and pull migration.

A

Push migration: periodically check the load on each CPU and push tasks from overloaded CPUs to other CPUs. Pull migration: idle CPUs pull ready tasks off busy CPUs.

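A minimal sketch of pull migration, with hypothetical per-CPU ready queues (the dict structure and process names are invented for illustration):

```python
# Hypothetical per-CPU ready queues; CPU 2 is idle.
queues = {0: ["p1", "p2", "p3"], 1: ["p4"], 2: []}

def pull_migration(queues, idle_cpu):
    # Pull migration: an idle CPU takes a ready task from the most
    # loaded CPU's queue. (Push migration would instead have a periodic
    # balancer move tasks off overloaded CPUs.)
    busiest = max(queues, key=lambda c: len(queues[c]))
    if queues[busiest] and busiest != idle_cpu:
        task = queues[busiest].pop()
        queues[idle_cpu].append(task)
        return task
    return None

moved = pull_migration(queues, idle_cpu=2)
print(moved, queues)  # 'p3' moves from CPU 0 to CPU 2
```

Note that migrating a task this way sacrifices its processor affinity, which is the usual tension between load balancing and affinity.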
35
Q

Describe multicore.

A

Placing multiple CPU cores on the same chip, increasing speed and efficiency.

36
Q

Describe hyperthreading.

A

Running multiple hardware threads per core, so one thread can make progress while another stalls on a memory access.

37
Q

State the challenge virtualisation presents to scheduling.

A

The hypervisor and the guest operating systems each run their own schedulers, all scheduling against each other for the same physical CPUs.