A2 Operating Systems Flashcards

(34 cards)

1
Q

Interrupts

A

An interrupt is a signal that temporarily stops the CPU from its current task. The CPU pauses the current program and runs a routine called an Interrupt Service Routine (ISR). After the ISR completes, the CPU resumes the original program.

2
Q

Interrupt Priorities

A

Multiple interrupts may occur at the same time. Higher-priority interrupts are handled first. Lower-priority interrupts may be delayed. A higher-priority interrupt can interrupt a lower-priority ISR.

3
Q

Multi-programming

A

Multiprogramming handles CPU efficiency at a low level: The OS keeps multiple programs in memory. When one program waits for I/O, the CPU can immediately switch to another program.
This ensures the CPU is almost never idle.

4
Q

Multi-tasking

A

Multitasking handles user experience at a high level: The OS divides CPU time among many processes quickly. Each program appears to the user to run simultaneously, even though the CPU can really run only one instruction at a time.

5
Q

Partitioning

A

Partitioning is when main memory is divided into sections so that multiple processes can be stored and run at the same time.

6
Q

Purpose of Partitioning

A

Processes must be allocated an exclusive area of main memory. An allocated area of memory cannot be used by a second process until the first process is complete and de-allocated from memory.

7
Q

Fixed Partitioning

A

Main memory is divided into a set number of non-overlapping partitions of fixed sizes. Sizes are decided in advance; a process may be loaded into a partition of equal or greater size and is confined to its allocated partition.

8
Q

Variable Partitioning

A

Variable partitioning is a system for dividing memory into non-overlapping partitions of variable size. The number of partitions is fixed, but the size of each partition may vary. It is more flexible than the fixed partitioning configuration: small processes are allocated to small partitions and large processes to larger partitions.

9
Q

Dynamic Partitioning

A

Partitions are not made before execution but during run-time according to processes’ needs, with the size of partition equal to the size of incoming process. The number of partitions is not fixed but depends on the number of incoming processes and size of main memory.
Pros: no internal fragmentation; no restriction on the degree of multiprogramming; no restriction on process size.
Cons: more difficult to implement, as memory must be allocated during run-time; external fragmentation may still arise.
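Dynamic partitioning can be sketched as a first-fit allocator over a list of free holes. The hole layout and sizes below are made up for illustration; the last call shows external fragmentation, where total free memory is sufficient but no single hole fits.

```python
# Sketch of dynamic partition allocation using first-fit over free holes.
# Each hole is (start_address, length); values are illustrative.
def first_fit(holes, size):
    """Allocate `size` units from the first hole big enough; return start address."""
    for i, (start, length) in enumerate(holes):
        if length >= size:
            holes[i] = (start + size, length - size)  # shrink the hole in place
            return start
    return None  # no single hole is large enough

holes = [(0, 100)]
print(first_fit(holes, 40))              # -> 0 (hole shrinks to (40, 60))

# External fragmentation: 80 units free in total, but split into two holes,
# so a request for 60 contiguous units fails.
print(first_fit([(0, 40), (100, 40)], 60))  # -> None
```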

10
Q

Internal Fragmentation

A

When a process is smaller than the fixed partition it occupies, the partition is marked as used but much of its space is left unoccupied and wasted.

11
Q

External Fragmentation

A

Free memory exists but is split into small gaps and no single block is large enough for a new process.

12
Q

Performance overhead

A

More time spent managing memory instead of executing programs.

13
Q

Limitation on degree of Multiprogramming

A

Partitions in main memory are made before execution. The number of processes cannot be greater than the number of partitions in memory.

14
Q

Paging

A

Paging divides programs into fixed-size blocks called pages and memory into fixed-size blocks called frames. Any page can be stored in any free frame, which eliminates external fragmentation.

15
Q

How Paging works

A

The program is split into pages and RAM is split into frames. A page table keeps track of where each page is stored. The CPU generates a logical address; the MMU converts it to a physical address using the page table.
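The translation steps above can be sketched in a few lines of Python. The page size and page-table contents here are made-up values for illustration.

```python
# Sketch of paged address translation (hypothetical page table values).
PAGE_SIZE = 1024  # bytes per page/frame (assumed for this example)

# Page table: page number -> frame number (illustrative layout)
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Convert a logical address to a physical address via the page table."""
    page = logical_address // PAGE_SIZE   # which page the address falls in
    offset = logical_address % PAGE_SIZE  # position within that page
    frame = page_table[page]              # MMU looks up the frame
    return frame * PAGE_SIZE + offset

print(translate(1500))  # page 1, offset 476 -> frame 2 -> 2524
```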

16
Q

Advantages of Paging

A
  • No external fragmentation
    Memory is divided into fixed-size pages and frames, so free memory is always usable regardless of where it is located.
  • Efficient memory use
    Processes can be loaded into any available memory frames, which avoids the need for large contiguous memory blocks.
  • Supports virtual memory
    Paging allows only required pages of a process to be loaded into RAM, enabling programs larger than physical memory to run.
17
Q

Disadvantages of Paging

A
  • Internal fragmentation
    If a process does not completely fill its last page, the remaining space in that page is wasted.
  • Page tables take memory
    Each process requires a page table to map virtual pages to physical frames, which consumes additional memory.
  • Slight performance overhead
    Address translation through page tables adds extra steps during memory access, which can slow down execution without hardware support such as a TLB.
18
Q

Segmentation

A

Segmentation is a memory management technique where a program is divided into logical segments. Segments can be exactly the size requested, rather than forcing data into fixed-size chunks.

19
Q

How Segmentation works

A

Program is divided into logical segments. Each segment has a variable size, unlike fixed-size pages. RAM stores segments in contiguous memory locations. A Segment Table keeps track of each segment’s base address and limit. CPU generates a logical address. MMU checks the offset against the segment limit and converts it to a physical address.
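The base-and-limit check above can be sketched as follows; the segment table contents are hypothetical values chosen for illustration.

```python
# Sketch of segmented address translation (hypothetical segment table).
# Each entry stores a base address and a limit (segment length).
segment_table = {
    0: {"base": 4000, "limit": 1200},  # e.g. a code segment
    1: {"base": 8800, "limit": 300},   # e.g. a stack segment
}

def translate(segment, offset):
    """Check the offset against the segment limit, then add the base."""
    entry = segment_table[segment]
    if offset >= entry["limit"]:       # MMU bounds check
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return entry["base"] + offset

print(translate(0, 100))  # -> 4100
```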

20
Q

Advantages of Segmentation

A

It supports logical program structure, making programs easier to manage and protect. Different segments can have different access permissions. It allows sharing of segments like libraries between processes.

21
Q

Disadvantages of Segmentation

A

It suffers from external fragmentation because segments are variable in size. Memory allocation and compaction become more complex. It is generally slower than paging due to the need for dynamic memory management.

22
Q

Data Transfer

A

Data transfer is the movement of data between:
CPU and Memory.
Memory and Input/Output devices.

23
Q

Buffers

A

A buffer is a temporary storage area in memory used during data transfer. Example: printing. The CPU sends data quickly to a buffer and continues with other tasks; the printer reads data slowly from the buffer. This prevents the CPU from being blocked and smooths out speed differences between devices.
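The printer example can be sketched with a plain queue standing in for the buffer; the function names and data are illustrative, not a real printing API.

```python
# Minimal sketch of buffering between a fast producer (the CPU) and a
# slow consumer (the printer), using a queue as the buffer.
from collections import deque

buffer = deque()

def cpu_send(data):
    """CPU writes quickly into the buffer and moves on to other work."""
    for chunk in data:
        buffer.append(chunk)

def printer_read():
    """Printer drains the buffer at its own (slower) pace."""
    printed = []
    while buffer:
        printed.append(buffer.popleft())
    return printed

cpu_send(["page1", "page2", "page3"])
print(printer_read())  # -> ['page1', 'page2', 'page3']
```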

24
Q

Memory Buffering

A

The temporary storage in memory of information / processes that are waiting to be executed.

25

Q

Double Buffering

A

The use of two buffers increases the throughput of a device and helps prevent bottlenecks. Why double buffering is used: continuous data flow - no waiting for one buffer to finish; improved performance - the CPU and I/O devices work in parallel; prevents screen flicker - common in animations and video.
26

Q

Scheduling

A

Scheduling is how the operating system decides which process runs, when it runs and for how long. Every process is always in one of three states: Running - the process is currently using the CPU. Ready - the process is waiting for CPU time; all resources except the CPU are available. Blocked - the process cannot continue because it is waiting for input/output or for data from another process.
27

Q

High-Level Scheduling Principles

A

Processor allocation - decides which process gets the CPU and ensures fair access. Allocation of devices - controls access to printers, storage and input devices, preventing conflicts. Job priorities - each job has a priority level; higher-priority jobs run first and get more CPU time.
28

Q

Round Robin Scheduling

A

Round robin scheduling allocates CPU time equally using time slices. Each process gets a fixed time slice; processes are placed in a queue and the CPU cycles through them in order. Advantages - fair (no process starves), good for interactive systems, predictable behaviour. Disadvantages - too much switching if the time slice is small; longer completion time for long jobs.
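The queue-and-time-slice behaviour can be simulated in a few lines; the process names and burst times below are invented for illustration.

```python
# Sketch of round robin scheduling over illustrative CPU burst times.
from collections import deque

def round_robin(bursts, time_slice):
    """Return the sequence of (process, time run) turns on the CPU."""
    queue = deque(bursts.items())  # (name, remaining CPU time needed)
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        run = min(time_slice, remaining)
        schedule.append((name, run))
        if remaining > run:                    # not finished yet:
            queue.append((name, remaining - run))  # back of the queue
    return schedule

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, time_slice=2))
# -> [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```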
29

Q

First-Come-First-Served Scheduling

A

First-come, first-served executes processes in the order they arrive. Advantages - simple to implement, low overhead. Disadvantages - long waiting times; convoy effect, where short processes are stuck behind a long one.
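The convoy effect can be shown by computing FCFS waiting times; the burst times here are made up for illustration.

```python
# Sketch: FCFS waiting times, illustrating the convoy effect.
def fcfs_waiting_times(bursts):
    """Each process waits for all earlier arrivals to finish first."""
    waits, elapsed = {}, 0
    for name, burst in bursts:   # bursts listed in arrival order
        waits[name] = elapsed
        elapsed += burst
    return waits

# One long job at the front delays every short job behind it.
print(fcfs_waiting_times([("long", 20), ("short1", 2), ("short2", 2)]))
# -> {'long': 0, 'short1': 20, 'short2': 22}
```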
30

Q

Priority Scheduling

A

Each process is given a priority level, and higher-priority processes run first. Advantages - important tasks are handled quickly; suitable for real-time systems. Disadvantages - if new, higher-priority processes keep entering the ready queue, lower-priority processes may wait a long time before execution, possibly leading to indefinite blocking (starvation). Aging can prevent starvation: the priority of a low-priority job is incremented as its waiting time increases.
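Aging can be sketched as follows. The convention that a lower number means higher priority, and the aging rate, are assumptions chosen for illustration.

```python
# Sketch of aging in priority scheduling (lower number = higher priority,
# an assumed convention; values are illustrative).
def pick_next(processes, aging_rate=1):
    """Choose the highest-priority process, then age everyone left waiting."""
    chosen = min(processes, key=lambda p: p["priority"])
    for p in processes:
        if p is not chosen:
            p["priority"] -= aging_rate  # waiting processes gain priority
    return chosen["name"]

procs = [{"name": "A", "priority": 1}, {"name": "B", "priority": 10}]
# B's priority improves each time it is passed over, so it eventually runs
# instead of starving.
print(pick_next(procs))  # -> A
```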
31

Q

Threads

A

A thread is the smallest unit of execution within a process. A process can contain multiple threads.
32

Q

Threading

A

A process is split into threads. Threads share memory and resources but run independently. Benefits: faster execution, better responsiveness, efficient multitasking.
33

Q

Polling

A

The CPU regularly checks whether a device or process is ready ("Are you ready yet?"). Downside: this wastes CPU time if done too often, and is inefficient if the device is slow - the CPU may check thousands of times before it is ready.
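A polling loop can be sketched with a simulated device; the `make_device` helper is hypothetical, standing in for a real hardware status register.

```python
# Sketch of polling: the CPU repeatedly asks a (simulated) device if it is
# ready, wasting a check on every "no".
import itertools

def make_device(ready_after):
    """Simulated device that reports ready on its Nth status check."""
    counter = itertools.count(1)
    return lambda: next(counter) >= ready_after

def poll(device_ready):
    checks = 0
    while not device_ready():   # wasted CPU cycles while the device is busy
        checks += 1
    return checks + 1           # total status checks performed

print(poll(make_device(5)))  # -> 5 checks before the device was ready
```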
34

Q

Time Slicing

A

CPU time is split into small time slices and each process gets a turn. It is used for fairness, prevents one process from monopolising the CPU, and is used in multitasking systems.