deck_20618258 Flashcards

(296 cards)

1
Q

what is a race condition?

A

A race condition occurs when multiple threads modify shared memory simultaneously, and the outcome depends on order of access.

2
Q

what is an atomic instruction?

A

An instruction that is executed as a single, indivisible step. A single machine-code instruction is atomic as it cannot be interrupted midway.

3
Q

what are critical sections?

A

Lines of code that need to be protected to prevent a race condition error

4
Q

what is the purpose of a mutual exclusion lock (mutex)?

A

Prevents multiple threads from accessing a shared resource at the same time. The thread holding the mutex lock will block any other threads trying to run that code until the mutex is unlocked.

5
Q

What is a major issue of mutex?

A

When the unlock call is not executed. This can either happen as a result of a programming mistake or if an exception occurs within the critical section.

6
Q

what is std::lock_guard?

A

A class that builds on std::mutex. The lock_guard will lock the mutex then release it when lock_guard goes out of scope. We must ensure the lock_guard goes out of scope as soon as the critical section code is finished.

7
Q

What is Deadlock?

A

Deadlock occurs when multiple threads become stuck at a synchronisation barrier, each waiting for resources held by other threads.

8
Q

What is try_lock()?

A

One way to prevent a deadlock is using try_lock() to check if mutex is already locked. If it is locked, the call will return false and the thread can release its own resources to allow the other thread to continue, and later attempt try_lock again.

9
Q

What is a livelock?

A

-when two or more threads keep changing their states in response to each other.
-However, the threads keep reacting to each other’s actions in a way that prevents progress. -This can result from a naive try_lock() retry loop.

10
Q

What is resource starvation?

A

Starvation happens when a process does not get enough CPU time or resources to make meaningful progress.

11
Q

what is the purpose of thread_local?

A

thread_local ensures each thread gets its own independent instance of the variable.

12
Q

What is a condition variable?

A

A condition variable enables threads to wait for specific conditions to be satisfied before continuing execution.

13
Q

what are spurious wakeups?

A

Spurious wakeups occur when the condition variable wakes threads even if they were not notified.

14
Q

how are spurious wakeups managed by the condition variable?

A

the wait() function is given a lambda (inline function) to recheck the condition once the thread wakes up

15
Q

What is the reason to use unique_lock instead of mutex?

A

unique_lock enables us to work with condition variables.

16
Q

what are the steps that a program has when using unique_lock with condition variables?

A
  1. Process code that is not in the critical section
  2. Reach the critical section
  3. Create the unique_lock with the mutex we want to use; this will immediately lock the shared mutex.
  4. Process critical section code until we reach a point where we need a condition to be true before we can continue.
  5. Wait on the condition variable - this unlocks the mutex and waits until another thread issues a notify on the condition variable; it will relock when notified.
17
Q

What is a counting semaphore?

A

A counting semaphore manages multiple resources by permitting a certain number of threads to access the resource simultaneously. The counter indicates the number of available resources.

18
Q

What are the two operations of a semaphore?

A
  1. “Acquire” decreases the counter when a resource is about to be used.
  2. “Release” increases the counter again, making a resource available.
19
Q

What are the differences between a mutex and a semaphore?

A
  1. A mutex admits a single thread at a time; a semaphore lets multiple threads acquire, up to the semaphore count.
  2. lock()/unlock() vs acquire()/release()
  3. A mutex is owned by the thread that locks it, so unlock() must be called by the same thread. A semaphore has no ownership.
20
Q

how do synchronisation barriers work?

A
  1. Each thread executes up to a given point (barrier) and then waits.
  2. Once all the threads have arrived at the barrier, they are all released, and if the final check (using a lambda function) passes they are allowed to continue executing.
21
Q

a re-entrant function can be invoked again before its previous execution is completed based on what requirements?

A
  1. Any static or global variables are strictly controlled.
  2. They do not return pointers to static data (unless that data is read-only).
  3. They only operate on data passed as arguments.
22
Q

When should you write a multi-threaded program?

A
  1. A wide variety of input sources need to be managed
  2. Different tasks within the program are of different importance
  3. Preventing a task from dominating the CPU
23
Q

What is the difference between re-entrant and thread-safe?

A

-a re-entrant function avoids shared state and operates only on data passed as arguments, so it is safe to re-enter even without locks
-a thread-safe function may use shared resources
-but remains safe to be called by multiple threads simultaneously by employing e.g. mutexes.

24
Q

what is inter-thread communication?

A

Inter-thread communication occurs between threads within the same process. Since all threads share the same memory space, they can communicate by directly accessing memory.

25
Advantages and disadvantages of inter-thread communication
provides the highest performance, however, it demands careful programming to prevent deadlocks and data corruption, and there is no memory protection between threads.
26
What is inter-process communication?
IPC takes place between processes running on the same OS. Processes have their own memory spaces so they cannot directly access each other's data. Communication must therefore use mechanisms provided by the OS.
27
Advantages and disadvantage of inter-process communication.
IPC is slower than ITC because the OS must handle the memory protection and data transfers between processes. However, it offers much better isolation and fault tolerance.
28
What is inter-computer communication?
Inter-computer communication facilitates data exchange between processes on different machines connected via a network.
29
What is a rendezvous?
a synchronisation point where two threads or processes meet to exchange data. This means both processes are ready simultaneously, resulting in a direct message exchange with no intermediate storage.
30
Advantages of synchronised data transfer
easier control over the timing and flow of data and the elimination of the need for extra memory or buffering mechanisms
31
3 steps of rendezvous in producer-consumer example
1. The producer waits until the consumer is ready to receive data 2. The consumer waits until the producer has the data available 3. Data exchange occurs when both parties are willing to transfer it. At that point, the two are synchronised.
32
Disadvantages of synchronised data transfer
can decrease efficiency because one process must wait for another. can be difficult to implement when many processes are involved.
33
what are exceptions?
unexpected events or conditions that disrupt the normal flow of program execution.
34
what are the two types of exceptions?
hardware and software exceptions
35
are hardware and software exceptions synchronous or asynchronous?
both types are synchronous events because they occur as a direct result of the machine-code instruction currently being executed
36
are interrupt events synchronous or asynchronous?
These events occur asynchronously as they happen independently of the current instruction flow.
37
When do hardware exceptions occur?
Hardware exceptions happen when the processor detects an abnormal condition during instruction execution, such as division by zero, invalid memory access or arithmetic overflow.
38
what are the two main ways to handle issues in code?
1. returning error codes 2. raising software exceptions
39
what are hardware exceptions?
Hardware exceptions are events triggered by the CPU hardware when it encounters an error condition while executing instructions.
40
why are software exceptions preferred over returning error codes?
-make programs more reliable, readable and maintainable -allow errors to be handled without tangling normal control flow with manual error-checking code.
41
What are software exceptions?
-errors recognised by the program itself or its runtime software environment. -allow the program to jump out of the normal sequence of instructions and directly to a specified error-handling routine.
42
name three examples showing that exceptions are synchronous events.
1. A divide-by-zero hardware exception happens at the point when the CPU executes the division instruction. 2. A null pointer dereference error occurs at the moment the invalid memory access is attempted. 3. A user-thrown exception is triggered at the throw point in the running code
43
when is the yield command ineffective?
If there are no other runnable threads at that priority, the same thread will be chosen again after yield(), meaning yield() will have no effect.
44
what will happen if any linked threads are still present when the parent thread terminates? What are the methods to resolve this issue?
the OS sees this as an error and raises an "abort" signal. join() and detach() can be used to resolve this issue.
45
What is the functionality of detach()?
After calling the detach() function: -the child thread's connection to the parent thread is severed and the child thread becomes independent. -This allows the detached thread to continue running even after the creating thread has finished.
46
When would we use detach() specifically?
Detach() is typically used on threads that perform long-running background tasks requiring no involvement from the creator thread.
47
what does join() do?
F.join() blocks the calling thread until thread F completes its execution
48
When would I use detach() rather than join()?
Detach() is used when the main thread does not need to wait for the spawned thread to finish.
49
what is inter-process communication?
The mechanism by which processes exchange information, such as data and control signals.
50
why does the OS use separate processes rather than a single multithreaded process?
provide enhanced security, language flexibility and independent execution.
51
what are the 3 main types of inter-process communication (IPC) mechanisms?
-streaming -message-based -synchronisation
52
What does a streaming IPC mechanism do?
enables continuous, ordered data transfer between processes and generally ensures that data is delivered in order
53
What does a message-based IPC mechanism do?
-transmits discrete packets of data instead of continuous stream. -Usually used with message queues, datagram (UDP) sockets, or mailboxes
54
What does a synchronisation based ILPC (inter-local-process communication) mechanism do?
enables processes to work together smoothly by managing access to shared resources.
55
Streaming IPC mechanisms
-pipes (named/unnamed) -stream sockets (TCP)
56
Streaming IPC data flow characteristics
-continuous, ordered byte stream -persistent connection -often unbounded in length
57
Streaming IPC typical use cases
-command pipelines -real-time data feeds
58
Message-Based IPC mechanisms
-Message Queues -datagram sockets (UDP)
59
message-based IPC data flow characteristics
-discrete messages -with UDP, order may not be guaranteed
60
message-based IPC typical use cases
-Event-driven systems -request/response messaging -distributed queues
61
Synchronisation/control IPC mechanisms
OS provides memory for synchronisation methods or events
62
Synchronisation/control IPC data flow characteristics
-no data, only for ILPC (inter-local-process-communication) synchronisation
63
Synchronisation/control IPC typical use cases
-ILPC synchronization -interrupt-style event handling
64
how does file mapping work?
-similarly to shared memory. -OS employs virtual memory paging mechanism to map a file into the address spaces of multiple processes -allowing them to access it as if it were memory.
65
why can't inter-process communication directly use the global shared-memory implementation of message queues?
because each process has its own memory space and cannot see each other's variables, file mapping is required
66
what are the benefits and costs of shared-memory ILPC method using OS API
-high speed -low-latency communication However -can result in race conditions
67
when is shared memory as a communication method suitable?
suitable for ILPC (processes on the same CPU) but not for inter-remote communications (processes running on different computers)
68
what is an OS "pipe"?
-a unidirectional channel for transferring data between processes.
69
How are pipes implemented?
-in the kernel as a shared memory buffer: -one process writes data into the queue while the other process reads it in the same order (FIFO buffer)
70
What are the properties of anonymous pipes?
-anonymous pipes are temporary and exist only as long as the related processes are running
71
What are the properties of named pipes?
-persistent communication endpoints -can exist independently of the processes that use them, allowing unrelated processes to exchange data.
72
what is the function of pipe()?
creates a unidirectional data channel with two file descriptors fd[0] for reading and fd[1] for writing
73
what is the function of fork()?
-creates a child process. -child process is a complete copy of the parent at the point the fork() operation is executing. -child inherits the file descriptors and everything else at that point in the code
74
what is the functionality of read() and write()?
used to send and receive data through the pipe
75
what are sockets?
software-based interprocess communication mechanism that allows bidirectional data exchange between programs, whether on the same computer or across a network
76
how are sockets used inter-process vs inter-computer?
-in inter-process communications within the same PC, sockets may use shared memory -sockets for inter-computer communication use network links such as ethernet
77
how can sockets bypass IP layer in ILPC?
-using the AF_UNIX/LOCAL domain to reduce overhead. -no INET domain IP stack or protocol header is included. -Instead "packets" are kernel buffers shared between the sender and receiver sockets.
78
what is a connection-oriented communication channel?
-before data is sent, the two programs establish a communication session using a handshake -ensures that the data arrives in the same order it was sent, without loss or duplication
79
what does the term "continuous link" mean?
-indicates that the OS maintains state information about the data exchange -such as packet sequence numbers, retransmission timers, and buffers for data reordering.
80
because of the continuous link, what does TCP automatically do when sending a massive block of data?
-segments the data block into smaller packets (around 1500 bytes each on Ethernet) -gives the packets sequence numbers -transmits and reassembles the packets in order on the receiver side -retransmits if any are lost -each packet belongs to a stream associated with that connection
81
What is the common socket type UDP used for?
-UDP is used to send short messages, called datagrams from one host to another without establishing a connection (no handshaking)
82
What is the common socket type TCP used for?
-TCP offers a reliable, connection-oriented byte stream
83
what is Maximum Transmission Unit (MTU)
The MTU is the largest size, in bytes, of a single packet payload that can be transmitted over a specific network link
84
differences between UDP and TCP?
-Type: TCP is connection-oriented, UDP is connectionless -Reliability: TCP guarantees delivery and order, UDP does not -Overhead: TCP higher (handshake + ACKs), UDP lower -Typical use: TCP for file transfer and HTTP; UDP for real-time streaming
85
Properties of UDP data transfer
-unreliable in that packets can be lost, duplicated or arrive out of order -however, fast because it has lower overhead than TCP
86
What are websockets?
a high-level protocol built on top of TCP with an event-driven implementation. WebSockets allow browsers and servers to maintain persistent, full-duplex communication using standard web ports. WebSockets enable this by providing data framing, message semantics and a standardised handshake.
87
how do WebSockets simplify the coding process compared to just TCP?
TCP sockets transmit and receive raw byte streams, so programmers need to manage buffers and define their own message boundaries. WebSockets simplify this process by introducing frames that encapsulate individual text or binary messages, allowing applications to reliably send structured data
88
How is a WebSocket connection initiated, in relation to HTTP?

The WebSocket connection begins with an HTTP handshake. When the client connects to the server, it sends an HTTP "Upgrade" request with the WebSocket headers. If the server supports WebSockets, it responds with 101 Switching Protocols and upgrades the connection from HTTP to the WebSocket protocol. From that point onwards, the connection shifts from the request-response model to a lightweight message-based system
89
What is the purpose of an OS?
Provides a software library layer on top of the hardware. The OS software manages the computer's resources and is responsible for system security
90
How does an OS handle security?
Ensures that a user's programs cannot compromise the system by monopolising resources or improperly accessing other programs or the kernel
91
With a focus on concurrent systems, what does the OS software layer provide?
1. Enables independent programs and multithreaded programming by sharing CPU time between threads and allows multiple CPU cores to be used in parallel 2. Appropriately controls hardware access between threads (shares hardware resources between threads where appropriate and prevents other hardware resources from being shared) 3. Provides device driver interfaces to encapsulate the hardware details 4. Prevents a poorly written program from crashing the whole system 5. Prevents a malicious program from stealing system information
92
Benefits of using OS over bare metal
The main advantage is the functionality that the OS provides, chiefly time-sliced concurrent multitasking
93
main disadvantage of OS over bare metal
OS software consumes resources, so it may require a more powerful CPU/RAM to run. OS software is large, so there is a greater likelihood of bugs and increased security vulnerabilities
94
What does it mean when a process is said to have one thread of execution?
A fully/totally ordered program, meaning it has no concurrency. Its machine code instructions are executed strictly in sequence.
95
What is the essential information for a single-process operating on a single-core bare-metal system
-the CPU state -the program (machine code) -process's data memory
96
When executing a single-threaded process, how is memory divided?
Memory is divided into four regions, one for the program code and three for data
97
Describe the stack and heap memory layout
4 components, top to bottom: 1. Code segment 2. Static/global data 3. Heap 4. Stack
98
What data is stored where in the stack and heap memory layout?
1. Data labelled as static or global in the program is placed in the data segment of the memory 2. Dynamically allocated memory is stored on the heap 3. Automatic (local) variables are stored on the stack
99
where is the information stored for char *pA = malloc(1000); ?
The pointer variable pA is stored on the stack as part of the function's local variables. The 1000 bytes of memory allocated by malloc() reside on the heap
100
What is the difference in memory management between the stack and the heap?
Stack memory is automatically managed when functions are called and returned, heap memory must be managed manually with malloc() and free()
101
What is the stack?
A structured section of memory that stores data in last-in, first-out (LIFO) order. It holds information about function calls. When a function is called, a new stack frame is created, and when the function returns, that frame is automatically removed, making stack allocation very quick and self-managed.
102
What does the compiler add to the machine code of a function, unknown to the programmer?
-adjusts the stack pointer to make room for local variables -stores the local variables in that space -restores the stack pointer when the function returns
103
What is the heap?
A large memory area used for dynamic allocation, where data can be created and destroyed while the program runs with greater flexibility. Objects and data stored on the heap remain allocated until the programmer explicitly releases them.
104
Heap vs Stack
-stack is faster and self-managing -stack has limited size, heap offers more space -heap demands careful memory management to prevent issues such as leaks and fragmentation
105
How does an OS assign CPU processing time among multiple processes?
If there is only one CPU core, the OS will divide CPU time between the threads of different processes. If multiple CPUs are available, the OS will distribute threads across cores and execute them in parallel.
106
How does a thread know its position in the program if it is interrupted by the OS?
The thread maintains its own program counter and related information such as the call-return stack. The process is responsible for storing information about resources linked to the program that all threads can access; it is the thread's responsibility, however, to remember its current position and the state of the CPU registers.
107
What is the process control block?
When the OS allocates the CPU time among programs, the data related to each process and its thread(s) must be switched in and out of the CPU. The complete information about the process and its threads is the process control block. This information enables the OS to start and stop all the program's execution threads so they can share the CPU
108
What does the process control block comprise?
Contains two types of information: the process context and the thread context. Each thread has its own thread context.
109
What information does the process context store in the process control block?
It holds details about the process that are useful for the OS, such as global variables and files shared between threads.
110
Where must the process control block be stored?
In a secure memory area that the process cannot access to prevent a malicious process from modifying its own context information or that of another process.
111
What information does the thread context store in the process control block?
Stores information about running the thread on the CPU, including the CPU's status and when it was stopped. Each thread has its own thread context.
112
What is the process heap?
Each process has a single heap. Within this, threads can share global variables which are stored in the process's data segment
113
What does a thread stack contain?
A series of function call frames. Each frame contains: 1. Function arguments 2. Local variables 3. Function return address 4. Saved register values
114
What is a thread stack?
Threads contain information that other threads cannot access, such as thread ID and stack memory. Having separate stacks allows each thread to operate independently.
115
What is the thread control block (TCB)
The thread control block contains all the information needed to restore the thread to its previous running state
116
What are three key points about thread stacks?
1. Isolation: each thread has its own stack, ensuring local variables and call data don't interfere with other threads 2. Limited size 3. Automatic cleanup
117
What is the difference between a process and a thread?
-A process is an isolated program instance with its own address space and OS resources, containing one or more threads -A thread is an execution unit within a process, sharing code and heap but owning its own stack and registers
118
What is hyperthreading/ simultaneously multi-threaded (SMT)?
A hardware feature that makes a single physical CPU core appear as two logical (virtual) cores so it can run two threads simultaneously, but these virtual cores share resources and so are not as powerful as separate cores.
119
What does the thread control block consist of?
1. Thread identification, i.e. unique thread ID 2. Processor state/context (i.e. a snapshot of the CPU registers to ensure execution resumes correctly after a context switch) 3. The thread execution state 4. Scheduling information 5. Information about the thread's stack memory 6. Memory and resource pointers for shared and private resources 7. Synchronisation info
120
what is CPU time slicing?
An important feature of an OS that allows many tasks to operate near-simultaneously: they receive the CPU one after another. The OS kernel sets up a timer interrupt to execute its scheduler code and allocates access to the CPU for different threads of code. The OS can prioritise which thread receives precedence, and the scheduler algorithm within the kernel makes this decision
121
What is needed for CPU time-slicing to work?
-Full OS -processes -TCB
122
What is context switching?
The process by which an OS saves the context of the currently running process and loads the context or state of the next process or thread ready for execution.
123
What must the OS do when shifting CPU usage from one thread to another?
must update the system with the new process and thread context. If the new thread belongs to the same process, only the thread context needs to be switched
124
What is faster? Switching between threads of the same process or switching between threads of different processes?
Switching between threads of the same process is much faster. A process context switch requires CPU registers to be restored, the instruction pipeline to be flushed and the cache levels refilled, which takes longer. A context switch between threads of the same process does not require a process context switch.
125
What is the overhead of time slicing?
Time slicing the CPU introduces overhead from context switching, which uses up CPU time without accomplishing any useful work. Some processors include shadow registers to speed up thread context switching.
126
Name three reasons for increased time to switch between threads of different processes
-switching processor state -memory address space switch -cache invalidations, as each process runs within its own memory space
127
What are the three states in the life cycle of a thread?
- Ready: the thread is waiting to be chosen by the scheduler for execution -Running: the thread executes on the CPU -Waiting: The thread is temporarily unable to progress. It may be waiting for an IO transaction to complete or for another thread to perform an action
128
what threads are runnable and non-runnable
runnable: includes all threads that are in the ready or running state non-runnable: all threads that are in the waiting state
129
What happens when a thread has finished its code/ when all threads associated with a process are finished?
Its information can be deleted. When all the threads associated with a process are complete, the process is finished and the entire process context information can be deleted.
130
Name 4 reasons for ending a thread's running state
1. Waiting for input or output from a device or user (running -> waiting) 2. Waiting for interaction with another process or thread (running -> waiting) 3. It is the turn of another process (running -> ready) 4. The thread has completed its code or exited for another reason (running -> terminated)
131
How many CPU cores does a multi-processor system require vs a multi-threading system
multi-processor requires >1, multi-thread can run on one or more CPU cores
132
What is contiguous memory?
A single block of adjacent address space with no gaps
133
What is the memory map used in microcontrollers?
a single address space called a flat memory map. This range of addresses contains RAM for data, program memory for machine code and peripheral memory-mapped I/O
134
Why does flat memory mapping gain protection from execute-in-place?
-microcontrollers typically execute code directly from their permanent storage location in non-volatile flash. -Flash is usually configured as read-only which means: -running code cannot accidentally overwrite program instructions -self-modifying code is prevented by design
135
Why is it essential to provide RAM security in flat memory mapping?
-When running multiple programs using an OS, the OS may load the programs into RAM before execution -the OS will need those regions of RAM to be executable -in a flat memory model, RAM stores all sorts of information -it is therefore essential to prevent unprivileged code from overwriting e.g. kernel data structures or interrupt handlers -similarly, memory-mapped peripherals must also be protected
136
What is the difference between privileged and unprivileged mode?
-Code in privileged mode has unrestricted access to all processor instructions, registers and memory regions. Code in unprivileged mode: -cannot access system registers -cannot configure critical peripherals -must obey the memory access restrictions enforced by the protection hardware
137
What do different AP values mean for privileged and unprivileged access?
-0b000 - P: no access, U: no access -0b001 - P: read/write, U: no access -0b010 - P: read/write, U: read-only
138
What can attributes do?
Attributes can explicitly set read/write/execute permissions within the flat address space that contains code, RAM etc.
139
Why are memory attributes not used for each memory location?
The attributes for each memory location would need to be stored somewhere, likely in RAM. If attributes were stored for each data memory location, this could halve the available memory, which is an unacceptable overhead.
140
How does a CPU enforce privilege boundaries?
A CPU generally defines only a few protection regions rather than having attributes for every individual memory address.
141
Where is the mechanism for verifying whether a memory address has the necessary read, execute etc. permissions implemented?
In the hardware by the memory protection unit (MPU).
142
What does the MPU do?
The MPU acts as a gatekeeper for every load, store and instruction fetch performed by the CPU. The MPU: -allows or denies reads/writes -allows or denies an instruction fetch -restricts privileged vs unprivileged access
143
What is MPU?
A memory protection unit is the hardware logic that allows the microcontroller firmware programmer to enforce memory attributes on regions within the flat memory structure's address space. The Cortex-M MPU provides region-based protection, which uses two numbers, base and limit, to define the region in the physical memory map
144
What is allocated where in memory on a microcontroller?
-the code will be placed in the program memory -global initialised variables (int i=1;) are placed in the .data region -global uninitialised variables (int i;) are placed in the .bss region -the local variables of functions are placed on the stack (static variables live in .data/.bss) -dynamically allocated variables are placed in the heap
145
What does a microcontroller do on reset?
1. The CPU reads the initial SP from the vector table in flash 2. The CPU reads the Reset_Handler address from the vector table; this address points to the startup code 3. Startup code runs: -initialise .data -zero .bss -configure clocks -set up the C runtime 4. Startup calls main() 5. The user application runs
146
How do you allocate dynamic memory in C/C++?
Using the standard library malloc() function
147
What does malloc() do if a suitable free block of size >=n exists after it traverses the free list in the heap
-it may split the block if it is much bigger than size n -it marks the block as allocated by storing metadata just above it in the heap -it returns a pointer to the allocated RAM
148
What does malloc() do if no free block of size >=n exists after it traverses the free list in the heap
-if there is free space between the heap and the stack, it grows the heap upward using an allocator function (e.g. sbrk()) and returns a pointer to RAM
149
What does malloc() do if no free space is available after it traverses the free list in the heap
it returns NULL (the allocation fails)
150
What happens if you use malloc() and an MPU?
if the heap crosses a guard region, a hardware exception occurs
151
What does the sbrk() function do?
Increases the heap size. Whenever this function is called, it returns the allocator's current break pointer value and then increments it towards the stack
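As a minimal sketch of this behaviour (names illustrative; a static array stands in for the real region between heap and stack), an sbrk-style allocator returns its current break pointer and then bumps it:

```cpp
#include <cstddef>

// A static array stands in for the region between heap start and stack.
constexpr std::size_t HEAP_SIZE = 1024;
static unsigned char heap[HEAP_SIZE];
static std::size_t brk_offset = 0;   // the allocator's break pointer

// Grow the heap by `increment` bytes; return the old break, or nullptr
// if growing would run past the reserved region (toward the stack).
void* my_sbrk(std::size_t increment) {
    if (brk_offset + increment > HEAP_SIZE) return nullptr;
    void* old_break = heap + brk_offset;  // return the current pointer value...
    brk_offset += increment;              // ...then increment it towards the stack
    return old_break;
}
```

Successive calls therefore hand out adjacent regions growing upwards, which is how malloc() extends the heap when the free list has no suitable block.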
152
What is memory fragmentation?
Before a program runs, the available memory is a contiguous block of free space ready to be allocated. After a program has performed many dynamic allocations and deallocations, memory will no longer consist of a contiguous region of allocated memory followed by a contiguous region of free memory. Instead there will be many free memory regions scattered between allocated areas. This can result in wasted memory.
153
What is external fragmentation?
-when free memory exists but is distributed across non-contiguous blocks, the heap experiences external fragmentation
154
How does the first-fit allocation algorithm for variable-size partitioning work?
-the allocator scans memory from the beginning and chooses the first free block that is large enough to meet the request. If the free block is larger than needed, the allocator splits it to fit, giving the process the required portion and leaving the remaining part as a smaller free block.
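The scan-and-split step can be sketched as follows (the free-list representation is illustrative, not a real allocator's layout):

```cpp
#include <cstddef>
#include <vector>

// Illustrative free-list entry: offset and size of one free block.
struct FreeBlock { std::size_t offset; std::size_t size; };

// First-fit: scan the free list from the beginning, take the first block
// large enough, and split it if it is bigger than the request.
// Returns the allocated offset, or a SIZE_MAX sentinel on failure.
std::size_t first_fit(std::vector<FreeBlock>& free_list, std::size_t n) {
    for (auto it = free_list.begin(); it != free_list.end(); ++it) {
        if (it->size >= n) {
            std::size_t at = it->offset;
            if (it->size > n) {            // split: remainder stays free
                it->offset += n;
                it->size   -= n;
            } else {
                free_list.erase(it);       // exact fit: block fully consumed
            }
            return at;
        }
    }
    return static_cast<std::size_t>(-1);   // no block is large enough
}
```

Best-fit and worst-fit differ only in the scan: instead of stopping at the first adequate block, they examine every free block and pick the smallest or largest fit respectively.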
155
What is internal fragmentation?
Allocated blocks are larger than necessary. This can happen because of the allocation of fixed-size memory blocks. It can also occur when memory word size does not match the size of the data
156
How does variable-size partitioning work?
physical memory is dynamically allocated based on the required size -memory is one contiguous space -free blocks of varying sizes exist -freeing returns a block and coalesces adjacent free blocks
157
What is the trade-off with variable partitioning algorithms?
There is a trade-off between how effectively the allocation algorithm optimises the memory usage and how long the algorithm takes to run
158
How does the best-fit allocation algorithm for variable-size partitioning work?
The allocator scans all free blocks and selects the smallest block that is large enough to meet the request. This minimises wasted space by choosing the tightest fit but can result in many small holes
159
How does the worst-fit allocation algorithm for variable-size partitioning work?
The allocator looks for the largest available free block. The process is allocated from this block, leaving a large remainder. The goal is to avoid creating many tiny, unusable holes.
160
How does the buddy partitioning allocation algorithm for variable-size partitioning work?
When you request some memory: -the allocator rounds your request up to the nearest power of two -it finds a free block of that size -if none exist, it splits a larger block into two buddies -when blocks are freed, if a block's buddy is also free, the two merge
161
What is the main advantage of buddy allocation?
It's capacity to merge (coalesce) freed regions
162
How does buddy identification work?
Since buddy allocation organises memory using powers of two and binary alignment, buddies differ in exactly one binary bit. We use XOR to calculate the addresses: -XOR a block's address with its size to find its buddy's address -If a block has size S and starting address A, then its buddy has address A XOR S
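The A XOR S rule is a one-liner. Because buddy blocks are aligned to their (power-of-two) size, the two buddies of size S differ only in the address bit whose value equals S, and XOR with the size flips exactly that bit:

```cpp
#include <cstdint>

// Buddy of the block at address A with power-of-two size S.
// XOR flips the single bit in which the two buddies differ.
std::uintptr_t buddy_of(std::uintptr_t A, std::uintptr_t S) {
    return A ^ S;
}
```

For example, a 0x400-byte block at 0x1000 has its buddy at 0x1400 and vice versa: the mapping is its own inverse, which is what makes coalescing so cheap.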
163
advantages of buddy allocation
-very easy coalescing - no need to examine neighbours in a free list -uses segregated free lists - one free list per block size -minimal metadata - size class + free bit is enough
164
disadvantages of buddy allocation
-buddy allocation suffers from internal fragmentation since all allocations are rounded up to powers of 2 -external fragmentation is reduced but not eliminated
165
Why is use of malloc() on microcontrollers often discouraged?
-allocation time varies as it is non-deterministic -can fragment RAM -can fail unpredictably
166
What is a static memory pool?
A pre-allocated block of fixed-size memory reserved at compile time. This ensures quick, predictable allocation without relying on the heap or dynamic memory. Real-time systems use this.
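A minimal fixed-size-block pool might look like the sketch below (a teaching sketch, not a production allocator): N blocks are reserved statically and a free list is threaded through the unused blocks themselves, so alloc/release are O(1) and deterministic — exactly the properties malloc() lacks.

```cpp
#include <cstddef>

// Static pool of N blocks of BLOCK bytes each; the free list is stored
// inside the free blocks, so no extra metadata memory is needed.
template <std::size_t BLOCK, std::size_t N>
class StaticPool {
    static_assert(BLOCK >= sizeof(void*), "block must hold a free-list link");
    alignas(std::max_align_t) unsigned char storage_[BLOCK * N];
    void* free_list_ = nullptr;
public:
    StaticPool() {
        for (std::size_t i = 0; i < N; ++i) {     // thread the free list
            void* blk = storage_ + i * BLOCK;
            *static_cast<void**>(blk) = free_list_;
            free_list_ = blk;
        }
    }
    void* alloc() {
        if (!free_list_) return nullptr;          // pool exhausted
        void* blk = free_list_;
        free_list_ = *static_cast<void**>(blk);   // pop head of free list
        return blk;
    }
    void release(void* blk) {
        *static_cast<void**>(blk) = free_list_;   // push back onto free list
        free_list_ = blk;
    }
};
```

Because every block has the same size, there is no external fragmentation and no searching, at the cost of internal fragmentation when requests are smaller than BLOCK.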
167
What is partitioning?
An early memory management method in which the OS divides physical main memory into fixed- or variable- sized regions, with each process occupying one. Used before virtual memory was introduced, so processes had to fit entirely into physical RAM to execute.
168
What is segmentation?
Segmentation divides a program's memory into variable-sized regions, each with a base and a limit that define its location in physical memory.
169
What is the primary aim of partitioning?
To load as many processes as possible into memory while minimising wasted space.
170
Does partitioning memory provide security?
Partitioning memory does not prevent a process from accessing an area of memory it should not. There needs to be a layer of software (OS) or hardware (MPU) that enforces memory restrictions.
171
Early OS's used fixed-size partitioning, what is this?
Dividing physical memory into fixed-size partitions during startup. Each process fits into exactly one partition, and the partition size remains constant throughout execution. The partition size must be at least as large as the largest expected program
172
negatives of fixed-size partitioning
internal fragmentation
173
How does base/limit virtual memory system work?
Each process uses virtual addresses starting at zero. The base register holds the start of the process's physical memory region. The limit register ensures that a process can access only memory within its assigned segment; it defines the length of the valid address range measured from the base.
174
What does the base register hold in a base/limit virtual memory system?
the starting physical address of the process
175
What does the limit register hold in a base/limit virtual memory system?
Size/length of the allowed memory region
176
What is the physical address comprised of in a base/limit virtual memory system?
Base + virtual address
177
What does the MMU do in base/limit virtual memory system?
When the CPU issues a memory access, the MMU checks that the virtual address is less than the process's limit value; if it exceeds the limit, a memory-violation exception is raised. If the address is valid, the MMU performs dynamic relocation by adding the virtual address to the base register, and the resulting physical address is put on the address bus
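The MMU's check-then-relocate step can be sketched in a few lines (names illustrative; the nullopt return models the memory-violation exception):

```cpp
#include <cstdint>
#include <optional>

// Base/limit dynamic relocation: fault if the virtual address is at or
// beyond the limit, otherwise add it to the base register.
std::optional<std::uint32_t> mmu_translate(std::uint32_t vaddr,
                                           std::uint32_t base,
                                           std::uint32_t limit) {
    if (vaddr >= limit) return std::nullopt;  // exceeds limit: raise fault
    return base + vaddr;                      // physical = base + virtual
}
```

Note the comparison is against the limit (a length), not another address, because the limit register measures the valid range from the base.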
178
What is physical memory?
The addresses sent to the address bus of RAM
179
What is contiguous memory?
Contiguous memory means all allocatable memory is in one block (i.e. all its addresses are sequential with no gaps)
180
Benefits of virtual memory
-enables process isolation -simplifies programming -makes efficient use of limited physical resources
181
What does virtual memory provide?
It provides an abstraction that separates a process's logical address space from the physical memory of the system. This gives each process the illusion of having a large, contiguous address space, regardless of the actual physical memory available
182
Modern OS implement virtual memory by paging, how does paging work?
Paging works by dividing a process's virtual address space into small fixed-size pages while physical memory is divided into page frames of the same size. The process sees a linear address space, but the storage in physical memory is non-contiguous
183
What is a page in paging?
The memory block in virtual memory
184
What is a frame in paging?
The associated memory block in physical memory
185
Is paging contiguous or non-contiguous?
Non-contiguous physical memory allocation. The program's storage is distributed throughout physical memory rather than placed in a single contiguous block.
186
What is a page table?
When a process runs, its pages are loaded into available frames. The OS maintains a page table for each process. This page table maps virtual addresses to physical frames. An MMU uses the page table for address translation, converting virtual address into corresponding physical address.
187
What happens to the page table when an OS kernel performs a context switch?
Since each process has its own page table, the page table must be swapped over whenever there is a context switch.
188
How is a process's virtual address interpreted when translating to physical address?
Interpreted as having two parts: the page number (MSBs) and the offset.
189
What is the page number and offset in a virtual address?
The page number is used to look up the frame number in the page table. The frame number becomes the MSBs of the physical address, with the original offset giving the lower bits.
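Concretely, assuming 4 KiB pages (so the low 12 bits are the offset — the page size is illustrative), the split-and-reassemble step looks like this, with a map standing in for the page table:

```cpp
#include <cstdint>
#include <unordered_map>

// With 4 KiB pages, the low 12 bits of an address are the offset and
// the remaining upper bits are the page number.
constexpr std::uint32_t OFFSET_BITS = 12;

std::uint32_t page_translate(
        std::uint32_t vaddr,
        const std::unordered_map<std::uint32_t, std::uint32_t>& page_table) {
    std::uint32_t page   = vaddr >> OFFSET_BITS;               // MSBs: page number
    std::uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);  // LSBs: offset
    std::uint32_t frame  = page_table.at(page);  // page number -> frame number
    return (frame << OFFSET_BITS) | offset;      // frame MSBs + same offset
}
```

So if page 2 maps to frame 7, virtual address 0x2ABC becomes physical address 0x7ABC: only the upper bits change, the offset passes through untouched.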
190
What happens when the frame number is 0 and there is an invalid flag in a page table?
This indicates to the CPU that the virtual page currently lacks a valid mapping in physical memory, either because it is located elsewhere or because it does not exist.
191
When does a soft page fault occur?
when the page is not currently mapped in the process's page table, but the memory already exists somewhere in RAM.
192
What happens when a soft page fault occurs?
The OS updates the page table entry to point at the frame that already holds the data and marks it valid; no disk access is required
193
When does a hard page fault occur?
When the requested page must be retrieved from secondary storage (disk or SSD)
194
When does a segmentation fault occur?
When the page is illegal. Either because the virtual addresses are not part of the process or because the type of request is not valid for that location
195
What happens when the MMU detects a hard page fault
1. The OS pauses the process that caused the fault 2. It locates the missing page on disk 3.It loads the page into a free frame in RAM 4. The page table is updated with the new mapping 5.The process is resumed once the data is in RAM
196
Why are multi-level page tables used?
To prevent allocating a single large table. Instead of one extensive array of page-table entries, the address space is segmented into multiple tiers of smaller tables. The root page table holds pointers to secondary tables, which may in turn point to additional tables.
197
What different page replacement algorithms can be used by the OS to decide, when a new page is needed, which frame should be moved to secondary storage?
-FIFO policy evicts the page that has been in memory the longest -LRU (least recently used) policy always replaces the page that has not been used for the longest time
198
FIFO paged memory steps in detail
-maintain a queue of the pages in the order they were loaded -if a requested page is already in memory, do nothing -on a page fault: if a free frame exists, load the new page into it and enqueue its number; else if memory is full, dequeue the oldest page number, store that frame in secondary storage, then load the required page into its frame and enqueue the new page number at the back
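The steps above can be sketched as a fault counter (a simulation, not kernel code — eviction to secondary storage is modelled as simply dropping the page):

```cpp
#include <cstddef>
#include <deque>
#include <set>
#include <vector>

// Count page faults for a reference string under FIFO replacement
// with `frames` physical frames.
int fifo_faults(const std::vector<int>& refs, std::size_t frames) {
    std::deque<int> fifo;     // pages in load order (front = oldest)
    std::set<int> resident;   // pages currently in memory
    int faults = 0;
    for (int p : refs) {
        if (resident.count(p)) continue;   // already in memory: no fault
        ++faults;
        if (fifo.size() == frames) {       // memory full: evict oldest page
            resident.erase(fifo.front());
            fifo.pop_front();
        }
        fifo.push_back(p);                 // enqueue the new page
        resident.insert(p);
    }
    return faults;
}
```

With the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5, FIFO gives 9 faults with 3 frames but 10 with 4 — adding memory can make FIFO worse (Belady's anomaly).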
199
LRU method in detailed steps
1. Whenever a page is referenced, it is marked as the most recently used page, and its page number is moved to the front of the history list 2. When a page fault occurs and memory is full, the LRU algorithm selects the page that was accessed the longest time ago for replacement. The least-recently-used page is the one at the back of the history list; that page is replaced with the new page 3. The new page number is added to the front of the history list
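The same history-list scheme as a simulation (the list front is the most recently used page; eviction pops the back):

```cpp
#include <algorithm>
#include <cstddef>
#include <list>
#include <vector>

// Count page faults under LRU replacement with `frames` physical frames.
int lru_faults(const std::vector<int>& refs, std::size_t frames) {
    std::list<int> history;   // front = most recently used
    int faults = 0;
    for (int p : refs) {
        auto it = std::find(history.begin(), history.end(), p);
        if (it != history.end()) {
            history.erase(it);            // referenced: will move to front
        } else {
            ++faults;
            if (history.size() == frames)
                history.pop_back();       // evict least recently used page
        }
        history.push_front(p);            // mark p most recently used
    }
    return faults;
}
```

On the reference string 1,2,3,4,1,2,5,1,2,3,4,5 this gives 10 faults with 3 frames and 8 with 4 — unlike FIFO, more memory never hurts LRU.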
200
Comparison of LRU and FIFO
LRU causes fewer page faults than FIFO because it matches a program's tendency to reuse recently accessed pages. However, it requires continuous updating of the order of page accesses.
201
What is the translation lookaside buffer (TLB)?
The MMU has a small cache called the TLB. The TLB stores a cache of recently used virtual-to-physical address mappings. When the TLB contains the required mapping, the translation occurs in a single instruction cycle. If the TLB does not have the virtual memory address, it triggers a "TLB miss", meaning the address must be retrieved from the page table in RAM, which is slower.
202
What does each thread in a process have?
-its own stack -its own register state -its own current instruction location
203
What do modern OS's schedule?
threads, not processes.
204
What are the 3 different algorithms that the kernel's scheduling manager could use to schedule threads
1. Fairness - all processes are treated equally 2. Priority - the programmer can specify which threads are important 3. Real-time - the process must be performed at a specific time or for a particular duration
205
What is a priority-based scheduler?
Threads are assigned priority levels that influence how the scheduler selects the next thread to run
206
How does a priority-based scheduler work?
-a priority-based scheduler always chooses a thread with the highest priority among those ready to run -threads with the same priority are scheduled fairly, typically using round-robin time-slicing -the highest-priority thread runs until it uses up its time slice, waits for I/O or a synchronisation event, or finishes -if a higher-priority thread becomes ready, it pre-empts the currently running thread
207
What are the two policies used by the scheduler for threads at the same priority level?
-First-come, first-served policy -round robin
208
How does the scheduler first-come, first-served policy work?
Under the FCFS policy, a thread runs without time slicing, it continues running until it finishes, waits for I/O or is pre-empted by a higher-priority ready thread
209
How does the scheduler round robin policy work?
Under the round robin policy, threads with the same priority receive equal CPU time slices in turn. When a thread's time slice expires, it is placed at the end of the ready queue for that priority level
210
What is the difference between a pre-emptive and non-pre-emptive schedulers?
-A pre-emptive scheduler will interrupt a running process if a higher-priority process becomes runnable -A non-pre-emptive scheduler will not interrupt a running process when a higher-priority process becomes runnable; it continues running until the process finishes its CPU burst or blocks (waiting for I/O)
211
What is a completely fair scheduler?
-aims to divide CPU time fairly among runnable threads instead of using fixed time slices -assigns variable time slices based on the number of threads and their priorities -threads with equal priorities receive roughly equal shares of CPU time over the long term -fair schedulers are pre-emptive; they do not use round-robin -they select the next thread based on fairness metrics
212
What is a real-time scheduler?
-a standard priority scheduler tries to share the CPU fairly and responsively but offers no timing guarantees -a real-time scheduler is priority-based and pre-emptive -it uses either round robin or FCFS
213
What is a deadline scheduler?
-used for tasks that must be finished before a given time -the earliest deadline first (EDF) scheduler runs the task in the ready queue closest to its deadline -however, a lengthy task that is not completed by its deadline will keep running and could cause all subsequent processes in the list to miss their deadlines
214
What is the difference between real-time scheduling and general-purpose scheduling?
-real-time scheduling ensures that tasks meet timing constraints, because the correctness of the system depends on when tasks run, not just on what order they run in -general-purpose scheduling focuses on fairness, responsiveness or CPU utilisation rather than on guaranteeing that tasks complete within specific time limits
215
What can cause thread starvation in pre-emptive scheduling?
-if a higher-priority thread pre-empts a lower-priority thread and executes without yielding execution, the lower-priority thread will suffer resource starvation
216
What are the four metrics that help assess a concurrent system's scheduling algorithm performance?
1. Waiting time 2. Turnaround time 3. Response time 4. Throughput
217
What is the response time scheduling performance metric?
time from submission to the moment the process first starts running or produces its first output
218
What is the waiting time scheduling performance metric?
Time a process spends in the ready queue before being scheduled on the CPU
219
What is the turnaround time scheduling performance metric?
total time from process submission to completion
220
What is the throughput scheduling performance metric?
number of processes completed per unit time
221
IMC architectural coupling for peer-to-peer
-symmetric roles -direct links -bidirectional -peers discover each other dynamically
222
What does CPU utilisation measure?
-The percentage of time the CPU is actively executing tasks -indicator of how effectively the system is using the processor
223
What do different CPU utilisation levels indicate?
-moderate utilisation typically suggests that the system is operating efficiently and is not being stalled by slow I/O -high utilisation may indicate that the system is overloaded with compute-intensive threads, leading to interactive programs being unresponsive
224
What are the CPU-related metrics?
-CPU utilisation -CPU load average (measures how many tasks are waiting for the CPU) -context switch rate (scheduling overhead) -CPU saturation (the number of threads queued per core)
225
Inter-machine communication (IMC) architectural coupling for client-server
-asymmetric roles -request -> response -direct addressing -often synchronous -client must know the server's address
226
What are user-level threads (ULTs)
-an alternative form of multitasking implemented entirely in user space without involvement from the OS -unlike OS threads or processes, ULTs are created, scheduled and managed by a library rather than the kernel -ULTs have their own stacks and register contexts, in contrast to coroutines, which are similar to ULTs but use a coroutine frame instead -most ULTs rely on cooperative scheduling
227
Three metrics for categorising methods of inter-machine communication
-communication architectural patterns (the number of intended recipients of the data, and the relationship between the devices) -temporal behaviour (synchronous vs asynchronous and continuous vs intermittent) -communication data type (stream-oriented or message-oriented)
228
What is inter-machine communication?
Communication between processes running on different computers
229
IMC architecural coupling for producer-consumer
-buffer/queue decoupling -unidirectional flow -often asynchronous
230
Typical examples of stream-oriented communication
-pipes -TCP sockets
231
What is unicast?
Sending UDP to a specific destination -one sender -one receiver -e.g. peer-to-peer
232
what is broadcast?
UDP sending -one sender -all hosts on the local broadcast domain receive e.g. DHCP
233
IMC architectural coupling for publish-subscribe
-topic-based, with a broker or brokerless -many-to-many -asynchronous
234
When is an inter-process communication described as asynchronous?
-the sender continues execution after sending a message and the receiver processes it later -asynchronous models enable greater concurrency and scalability and are commonly used in publish-subscribe -requires buffer for temporary storage
235
What is intermittent data transfer?
-communication occurs when data is transmitted sporadically/at irregular intervals -a one-off communication still qualifies as intermittent
236
what is multicast?
UDP sending -one sender -group of interested receivers -e.g. DDS
237
what is continuous communication?
-situations where data is streamed over the communication channel -continuous data transfer
238
When is inter-process communication described as synchronous?
-the sender blocks until the receiver acknowledges receipt or finishes processing the message -offers predictable ordering and is often used in tightly coupled P2P or client-server systems where timing is crucial
239
What is message-oriented communication?
-transmits discrete, self-contained messages instead of continuous byte stream -each message is sent as a complete unit with clear boundaries, metadata or routing details
240
What communication patterns do message passing channels typically use?
-Publish-subscribe -Request-response
241
What is stream-oriented communication?
-delivers a continuous flow of bytes between processes -information is read and written sequentially through a persistent channel -can be either synchronous or asynchronous
242
Typical examples of message-oriented communication
-message queues -UDP sockets (UDP preserves message boundaries even though messages can be dropped or arrive out of order) -shared memory buffers
243
Characteristics of message-oriented communication systems
1. The sender and receiver are decoupled, which improves system efficiency and responsiveness 2. They may provide message reliability to prevent loss in the event of failures 3. The message includes metadata information 4. Messages have a defined length and an agreed-upon data format
244
3 main functions of a broker
1. Publisher discovery: identify publishers and the topics they publish 2. Subscriber discovery: maintain a list of subscribers and the topics they subscribe to 3. Distributed routing: match published messages to subscribers and route each message accordingly
245
What is P2P-based publish-subscribe?
-avoids the bottleneck of a central broker -distributed discovery (nodes announce themselves and their endpoints (publishers/subscribers) using discovery protocols) -every node maintains its own local list of subscriptions and publications -matching and direct communication
246
What is publish-subscribe
-a publisher sends messages to multiple subscribers -subscribers only receive notifications if they are subscribed to the relevant message topic -communications seem intermittent, however the model maintains a persistent channel -used in flexible systems where nodes can change dynamically
247
Disadvantage of broker?
-the star network topology: all message traffic flows through the broker, which can become a computational or network bottleneck, or a single point of failure
248
What are message queues?
-message queues are used between processes running on the same OS -facilitate asynchronous communication by storing messages in a buffer queue (FIFO) -provide buffered channels with flexible, often prioritised message handling
249
What is broker-based publish-subscribe?
-a message broker receives messages from programs that publish data -the broker then routes these messages to subscribers that have indicated they want to receive them -the broker acts as an intermediary rather than permitting direct P2P communication
250
What is real-time publish-subscribe (RTPS)?
-RTPS uses UDP as its transport layer -low-level network protocol used by DDS implementation for processes to communicate with each other
251
What is a wire protocol?
The set of rules that defines how the data in a UDP payload is formatted
252
What does DDS use RTPS for?
-Discovery -data transport -Quality of service enforcement -Reliable vs best-effort delivery -multicast/unicast communication
253
What is the data distribution service (DDS)
-standard from the object management group (OMG) -P2P publish-subscribe model
254
What three broker-like capabilities does DDS implement?
1. Participant discovery - employs SPDP (simple participant discovery protocol) in RTPS to enable all domain participants to discover each other 2.Endpoint discovery 3.Matching publishers with subscribers
255
What is the equation for the average dynamic power dissipation (Pav) in a continuously active switching logic gate?
Pav = fclk x C x Vdd^2, where fclk is the clock frequency, C is the switching capacitance and Vdd is the supply voltage
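The equation is worth internalising numerically: power is linear in frequency and capacitance but quadratic in supply voltage, so lowering Vdd is the most effective lever. A trivial sketch:

```cpp
// Pav = fclk * C * Vdd^2: linear in clock frequency and switching
// capacitance, quadratic in supply voltage.
double dynamic_power(double fclk, double C, double Vdd) {
    return fclk * C * Vdd * Vdd;
}
```

Halving the clock halves the power, but halving the supply voltage quarters it (ignoring the lower maximum clock speed that a reduced Vdd typically forces).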
256
Why can we not continue to increase clock frequency in order to speed up processing?
Pav = fclk x C x Vdd^2 -continuing to increase clock speed increases power dissipation -too much power produced per mm^2 of IC, at some point we will not be able to dissipate the heat from these tiny areas quickly enough
257
How can we reduce the dynamic power consumption of the CPU?
-reducing CPU voltage -reducing the CPU clock frequency and using lower clock speeds in parts of the chip -reducing the gate capacitance -only power the parts of the processor that you need
258
Hardware methods to increase computational speed and efficiency
-increasing clock speed (limited by heat dissipation) -have multiple CPUs work together on same task
259
What are totally ordered tasks?
For a fixed input data, the program instructions always occur in the same order
260
What are unordered and partially ordered tasks?
-unordered tasks can be completed in any order -in partially ordered tasks, some tasks can be completed in any order, while others rely on other tasks being completed before they can start
261
What is the equation for the execution speedup using Amdahl's law?
S(N) = 1/((1-P) + P/N), where P is the fraction of the code that is parallelisable, (1-P) is the part of the algorithm that can't be parallelised, and N is the number of processors.
262
What is the equation for speed up in throughput using Gustafson's Law?
S(N) = (1-P) + NP, where P is the fraction of the code that can be parallelised and N is the number of processors (equal to the number of blocks of data)
263
Comparison between Amdahl's law and Gustafson's law
Amdahl's law: adding more processors has diminishing returns; eventually adding more processors will not increase throughput. Gustafson's law argues that, if the problem size grows with the number of processors, throughput increases linearly without limit.
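Both laws are one-line formulas, so the contrast is easy to check numerically:

```cpp
#include <cmath>

// Amdahl: speedup for a fixed-size problem with parallel fraction P.
// As N grows, this is capped at 1/(1-P).
double amdahl_speedup(double P, double N) {
    return 1.0 / ((1.0 - P) + P / N);
}

// Gustafson: speedup when the problem size scales with processor count.
// Grows linearly in N with slope P.
double gustafson_speedup(double P, double N) {
    return (1.0 - P) + N * P;
}
```

With P = 0.9, Amdahl's speedup never exceeds 1/(1-P) = 10 no matter how many processors are added, while Gustafson's figure keeps growing — the two laws answer different questions (fixed workload vs workload that scales with the machine).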
264
When are concurrent systems needed?
-multiple tasks must be performed simultaneously -tasks may require low latency -tasks require high data throughput
265
What does a kernel do?
A kernel provides task scheduling and software objects that facilitate the writing of concurrent programs
266
What does bare metal mean?
Running without an OS
267
What is the difference between a concurrent program and a concurrent system?
-A concurrent program can execute using cooperative multitasking. -A concurrent system executes processes in parallel or time-sliced
268
What are the characteristics of a cooperative task?
Cooperative tasks run until they explicitly yield or block, meaning that the programmer controls when the CPU is released
269
What are the characterisitics of periodic tasks?
Periodic tasks are scheduled to be executed at fixed, regular intervals, often driven by a timer or a program loop
270
What are the characteristics of event-driven tasks?
Event-driven tasks remain dormant until a specific trigger occurs
271
Why do programs rely on the OS?
to access hardware resources like the file system or screen
272
What is a real time system?
One where the correctness of the outcome depends on when its tasks are executed.
273
What is hard real-time?
Any missed deadline is a failure
274
What is soft real-time
occasional misses reduce quality but are not catastrophic
275
what might real-time systems have that can pause less important tasks to ensure critical tasks get the time they need?
A monitoring thread
276
What is a cyclic executive?
a simple form of real-time scheduler in which tasks are executed in a fixed, repeating sequence, at predetermined times, without an OS
277
What does a cyclic executive implementation have?
-timing-based execution -deterministic timing (all tasks are scheduled at known intervals) -static schedule (each task's start time and duration are predetermined) -more complicated to implement
278
What is a key advantage of the cyclic executive approach?
it is simple to demonstrate that strict timing requirements for input and output are met
279
drawbacks of cyclic executive
-no prioritisation -periodic tasks only, event-triggered tasks could break the timing schedule -CPU still not being used to full potential
280
What is a real-time operating system?
A software library compiled into your program, giving it some features of a full OS while remaining small enough to run on microcontrollers
281
What does a RTOS do?
Ensures that events are processed within a designated time frame -has a task scheduler
282
What can FreeRTOS task scheduler do?
-provide a scheduler that can run tasks concurrently -support priority-based scheduling -handle both periodic tasks and event-driven tasks
283
Drawback of RTOS compared to bare-metal cyclic executive implementation
-larger footprint -greater complexity
284
What does cooperative multitasking involve?
-a list of tasks where each task decides when to relinquish the CPU to another task -the OS does not initiate a context switch from a running process to another -simplifies OS code
285
disadvantage of cooperative multitasking
If a task must wait a long time for a hardware resource to become available, other functions will be delayed, causing the system to slow down
286
Advantages of cooperative multitasking?
-cooperative multitasking OS code can be straightforward and operate on a memory-limited processor -avoids performance costs associated with the overhead of running complex OS -more programmer control over when to release the CPU
287
What does a regular subroutine do?
runs from start to finish before returning control
288
what is the difference between a subroutine and a coroutine?
-regular subroutine runs from start to finish before returning control -coroutine extends the capability of a subroutine by allowing execution to be paused and resumed at points within it
289
what can a coroutine do?
can yield control back to its caller, saving its state and later continue from where it left off
290
what is the scheduler's role in cooperative multitasking?
to decide which tasks to resume next when a task suspends itself
291
What is a generator?
A generator is a coroutine that produces a sequence of values, yielding each one on suspension -a generator calculates values one at a time instead of computing and returning them all at once
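The one-value-at-a-time behaviour can be mimicked with a hand-rolled generator object (a sketch; C++20 coroutines with co_yield would be the idiomatic version):

```cpp
// Each call to next() computes the next square on demand, instead of
// materialising the whole sequence in memory up front.
struct SquareGen {
    int i = 0;
    int next() { ++i; return i * i; }  // "yield" the next value
};
```

The caller pulls values as needed, so an arbitrarily long sequence costs only the generator's tiny saved state — the memory advantage the next card describes.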
292
Where are generators useful?
-large datasets -streams -pipelines as they can save memory
293
advantages of coroutines
-good for writing asynchronous tasks -lightweight -non-blocking multitasking -scalable -simplified state management
294
Disadvantages of coroutines
-no parallel processing -frame overhead -integration difficulty
295
What is event-driven programming?
-revolves the entire system around events and removes the need for a main program -software waits for events then executes the suitable event handlers
296
how are callbacks related to events in event-driven programming?
Callbacks are linked to specific events; there can be multiple callbacks for an event, or none