What is the lesson from this question?

We must set the interest field to false before we allow other threads to access the CS.

Another mutex!! We wrap the mutex used to get into the CS in an additional mutex. That is, all of the “readers” now wait on the additional mutex, and not on the main mutex.

One-way tunnel + barrier of 5 threads.
That is, the barrier enforces the requirement that exactly 5 vehicles pass through the tunnel, and 5 more may enter only after all of them have left.
This is done by adding a new semaphore (busy) which is responsible for making the first 4 vehicles wait for the fifth to come. Once the fifth comes, it releases them.
NOTE! DOWN(busy) comes after UP(mutex). That is super important, because the acquisition of busy is placed after the release of mutex.
In other words, if I sleep on busy while holding mutex, I create a deadlock =[

How can we implement FIFO wake-up order?
The attached algorithm, with the addition of wrapping a mutex around flag = 0; sleep;

There are 2^10 page directory entries, each of which refers to a page table with 2^10 page table entries.
Each page table entry refers to the base address of a physical page frame, which can hold 2^12 bytes.
In total, 2^10 * 2^10 * 2^12 = 2^32 = 4*2^30 = 4GB of bytes.
Each PDE refers to 2^10 * 2^12 = 4*2^20 = 4MB.
Key point: when we think of the number of pages that need to be allocated in memory for the paging structures of some program, we should first ask how many page tables we need; according to that, we then add the number of page-directory pages required to reach them. Combined, that is the answer. For a 12MB process, we need 3 page tables; to reach these we also need 1 page directory. In total, we need 4 pages.
Locate the given address: 0x00403004 (= 4,206,596).
p1[1] = [4,194,304, 8,388,607] (the address is in this range)
p2[3] = [12,288, 16,383] (the address, minus p1[1]'s base, is in this range)
12,292 − 12,288 = 4.
p1 = 1, p2 = 3, offset = 4
1. Answer: 3 memory accesses
Assume an address space of 32 bits, 2KB pages and a two-level page table, in which 8 bits are dedicated to the first-level table.
What is the size of the second-level table?
Virtual address: 2^32. Offset: 2KB pages → 11 bits. Second-level index: 32 − 8 − 11 = 13 bits.
2^13 entries * 4B = 2^15 B = 32KB
That is, 2^20 * 4 = 2^22 = 4MB.
That is, 2^52 * 8 = 2^55 bytes = 32 petabytes.
How many entries are there in one page table?
2^12 / 2^3 = 2^9 (a 4KB page table holds 2^9 PTEs of 8B each).
The Ofer2000 Operating Systems, based on UNIX, provides the following system call:
rename(char *old, char *new)
This call changes a file’s name from ‘old’ to ‘new’. What is the difference between using this call, and just copying ‘old’ to a new file, ‘new’, followed by deleting ‘old’? Answer in terms of disk access and allocation.
In terms of allocation: rename only rewrites the directory entry; no new data blocks are allocated. Copying and deleting allocates a whole second set of blocks before the old ones are freed.
In terms of disk accesses: rename touches only the directory (and inode metadata); copying and deleting must read and write every data block of the file.
What would be the maximal size of a file in a UNIX system with an address size of 32 bits if:
block size is 4K:


Very important points regarding threads:

What are the steps of finding /usr/ast/mbox?
Does it matter how many references a page had with FIFO?
With FIFO it does not matter how much a page has been referenced. All that matters is when it was first loaded into memory.
Why is optimal page replacement not realistic? Because it requires knowing the future reference string in advance.
When we’re told the page frame is of size 16KB, we can be sure the “offset” part consists of 14 bits (2^14 = 16KB).
Now, if a page table also occupies 16KB and each PTE is of size 4B, then the second-level table has 2^14/2^2 = 2^12 entries. That is, its index consists of 12 bits.
In total: second-level index 12 bits, offset 14 bits.
Because the address is 38 bits, the remaining 38 − 12 − 14 = 12 bits form the first-level index.
Decreases, because it now has fewer entries.
Yes, because the memory is more “dense”: instead of being spread over x pages, it is now spread over y pages, where y < x.
Yes, because copying from disk to memory takes longer.
No, it will most probably increase, because the minimum allocation unit is now bigger, so requests for smaller chunks will occupy much more space than they need.
Given a path, the function looks for the relevant inode and returns its number. If not found, it returns 0.
dirlookup scans the directory inode on disk in steps of sizeof(struct dirent), looking for the entry whose name matches name; if found, it returns the corresponding inode.
What is the difference between link and ref in the inode struct? ref counts in-memory references (C pointers) to the inode, while link (nlink) counts on-disk directory entries that refer to it.
Find the inode with number inum on device dev
and return the in-memory copy. Does not lock
the inode and does not read it from disk.
What happens if, during a context switch between two threads of different processes, the TLB is not flushed? The new thread would use stale translations belonging to the old process's address space, so its memory accesses could hit the wrong physical frames.