This article is part of our ongoing UNIX kernel overview series.
In the previous article of this series, we discussed the UNIX process overview.
This article gives a high-level explanation of reentrant kernels, synchronization, and critical sections in the UNIX kernel architecture.
As the name suggests, a reentrant kernel is one that allows multiple processes to be executing in kernel mode at any given point of time, without causing consistency problems among the kernel data structures.
We know that on a single-processor system only one process can execute at any given instant, but there can be other processes blocked in kernel mode waiting to run.
For example, in a reentrant kernel a process waiting on a ‘read()’ call may release the CPU to a process that is waiting to execute in kernel mode.
Now, one might ask why a kernel is made reentrant. Let's start with an example of a kernel that is not reentrant, and see what happens if it allows multiple processes to execute in kernel mode.
Let's assume that a process is executing in kernel mode, accessing a kernel data structure and some global values associated with it.
- Suppose the process name is ‘A’.
- Now ‘A’ reads a global variable to see if its value is non-zero (so that it can do some calculation with it), and just before it uses this value in its logic, a context switch to process ‘B’ happens.
- Now process ‘B’ accesses the same global variable and decrements it.
- Another context switch happens and process ‘A’ comes back into execution.
- Since ‘A’ does not know that ‘B’ has already decremented the value, it reads and uses the value again.
- So here is the catch: process ‘A’ sees two different values of the global variable, because the value was changed by process ‘B’ in between.
So now we know why making a kernel reentrant requires care. The next question is: how do we make a kernel reentrant?
At a basic level, the following approaches can be used to make a kernel reentrant:
- Write kernel functions that modify only local (stack) variables and do not alter global variables or data structures. Such functions are known as reentrant functions.
- Strictly limiting the kernel to reentrant functions is not feasible. So another technique is used: locking mechanisms that ensure only one process can execute a non-reentrant function at a given time.
From the above points it is clear that the use of reentrant functions, plus locking mechanisms around non-reentrant functions, is the core of making a kernel reentrant. While implementing reentrant functions is mostly a matter of good programming, locking mechanisms relate to the concept of synchronization.
Synchronization and Critical Sections
As discussed in the example above, a reentrant kernel requires synchronized access to kernel global variables and data structures.
The chunk of code that operates on these global variables and data structures is known as a critical section.
If a kernel control path is suspended (while using a global value or data structure) due to a context switch, then no other control path should be able to access the same global value or data structure; otherwise the effects could be disastrous.
Why do we need synchronization in the first place? To safely use kernel global variables and data structures. This can also be achieved through atomic operations. An atomic operation is one that always executes without any other process being able to read or change the state that it reads or changes. Unfortunately, atomic operations cannot be applied everywhere; for example, removing an element from a linked list inside the kernel cannot be done as a single atomic operation.
Now let's focus on how to synchronize kernel control paths.
Kernel Preemption Disabling
Kernel preemption is a concept wherein the kernel forcefully suspends/interrupts one task and brings into execution another, higher-priority task that has been waiting for kernel resources.
In simpler terms, it is context switching of processes in kernel mode, where the running process is suspended forcefully by the kernel and another process is brought into execution.
Going by this definition, we realize that it is this very capability of the kernel (to preempt processes while they are in kernel mode) that causes synchronization issues. One solution to the problem is to disable kernel preemption. This ensures that a context switch in kernel mode happens only when the currently executing process voluntarily releases the CPU, after making sure that all kernel data structures and global variables are in a consistent state.
Clearly, disabling kernel preemption is not a very elegant solution, and it falls flat on multiprocessor systems, where two CPUs can concurrently access the same critical section.
Interrupt Disabling
Another mechanism for achieving synchronization inside the kernel is for a process to disable all hardware interrupts before entering a critical region and re-enable them after leaving it. This solution is again not elegant: if the critical region is large, interrupts stay disabled for a very long time, defeating their very purpose and possibly causing hardware activities to freeze.
Semaphores
Semaphores are the most popular method for providing synchronization inside the kernel.
They are effective on both uniprocessor and multiprocessor systems. Conceptually, a semaphore can be thought of as a counter that is associated with each data structure and checked by all kernel threads when they try to access that particular data structure.
A semaphore holds the counter value, a list of processes waiting to acquire the semaphore (i.e., to access the data structure), and two methods to increment and decrement the counter.
The working logic is as follows:
- Suppose a process wants to access a particular data structure; it first checks the counter of the semaphore associated with that data structure.
- If the counter is positive, the process acquires the semaphore, decrements the counter, executes the critical region, and then increments the counter.
- If the process finds the counter to be zero, it is added to the semaphore's list of processes waiting to acquire it.
- Whenever the counter becomes positive, all the processes waiting for the semaphore try to acquire it.
- The one that succeeds decrements the counter again, executes the critical region, and then increments the counter, while the other processes go back to waiting.
Working with a synchronization scheme like semaphores comes with the side effect of deadlocks.
Lets take an example:
- Suppose process A acquires the semaphore for one data structure, while process B acquires the semaphore for another data structure.
- In the next step, both processes want to acquire the semaphore held by the other, i.e., process A wants to acquire the semaphore already acquired by process B, and vice-versa.
- This type of situation, where each process is waiting for the other to release a resource, is known as a deadlock.
- Deadlocks may cause a complete freeze of kernel control paths.
Such deadlocks are more frequent in designs that use a huge number of kernel locks, where it becomes extremely difficult to prove that a deadlock condition can never occur. In operating systems like Linux, deadlocks are avoided by always acquiring locks in a predefined order.