Priority inversion
In computer science, priority inversion is a problematic scenario in scheduling when a higher priority task is indirectly preempted by a lower priority task, effectively "inverting" the relative priorities of the two tasks.

This violates the priority model, in which high priority tasks can be prevented from running only by higher priority tasks, and briefly by low priority tasks that quickly complete their use of a resource shared by the high and low priority tasks.

Example of a priority inversion

Consider a task L with low priority. This task requires resource R. Suppose that L is running and acquires resource R.
Now consider a second task H with high priority. This task also requires resource R. If H starts after L has acquired resource R, then H has to wait until L relinquishes resource R.

Everything works as expected up to this point, but problems arise when a new task M (which does not use R) starts with medium priority during this time.
Since R is still in use (by L), H cannot run.
Since M is the highest priority unblocked task, it will be scheduled before L.
Since L has been preempted by M, L cannot relinquish R.
So M will run until it is finished, then L will run, at least up to the point where it can relinquish R, and only then will H run.
Thus, in the above scenario, a task with medium priority ran before a task with high priority, effectively giving us a priority inversion.
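
The scenario above can be reproduced as a small experiment. The following is a minimal sketch, not taken from any particular system: it assumes Linux with POSIX threads, permission to use SCHED_FIFO real-time priorities (typically root or CAP_SYS_NICE), and pins everything to one CPU so that priorities alone decide who runs. The priority values 10, 20, 30 and 40 and all the task names are arbitrary, error handling is mostly omitted for brevity, and the mutex deliberately uses no priority protocol, so the inversion is visible: "H: finally got R" is printed only after M has finished.

    /* Minimal sketch of the L/M/H scenario (assumed Linux + POSIX threads). */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Resource R: a plain mutex with no priority-inheritance/ceiling protocol. */
    static pthread_mutex_t resource_R = PTHREAD_MUTEX_INITIALIZER;

    static void *low_task(void *arg)      /* L: acquires R, then works for a while */
    {
        (void)arg;
        pthread_mutex_lock(&resource_R);
        puts("L: acquired R, working...");
        for (volatile long i = 0; i < 200000000L; i++) ;  /* busy work holding R */
        pthread_mutex_unlock(&resource_R);
        puts("L: released R");
        return NULL;
    }

    static void *medium_task(void *arg)   /* M: CPU-bound, does not use R */
    {
        (void)arg;
        puts("M: running, preempting L...");
        for (volatile long i = 0; i < 400000000L; i++) ;
        puts("M: done");
        return NULL;
    }

    static void *high_task(void *arg)     /* H: needs R, blocks behind L */
    {
        (void)arg;
        puts("H: waiting for R");
        pthread_mutex_lock(&resource_R);
        puts("H: finally got R");          /* printed only after M and L finish */
        pthread_mutex_unlock(&resource_R);
        return NULL;
    }

    static pthread_t spawn(void *(*fn)(void *), int prio)
    {
        pthread_t t;
        pthread_attr_t a;
        struct sched_param sp = { .sched_priority = prio };
        pthread_attr_init(&a);
        pthread_attr_setinheritsched(&a, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&a, SCHED_FIFO);
        pthread_attr_setschedparam(&a, &sp);
        if (pthread_create(&t, &a, fn, NULL) != 0) {
            perror("pthread_create (real-time privileges needed?)");
            exit(1);
        }
        return t;
    }

    int main(void)
    {
        /* Pin everything to CPU 0 so a single ready M really does starve L. */
        cpu_set_t cpus;
        CPU_ZERO(&cpus);
        CPU_SET(0, &cpus);
        sched_setaffinity(0, sizeof(cpus), &cpus);

        /* Run main at the highest priority so it can start the tasks in order. */
        struct sched_param main_sp = { .sched_priority = 40 };
        pthread_setschedparam(pthread_self(), SCHED_FIFO, &main_sp);

        pthread_t l = spawn(low_task, 10);     /* L: low priority        */
        usleep(100000);                        /* let L acquire R        */
        pthread_t h = spawn(high_task, 30);    /* H: high, blocks on R   */
        usleep(10000);
        pthread_t m = spawn(medium_task, 20);  /* M: medium, starves L   */

        pthread_join(m, NULL);
        pthread_join(l, NULL);
        pthread_join(h, NULL);
        return 0;
    }

Built with cc -pthread and run with sufficient privileges, the output shows the medium priority loop completing before the high priority task ever obtains R.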

In some cases, priority inversion can occur without causing immediate harm: the delayed execution of the high priority task goes unnoticed, and eventually the low priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high priority task is left starved of the resources, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars Pathfinder lander is a classic example of problems caused by priority inversion in real-time systems.

Priority inversion can also reduce the perceived performance of the system. Low priority tasks usually have a low priority because it is not important for them to finish promptly (for example, they might be batch jobs or other non-interactive activities). Similarly, a high priority task has a high priority because it is more likely to be subject to strict time constraints: it may be providing data to an interactive user, or acting subject to real-time response guarantees. Because priority inversion results in the execution of the low priority task blocking the high priority task, it can lead to reduced system responsiveness, or even the violation of response time guarantees.

A similar problem called deadline interchange can occur within earliest deadline first scheduling (EDF).

Solutions

The existence of this problem has been known since the 1970s, but there is no foolproof method to predict the situation. There are, however, many existing solutions, of which the most common ones are:

Disabling all interrupts to protect critical sections
When disabled interrupts are used to prevent priority inversion, there are only two priorities: preemptible, and interrupts disabled. With no third priority, inversion is impossible. Since there is only one piece of lock data (the interrupt-enable bit), mis-ordered locking is impossible, so deadlocks cannot occur. Since the critical regions always run to completion, hangs do not occur. Note that this only works if all interrupts are disabled; if only a particular hardware device's interrupt is disabled, priority inversion is reintroduced by the hardware's prioritization of interrupts.

A simple variation, "single shared-flag locking", is used on some systems with multiple CPUs. This scheme provides a single flag in shared memory that is used by all CPUs to lock all inter-processor critical sections with a busy-wait. Interprocessor communications are expensive and slow on most multiple-CPU systems, so most such systems are designed to minimize shared resources. As a result, this scheme actually works well on many practical systems. These methods are widely used in simple embedded systems, where they are prized for their reliability, simplicity and low resource use. These schemes also require clever programming to keep the critical sections very brief. Many software engineers consider them impractical in general-purpose computers.
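
For illustration, a minimal sketch of the single shared-flag scheme, assuming C11 atomics; the names enter_critical and exit_critical are made up for this example, and a real embedded kernel would normally also disable local interrupts around the region, which is hardware-specific and omitted here.

    #include <stdatomic.h>

    /* One flag in shared memory guards every inter-processor critical section. */
    static atomic_flag global_lock = ATOMIC_FLAG_INIT;

    static void enter_critical(void)
    {
        /* Busy-wait until the flag is clear, then set it atomically. */
        while (atomic_flag_test_and_set_explicit(&global_lock, memory_order_acquire))
            ;  /* spin */
    }

    static void exit_critical(void)
    {
        atomic_flag_clear_explicit(&global_lock, memory_order_release);
    }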


A priority ceiling
With priority ceilings, the shared mutex process (that runs the operating system code) has a characteristic (high) priority of its own, which is assigned to the task locking the mutex. This works well, provided that no high priority task that tries to access the mutex has a priority higher than the ceiling priority.
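
On POSIX systems that support the Priority Protect option, a ceiling can be attached to the mutex itself. The following is a sketch only, with the ceiling value 30 and the function name chosen arbitrarily for this example.

    #include <pthread.h>

    static pthread_mutex_t r_mutex;

    static void init_ceiling_mutex(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* Any task holding r_mutex runs at least at the ceiling priority (30 here). */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
        pthread_mutexattr_setprioceiling(&attr, 30);
        pthread_mutex_init(&r_mutex, &attr);
        pthread_mutexattr_destroy(&attr);
    }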


Priority inheritance
Under the policy of priority inheritance, whenever a high priority task has to wait for some resource shared with an executing low priority task, the low priority task is temporarily assigned the priority of the highest-priority waiting task for the duration of its own use of the shared resource. This keeps medium priority tasks from pre-empting the (originally) low priority task, and thereby delaying the waiting high priority task as well. Once the resource is released, the low priority task continues at its original priority level.
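
On POSIX systems that support the Priority Inheritance option, this policy can be requested per mutex. The sketch below is illustrative (the function name is invented here); swapping such a mutex for the plain one in the earlier L/M/H example would let L run at H's priority while it holds R.

    #include <pthread.h>

    static pthread_mutex_t r_mutex;

    static void init_inheritance_mutex(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* The holder temporarily inherits the priority of its highest-priority waiter. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&r_mutex, &attr);
        pthread_mutexattr_destroy(&attr);
    }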

See also

  • Resource starvation
  • Pre-emptive multitasking
  • Non-blocking synchronization
  • Read-copy-update (RCU)
  • Nice (Unix)
