RC 4000 Multiprogramming System
The RC 4000 Multiprogramming System was an operating system developed for the RC 4000 minicomputer in 1969. It is historically notable for being the first attempt to break an operating system down into a group of interacting programs communicating via a message-passing kernel. Although the RC 4000 itself was not very successful, it was nevertheless extremely influential, sparking the microkernel concept that dominated operating system research through the 1970s and 80s. The system is also known as Monitor and, somewhat confusingly, simply RC 4000, depending on the reference. For clarity, this article will use the term Monitor.

Monitor was created largely by one programmer, Per Brinch Hansen, who worked at Regnecentralen, where the RC 4000 was being designed. Leif Svalgaard participated in the implementation and testing of Monitor. Brinch Hansen found that no existing operating system was suited to the new machine and was tired of having to adapt existing systems. He felt that a better solution was to build an underlying kernel, which he referred to as the nucleus, that could be used to build up an operating system from interacting programs. Unix, for instance, uses small interacting programs for many tasks, transferring data through a system known as pipes; however, a large amount of fundamental code is buried in the kernel itself, notably things like file systems and program control. Monitor would remove this code as well, making almost the entire system a set of interacting programs and reducing the kernel (nucleus) to a communications and support system only.

Monitor used a pipe-like system of shared memory as the basis of its inter-process communication (IPC). Data to be sent from one process to another was copied into an empty memory buffer and, when the receiving program was ready, copied back out again; the buffer was then returned to the pool. Programs passed data through a very simple API consisting of an asynchronous set of four primitives. Client applications sent data with send message and could optionally block using wait answer. Servers used a mirroring pair of calls, wait message and send answer. Every message thus had an implicit "return path", making the semantics more like a remote procedure call than Mach's completely I/O-based system.
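
The following C sketch simulates those four primitives in a single process to make the flow concrete. All names, signatures, and the fixed-size buffer pool are illustrative assumptions, not the RC 4000's actual interface; the copies into and out of the pool mark the double copy that the performance discussion below returns to.

    /* A minimal single-process simulation of Monitor's four message
     * primitives. Everything here is invented for illustration. */
    #include <stdio.h>
    #include <string.h>

    #define POOL_SIZE 8
    #define MSG_BYTES 64

    typedef struct {
        int  in_use;
        int  sender, receiver;   /* process ids (illustrative) */
        char data[MSG_BYTES];    /* request, later overwritten by the reply */
        int  answered;
    } buffer_t;

    static buffer_t pool[POOL_SIZE];   /* the shared pool of message buffers */

    /* Client: copy the message into a free buffer (first copy). */
    static int send_message(int from, int to, const char *msg) {
        for (int i = 0; i < POOL_SIZE; i++) {
            if (!pool[i].in_use) {
                pool[i].in_use = 1;
                pool[i].sender = from;
                pool[i].receiver = to;
                pool[i].answered = 0;
                strncpy(pool[i].data, msg, MSG_BYTES - 1);
                pool[i].data[MSG_BYTES - 1] = '\0';
                return i;              /* buffer index doubles as the "return path" */
            }
        }
        return -1;                     /* pool exhausted */
    }

    /* Server: find a pending message addressed to us and copy it out
     * (second copy). A real kernel would block instead of scanning. */
    static int wait_message(int me, int *from, char *out) {
        for (int i = 0; i < POOL_SIZE; i++) {
            if (pool[i].in_use && pool[i].receiver == me && !pool[i].answered) {
                *from = pool[i].sender;
                strcpy(out, pool[i].data);
                return i;
            }
        }
        return -1;                     /* nothing pending */
    }

    /* Server: write the reply back into the same buffer. */
    static void send_answer(int buf, const char *reply) {
        strncpy(pool[buf].data, reply, MSG_BYTES - 1);
        pool[buf].data[MSG_BYTES - 1] = '\0';
        pool[buf].answered = 1;
    }

    /* Client: collect the reply and return the buffer to the pool. */
    static void wait_answer(int buf, char *out) {
        strcpy(out, pool[buf].data);   /* a real kernel would block first */
        pool[buf].in_use = 0;
    }

    int main(void) {
        char req[MSG_BYTES], reply[MSG_BYTES];
        int from;

        int buf = send_message(1, 2, "read sector 7");  /* client, pid 1 */
        int got = wait_message(2, &from, req);          /* server, pid 2 */
        printf("server got '%s' from pid %d\n", req, from);
        send_answer(got, "sector 7 contents");
        wait_answer(buf, reply);
        printf("client got '%s'\n", reply);
        return 0;
    }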

Monitor divided the application space in two: internal processes were traditional programs, started on request, while external processes were effectively device drivers. External processes were actually handled outside of user space by the nucleus, although they could be started and stopped just like any other program. Internal processes were started in the context of the "parent" that launched them, so each user could effectively build up their own operating system by starting and stopping programs in their own context.
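
As a toy illustration of that parent-child structure, the C sketch below models processes that live inside their parent's context, so stopping a parent tears down everything started under it. The struct, the child limit, and the names are all invented; Monitor's real process table looked nothing like this.

    /* A toy model of Monitor's process hierarchy (illustrative only). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_CHILDREN 4

    typedef struct process {
        char name[16];
        int external;                  /* 1 = driver-like, run by the nucleus */
        struct process *children[MAX_CHILDREN];
        int nchildren;
    } process;

    /* Start a process in the context of its parent (NULL = top level). */
    static process *start(process *parent, const char *name, int external) {
        process *p = calloc(1, sizeof *p);
        strncpy(p->name, name, sizeof p->name - 1);
        p->external = external;
        if (parent && parent->nchildren < MAX_CHILDREN)
            parent->children[parent->nchildren++] = p;
        return p;
    }

    /* Stopping a process stops everything launched in its context. */
    static void stop(process *p) {
        for (int i = 0; i < p->nchildren; i++)
            stop(p->children[i]);
        printf("stopped %s\n", p->name);
        free(p);
    }

    int main(void) {
        process *os   = start(NULL, "user-os", 0);   /* a user's own "OS" */
        process *disk = start(os, "disk", 1);        /* external: a device driver */
        process *job  = start(os, "batch-job", 0);   /* internal: a normal program */
        (void)disk; (void)job;
        stop(os);                                    /* tears down disk and job too */
        return 0;
    }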

Scheduling was left entirely to the programs, if required at all (in the 1960s, multitasking was a debatable feature). One user could start up a session in a pre-emptive multitasking environment, while another might start in a single-user mode to run batch processing at higher speed. Real-time scheduling could be supported by sending messages to a timer process that would only return at the appropriate time.
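
Reusing the hypothetical wait_message/send_answer primitives from the earlier sketch, a timer process along those lines might look like this. TIMER_PID and the message format are invented for illustration, and a real nucleus would block the server rather than poll.

    /* Sketch of the timer-process idea: the client sends a wake-up
     * time and blocks in wait_answer; the answer comes back only
     * once that time has passed. Builds on the earlier sketch. */
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    #define TIMER_PID 3    /* invented id for the timer server */

    void timer_process(void) {
        for (;;) {
            int from;
            char msg[MSG_BYTES];
            int buf = wait_message(TIMER_PID, &from, msg);
            if (buf < 0)
                continue;              /* toy version polls; assume it blocks */
            long wake = atol(msg);     /* requested wake-up time (epoch seconds) */
            while (time(NULL) < wake)
                sleep(1);              /* a real nucleus would block, not poll */
            send_answer(buf, "time elapsed");  /* client's wait_answer returns now */
        }
    }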

Monitor proved to have truly terrible performance. Much of this was due to the cost of IPC, a problem that has since plagued most microkernels: under Monitor, data was copied twice for every message, and memory handling on the RC 4000 was not particularly fast. Another area of serious concern was the constant launching and killing of programs to handle requests.

These two areas have seen the vast majority of development since Monitor's release, driving newer designs to use hardware to support messaging and to support threads within applications to reduce launch times. For instance, Mach required a memory management unit to improve messaging, using the copy-on-write protocol and mapping (instead of copying) data from process to process. Mach also used threading extensively, allowing the external programs, or servers in more modern terms, to easily start up new handlers for incoming requests. Still, Mach IPC proved too slow to make the microkernel approach practically useful; this only changed when Jochen Liedtke's L4 microkernel demonstrated an order-of-magnitude improvement in IPC overheads.
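
The sketch below is a rough POSIX stand-in for the "map instead of copy" idea, not Mach's actual interface: two processes share a single mapped page, so the message is never copied between them at all. (Copy-on-write is what fork applies, transparently, to the rest of the address space.)

    /* Passing a "message" between processes by sharing one mapped
     * page rather than copying bytes. POSIX illustration only. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* One shared, anonymous page: both processes map the same frame. */
        char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return 1;

        if (fork() == 0) {             /* "sender": writes in place */
            strcpy(page, "message passed by mapping, not copying");
            _exit(0);
        }
        wait(NULL);                    /* "receiver": reads the same frame */
        printf("%s\n", page);
        return 0;
    }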