Unisys OS 2200 operating system
OS 2200 is the operating system currently used for the Unisys ClearPath Dorado family of mainframe systems. OS 2200 is a lineal descendant of Exec 8 for the UNIVAC 1108. The name Exec 8 was shorthand for “Executive System for the UNIVAC 1108.” The UNIVAC 1108 computer was announced in 1964 and delivered in late 1965. The first 1108 computers used Exec I and Exec II, which had been developed for the UNIVAC 1107. However, UNIVAC planned to offer symmetric multiprocessor versions of the 1108 with up to four processors, and the earlier operating systems (really basic monitor programs) were not designed for that. The specifications for Exec 8 were drawn up by December 1964 as a preliminary Programmers Reference Manual (user guide), and work began in May 1965.
Documentation and other information on current and past Unisys systems can be found on the Unisys public support website and the Unisys eCommunity.
History
There were earlier 1100 systems going back to the 1101 in 1951, but the 1108 marked the introduction of the first 1100 Series computer designed for a modern operating system. Previous systems were all real-mode systems with no hardware support for protection and separation of programs and operating system. While there was support for multi-programming in previous software, it was limited to running one user job alongside multiple well-behaved supporting functions such as card reader, printer, and card punch spoolers.
Along with the new hardware came the introduction of an operating system known as Exec 8 (the Executive System for the UNIVAC 1108). Previous system software instantiations (Exec I, Exec II) are more properly known as monitor systems, even though they did support limited multi-programming. Exec 8 was designed from the very beginning as a multi-programming, multi-processing operating system, since the 1108 was designed to have up to four CPUs. The size and cost of memory and of mass storage were the primary constraints. The predecessor systems had been heavily used for scientific computing, process control, message switching, and other real time functions, and while the 1100 Series was envisioned as targeting a more general market, extreme real time processing remained a primary requirement.
When the UNIVAC 1110 was introduced in 1972, the operating system name was changed to OS 1100 to reflect its support for the wider range of systems. The name OS 1100 was retained until 1988, when the introduction of the Sperry 2200 Series as a follow-on to the 1100 Series prompted a change to OS 2200. Since that time the 2200 Series became the Unisys ClearPath IX Series and then the Unisys ClearPath Dorado Series, but the operating system retained the OS 2200 name.
The company name and its product names also changed over time. Engineering Research Associates (ERA) of Saint Paul was acquired by Remington Rand Corporation. Remington Rand also acquired the Eckert–Mauchly Computer Corporation of Philadelphia, which was then building the UNIVAC I computer. The two were combined into the UNIVAC division of Remington Rand under the direction of William Norris, who had been one of the founders of ERA and later left Remington Rand to start Control Data Corporation. The UNIVAC division of Remington Rand became the UNIVAC division of Sperry Rand Corporation after Remington Rand merged with Sperry Corporation. In the 1970s Sperry Rand began a corporate identity program that changed its name to Sperry Corporation and all the division names to Sperry ___, so that the computer systems division became Sperry UNIVAC. Later the division names were dropped and everything simply became Sperry. The operating system is still referred to as “the Exec” by most Unisys and customer personnel.
In 1986 the Burroughs and Sperry corporations merged to become Unisys (which some long-time 2200 Series clients say stands for “UNIVAC Is Still Your Supplier”). The major mainframe product lines of both companies have continued in development, including the MCP operating system from Burroughs and OS 2200 from Sperry.
Design focus
See Unisys 2200 Series system architecture for a description of the machine architecture and its relationship to the OS 2200 operating system.
Exec 8 was designed to run on systems that often had only 128K words (~512K bytes) of memory and was focused on real time batch processing. While the earliest release levels did work in 128KW (usually called 131K by UNIVAC), increasing function in subsequent releases made that untenable, as it didn't leave enough space for programs of any useful size. The maximum memory capacity of an 1108 was 256KW (called 262K, or about 1MB), so efficient use of memory was probably the single most important constraint. The core memory of the time was also the single most expensive part of the system. Mass storage devices weren't much better. High speed storage consisted of drum memories that held 256KW (FH-432) to 2MW (FH-1782). The highest capacity mass storage was the FASTRAND drum, which held 22MW or about 100MB.
With the memory constraints and the real time focus, the idea of having only a single copy of code loaded in memory for multiple uses was included as a requirement. Since the 1108 was a multi-processing system, the operating system was designed to be fully reentrant (thread safe in today’s terminology) and to use fine-grained locking to protect shared data during updates. The design included having compilers, utilities, and even user applications that might have multiple copies running concurrently written so that their code could be shared. This would require loading only one copy of the code in memory which saved both space and the time it took to load the code. Each use would get its own copy of the data.
A supporting idea in the hardware was that the memory was designed with two physical banks (cabinets), each with its own access paths. Thus a CPU could access both banks simultaneously. By loading the code into one bank and data into the other, the average execution time of the instructions could be halved. So it was natural for this reason as well to separate code and data into different load entities. These were called banks (from the memory banks, and because “segment” referred to a commonly used programming technique known as overlay segments).
Code that was going to be shared had to be thread safe, which means it had to be read/execute only. Writing into the code stream was not allowed if the code was going to be shared. While writing into the code stream was still a fairly common programming technique in 1967, 1100 Series users were encouraged not to do so simply because of the performance benefit. The security and reliability benefits were touted but not highly valued by customers at that time.
Exec 8 began as a real time operating system, with early use for message switching, process control of simulators and missile firings, and general scientific and engineering work. It was primarily a batch processing system that allowed applications very fine control of processor priority for each thread (called an activity). All processor switching was preemptive, with higher priority threads gaining control of the processor currently running the lowest priority thread of any program. It was a multi-programming and multi-processing operating system with fully symmetric processor management. A test-and-set instruction built into the hardware allowed very efficient and fine-grained locking both within the OS and within multi-threaded applications.
In 1968 work began on adding time-sharing capability to Exec 8, and it was delivered with level 23 of Exec 8 in 1969. Time sharing (called Demand, from “on demand”) provided capabilities identical to batch and real time processing. Everything that could be done in batch could be done in Demand mode. In Demand mode the job stream input and output were attached to a terminal handler rather than to spool files. The same job control language (ECL – Executive Control Language) was used for both. A few years later, more specific time sharing capabilities were added to allow some ECL statements to be issued for immediate processing even while a program was running. At first these were just the statements to kill the program or redirect output to a file, but later all control statements related to files and devices were also allowed to be “immediate.”
In Exec 8 work is organized into jobs, called “runs,” which are scheduled based on their priority and their need for specific resources such as tape reels or exclusive control of files. The control language ECL uses the “@” character as the control statement recognition symbol, followed by the command or program name. That is then followed by a comma and option letters, a space, and then positional parameters that depend on the nature of the specific command. So a command to compile a C module might look like “@UC,opt source,object”. Input data for an application may be read from a file or appear in line following the command that starts the application. All lines that do not have the leading “@” are presumed to be in-line data.
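That syntax can be illustrated with a short sketch. The following Python fragment is not part of OS 2200; it simply applies the rules just described (a leading “@” marks a control statement; the command may be followed by “,options”, a space, and comma-separated positional fields; every other line is in-line data) to a small, hypothetical run stream whose statement and file names are assumptions for the example.

```python
import re

# Hypothetical run stream: "@" lines are control statements, the rest is in-line data.
RUNSTREAM = """\
@RUN MYRUN,ACCT/USER,PROJECT
@UC,opt source,object
@XQT object
10 20 30
40 50 60
@FIN
"""

# Matches "@command[,options] [positional,fields]" per the syntax described above.
CONTROL = re.compile(r"^@(?P<cmd>[A-Za-z0-9$]+)(?:,(?P<opts>\S*))?(?:\s+(?P<fields>.*))?$")

for line in RUNSTREAM.splitlines():
    m = CONTROL.match(line)
    if m:
        fields = [f.strip() for f in (m.group("fields") or "").split(",") if f.strip()]
        print("control:", m.group("cmd"), "options:", m.group("opts") or "", "fields:", fields)
    else:
        print("in-line data:", line)
```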
A transaction processing capability was developed in the late 1960s as a joint project with United Airlines, and later refined in another joint project with Air Canada. This capability was fully integrated into the operating system in 1972 and became the basis of much of the future growth of the 1100 Series. Early users controlled communication lines directly from within their real time programs. Part of the development of transaction processing included a communication message system that managed the communication lines and presented messages to Exec 8 to be scheduled as transactions. This moved all the low level physical line management and protocols out of the applications and into the CMS 1100 application.
CMS 1100 itself ran as a real time multi-threaded program with the privilege of acquiring control of communication lines and submitting transaction messages for scheduling. This led to the notion in Exec 8 that applications of any nature needed to be carefully controlled to ensure that they could not cause integrity problems. Security was certainly a concern, but in the early days system reliability and integrity were much larger issues. The system was still primarily batch and transaction processing, and there was little chance that anyone could install unauthorized code on the system. CMS 1100 later added the capability to be the interface for Demand terminals as well as transaction terminals, so that terminals could be used for both and the early terminal drivers could be removed from the Exec. CMS 1100 was later replaced by CPCOMM.
As of 2010 the OS 2200 release is level 48R6 (48.370.4). The internal system levels use a three-part number such as 21.92.42 (which was the first widely used production system, although earlier releases were used in production at a number of sites). The first part is the major level and indicates a new version of the Exec with all previous updates integrated into a new base version. This is an infrequent process that occurs at intervals of years. The second part indicates an update version of the major level; new update versions are often built several times per week. When a decision is made to freeze the feature content and prepare for release, the third part comes into play and indicates versions of the pre-release level as fixes and minor feature updates are applied. Concurrently with preparing a level for release, updates to the “mainline” continue as engineers integrate changes in preparation for a future release. For many years the official release level was the full three-part number. Later releases have been named simply 44R1, 44R2, 45R1, and so on, although the three-part number is still used internally.
OS 2200 Operating System (The EXEC)
The Exec contains all the code in the system that is allowed to run at the highest privilege levels. There are no mechanisms for other code to be promoted to those privilege levels. The Exec is responsible for managing the system hardware, scheduling and managing work, and communicating with operators and administrators.
Performing Work
The Exec is at heart a real time, multi-threaded batch processing system. Everything has been built around that model. The Exec itself is largely structured as a real time program. Functions that are performed as services in Windows or daemons in Linux and UNIX are implemented as either activities within the Exec or as batch programs that are always running in the background.
Time-sharing (known as Demand, from “on demand”) and transaction processing are implemented as special cases of batch. One result is that there are few restrictions on what a time-sharing user or transaction program can do. There are many warnings for writers of transaction programs that they will not be happy with performance if, for example, they call for a tape mount, but it is permitted.
The largest unit of work is the “Run.” This is taken from the factory “production run” terminology and generally equates to job or session on other systems. A Run is defined by its “run stream.” A run stream is a sequence of control statements that represent the steps to be taken. They may include file handling, program execution, and branches of control. A batch Run is typically stored as a file and is scheduled by a “Start” command from within another Run or by the operator. A time sharing Run is initiated by logging in from a time-sharing terminal and inputting the @RUN command. Often the @RUN statement and the second control statement (often @ADD or a program execution) are generated automatically based on the user profile. Security authorizations are validated based on the authenticated user-id and other information supplied on the Run control statement.
Transactions are a special case. There aren’t actually any control statements, but the internal data structures of a run are created. This enables the Exec to associate the same security, accounting, debugging, etc. mechanisms with transaction programs. Generally a security profile is cached in memory at the time the transaction user is authenticated and is copied from the user’s session data to the transaction run state when the transaction is scheduled. Because each transaction instance is essentially a Run, accounting, logging, and error handling are all encapsulated by the Run mechanism.
Batch
Batch jobs (Runs) are characterized by having a runstream (job control language statements) stored in a file. A batch job always contains an @RUN statement as the first record in the file. This statement gives the run a name (runid), defines priorities, and defines the maximum number of SUPS (Standard Units of Processing) the job is expected to use. The job is started from some other job with an @START control statement or by the operator via an ST keyin. The system may be configured to automatically issue @START statements for any number of jobs when it boots. These jobs serve the purpose of performing initialization, recovery, and background functions.
All of the fields on the @RUN statement may be overridden by corresponding fields on the @START statement. Except when the @START is executed by a privileged user, the userid and other security state are always taken from the run doing the @START.
There are two priority fields on the @RUN statement. One is used to specify the backlog priority. There are 26 backlog priority levels (A–Z). The Exec has a configured maximum number of open batch runs. When that level is reached, jobs are selected from the backlog queues in priority order. Within a priority, selection is usually FIFO. However, the Exec pre-scans the job control statements up to the first program execution, looking for file names and reel numbers. If a job would immediately stall because some resource it needs is not available, it may be bypassed to start other jobs at the same priority level.
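That selection rule can be sketched as follows. The job record, the resource-availability test, and the assumption that “A” is the highest backlog priority are illustrative only; the Exec's actual data structures are not described here.

```python
from collections import deque

# Hypothetical job record: name, backlog priority 'A'..'Z', and the resources
# (file names, tape reel numbers) found by pre-scanning its control statements.
class Job:
    def __init__(self, name, priority, needed_resources):
        self.name, self.priority, self.needed = name, priority, set(needed_resources)

def select_next_job(backlog, available_resources, open_runs, max_open_runs):
    """Pick the next batch job to open, per the rules described above."""
    if open_runs >= max_open_runs:
        return None                        # configured maximum of open batch runs reached
    for priority in map(chr, range(ord("A"), ord("Z") + 1)):   # sketch assumes 'A' is highest
        queue = backlog.get(priority, deque())
        for job in list(queue):            # FIFO within a priority...
            if job.needed <= available_resources:
                queue.remove(job)
                return job                 # ...but a job that would stall is bypassed
    return None

# Example: the reel for JOB1 is not mounted, so JOB2 at the same priority starts first.
backlog = {"A": deque([Job("JOB1", "A", {"REEL123"}), Job("JOB2", "A", set())])}
print(select_next_job(backlog, available_resources=set(), open_runs=0, max_open_runs=10).name)
```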
The second priority field defines an execution processor resource group. Processor allocation is discussed in more detail below, but higher execution group priorities typically receive more processor time.
While the OS 2200 job control language does not support full programmability, it does allow dynamic additions of sequences of control language through an @ADD control statement. The file to be added may have been created by the same job immediately preceding adding it. The @ADD and most other control statements may also be submitted from within a running program via an API. Additional programmability is available indirectly through the use of the Symbolic Stream Generator (SSG). SSG is a programming language for manipulating and creating text files from input parameters and system information. It is used heavily for configuration management (make) processing and other functions where text images need to be created programmatically. The resulting output can be “@ADD”ed in the same run thus providing the indirectly programmable runstream.
Operator keyins are available to change both the backlog and execution priorities of runs. As all operator keyins are available by API to suitably privileged users, this can be automated or controlled by a remote administrator.
Deadline is a special case of batch. A deadline run looks just like any other batch run, except that a deadline time is specified on the @RUN or @START control statement. The deadline time is used in conjunction with the maximum SUPS (time estimate) on the control statement. A deadline job runs at normal batch priorities unless or until it appears that it could miss its deadline time. Then the greater the mismatch between the time remaining until the deadline and the remaining SUPS, the higher the priority. While deadline can't totally shut off transactions and has no effect on real time, it can effectively shut off most other processing in the system if necessary to achieve its goal.
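One plausible way to picture the escalation is sketched below. The escalation function itself is not documented above, so the formula and numbers are purely illustrative; the only point is that priority rises as the remaining SUPS estimate approaches the time left before the deadline.

```python
def deadline_priority(normal_priority, max_priority, remaining_sups, time_to_deadline):
    """Illustrative only: boost priority as the remaining work approaches the time left.

    remaining_sups and time_to_deadline are assumed to be in the same (arbitrary) units.
    """
    if time_to_deadline <= 0 or remaining_sups >= time_to_deadline:
        return max_priority                     # already at risk: run as hot as allowed
    slack = (time_to_deadline - remaining_sups) / time_to_deadline
    # Full slack (near 1.0) -> normal batch priority; no slack -> maximum boost.
    return round(normal_priority + (max_priority - normal_priority) * (1.0 - slack))

print(deadline_priority(10, 60, remaining_sups=30, time_to_deadline=120))  # plenty of slack
print(deadline_priority(10, 60, remaining_sups=90, time_to_deadline=100))  # nearly late
```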
OS 2200 time-sharing sessions are called demand runs. They use the same control language as batch runs with a few additions known as “immediate” control statements. Immediate control statements use the “@@” sentinel which indicates that they are to be executed immediately even if a program is running. While they can be used to create or assign files, the most important ones allow a demand user to error terminate a running program or even send it a signal.
Transactions execute as runs but without any stored or submitted control statements. Instead when a message is received from a session defined as a transaction session, it is scanned to determine the transaction queue on which it is to be placed. This is normally determined by the first characters of the message but user-written scanners may be added.
The communication manager, which is capable of handling up to 250,000 active sessions, takes incoming transaction messages and passes them to the message queuing software. It can handle an unlimited number of queued messages using the message queuing architecture. A call is made to the Transaction Interface Package (TIP) APIs in the operating system to queue the transaction on the appropriate queuing point. Each queuing point identifies the priority and concurrency level of the work and the associated transaction program to be executed.
A transaction program scheduling tree allows the client to establish relative usage for groups of transaction programs. Concurrency limits avoid one type of work dominating the system to the exclusion of other work and avoid creating an over commitment of resources. Up to 4094 nodes may be created in the tree.
A priority (0 to 63) and a concurrency level (1 to 512) are specified for each transaction program.
The highest priority transaction is selected for scheduling except as limited by the concurrency policies in effect for its node and higher nodes.
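A rough sketch of that selection rule follows, assuming a simple tree representation and that larger numbers mean higher priority; the node and transaction structures are illustrative, not the Exec's actual ones.

```python
class Node:
    """One node of a hypothetical transaction scheduling tree."""
    def __init__(self, name, limit, parent=None):
        self.name, self.limit, self.running, self.parent = name, limit, 0, parent

def under_limits(node):
    """True if this node and every ancestor are below their concurrency limits."""
    while node is not None:
        if node.running >= node.limit:
            return False
        node = node.parent
    return True

def schedule_next(queued):
    """queued: list of (priority 0-63, node) pairs; larger number assumed to mean higher priority."""
    for priority, node in sorted(queued, key=lambda t: t[0], reverse=True):
        if under_limits(node):
            walk = node
            while walk is not None:        # count the new instance at this node and all ancestors
                walk.running += 1
                walk = walk.parent
            return priority, node.name
    return None                            # everything eligible is at its concurrency limit

root = Node("root", limit=100)
reservations = Node("reservations", limit=2, parent=root)
queries = Node("queries", limit=1, parent=root)
queries.running = 1                        # one long-running query already in flight
print(schedule_next([(40, queries), (30, reservations)]))   # -> (30, 'reservations')
```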
Real time is not another type of run. Rather it is a set of priority levels which any activity may request. Real time is most typically used by long running batch programs, like the OS 2200 communications manager CPComm, but is not restricted to such.
There are 36 real time priority levels available by API for applications to use. The user and account must have the privilege to use real time priorities. It is up to the site to control how their applications use the priority levels. Real time priorities totally dominate all lower priorities so it’s quite possible for a misbehaved real time program to tie up one or more processors.
The real time priority applies to an individual activity (thread) so a program may have both real time and non-real time threads executing at the same time.
Once a run has been started, getting access to the processor controls its rate of progress. The heart of the Exec is the Dispatcher, which manages all the processors.
The Exec supports up to 4095 dispatching priorities although most sites define only a small subset of those. The two highest “priorities” aren’t switchable. They are recognition of certain types of processing that must be allowed to continue on the processor on which they started until they voluntarily give up control. Interrupt lockout occurs when an interrupt comes in or in a few special cases when other Exec code prevents all interrupts (in order to change some data that an interrupt handler may also access).
Interlock is used by interrupt post processing routines that either need to run on the same physical processor or simply should not be interrupted. The Dispatcher, I/O completions, and I/O initiation are some examples. All locks used by both of these priorities are spin locks as the only way they can be set by someone else is on another processor and the design requires that they only be set for very short instruction sequences.
High Exec priority is used by the operator keyin service and some other functions that may have to run even when a real time program has control. They are expected to use only very short amounts of time. If they need more time, they should queue the work to be processed by a Low Exec activity.
Real time activities have an unlimited processor quantum and run without switching unless interrupted by a higher priority real time activity or High Exec activity. Real Time activities are given control of any available processor that is running something of lower priority. Interrupts are sent between processors when necessary to ensure immediate availability. Real time is used by customers to fly missiles, run simulators, and other functions that require immediate response. OS 2200 has long guaranteed real time events get control of a processor within 50 microseconds.
Transaction priorities may be handled in two ways as defined by the site. They may be a sort of lower priority real time in that only the priority matters and the quantum size is essentially infinite. This is appropriate for very short-lived transactions such as airline reservations; if one loops due to a programming error, the Exec will terminate it when it reaches its very small configured maximum time. The other form allows the Exec to vary the priority within a range to optimize system resource usage. The approach gives higher priority and shorter time slices to programs that are I/O limited and progressively lower priorities but longer time slices to those that are computing. The Exec dynamically adjusts these priorities based on behavior as programs often behave both ways at different times. This approach is appropriate for longer running transactions like database queries or airline fare quotes.
Batch and demand always use dynamically adjusted priorities. Programs that are I/O limited or are in a conversation with a time-sharing user get higher priorities but short time slices. More compute-oriented programs get lower priorities and longer time slices.
The Exec has two additional mechanisms for optimizing dispatching. One is affinity-based dispatching. When possible the Exec will run an activity on the same processor that it was on the last time to get the greatest advantage of residual cache contents. If that isn’t possible it tries to keep the activity on the “nearest” processor from the standpoint of cache and memory access times. The second is a “fairness” policy mechanism. The site can define the relative percentage of resources to be allocated to each of transactions, demand and batch. Within transactions and batch there are priority groupings that can further indicate what percentage of their group’s time is to be allocated to the priority. This ensures that transactions cannot so dominate the system that no batch work gets done. Within the various priority groupings it ensures that some progress can be assured for each group (unless the group percentage is zero). These “fairness” algorithms only come into play when the processors are very busy, but OS 2200 systems often run with all processors at near 100% utilized.
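The fairness policy can be pictured with the following sketch. Only the existence of site-configured group percentages is described above; the deficit-style bookkeeping, the group names, and the sample percentages are assumptions made for illustration, and the mechanism matters only when the processors are saturated.

```python
# Illustrative only: pick the next dispatching group when the processors are saturated,
# favoring whichever group is furthest below its configured share of recent CPU time.
SHARES = {"transactions": 0.70, "demand": 0.20, "batch": 0.10}   # site-configured percentages
USED   = {"transactions": 0.0,  "demand": 0.0,  "batch": 0.0}    # recent CPU time per group

def next_group(runnable):
    """runnable: set of group names that have work ready to run."""
    total = sum(USED.values()) or 1.0
    candidates = [g for g in runnable if SHARES[g] > 0.0]        # a zero share gets nothing here
    if not candidates:
        return None
    # The group with the largest shortfall against its configured share goes first.
    return max(candidates, key=lambda g: SHARES[g] - USED[g] / total)

def account(group, cpu_seconds):
    USED[group] += cpu_seconds

account("transactions", 7.0); account("batch", 3.0)
print(next_group({"transactions", "demand", "batch"}))   # demand has had none of its 20%
```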
OS 2200 supports several models for system performance management. Customers may purchase a certain fixed performance level, and the Exec will monitor processor usage to ensure that performance does not exceed that level. Customers can also purchase additional performance either temporarily or permanently up to the full capacity of the system if their workload increases or an emergency requires it.
More recently the system has added a metered usage capability. In this mode the full power of the system is always available to the customer (although they may administratively limit that). The usage is accumulated over a month and then the reported usage is submitted to Unisys billing. Depending on the specific contract terms the client may receive a bill for excess usage above some contracted baseline for the month or just a statement showing that the total contracted usage has been decremented. The first form is like a cell phone bill with the potential for charging for excess minutes. The latter is like buying a pre-paid phone card.
OS 2200 does not provide a hierarchical file system as do most other operating systems. Rather it has a structured naming convention and the notion of container files called program files.
Files in OS 2200 are simply containers that may be addressed either by word offset in the file or by sector (28-word unit) offset in the file. The 28-word sector is a historical unit from an early mass storage device (the FASTRAND drum) that could hold 64 such units per physical track. Nonetheless, it is a fortunate historical accident. Four such 28-word units, or 112 words, occupy 504 bytes. With today's mass storage devices all using 512-byte physical records, OS 2200 clients have almost all adopted some multiple of 112 words as their physical record size and database page size. I/O processors automatically adjust for the 504↔512 byte mapping, adding 8 bytes of zeros on writes and stripping them off on reads of each physical record. OS 2200 handles applications that use sizes other than multiples of 112 words by indivisibly reading the containing physical records and writing back out the unchanged and changed portions with data chaining. Special locking functions guarantee indivisibility even when there are device errors and across multiple systems in a cluster.
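The arithmetic behind the 504↔512 mapping is easy to verify with a short sketch (the function and variable names are illustrative only): 28-word sectors of 36-bit words mean that a four-sector, 112-word block is exactly 504 bytes, which the I/O processors pad to a 512-byte physical record with 8 bytes of zeros.

```python
WORD_BITS    = 36          # 1100/2200 word size
SECTOR_WORDS = 28          # historical FASTRAND sector

def words_to_bytes(words):
    return words * WORD_BITS // 8

block_words = 4 * SECTOR_WORDS            # the common 112-word record/page unit
payload = words_to_bytes(block_words)     # 112 words * 36 bits = 4032 bits = 504 bytes
padding = 512 - payload                   # zeros appended on writes, stripped on reads

print(block_words, payload, padding)      # -> 112 504 8
```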
File formats and other internal data structures are described in the Data Structures Programming Reference Manual.
Any file may be used as a program file. A program file contains elements, which generally act as files. Element naming is Q*F(c).Element/version(e-cycle) (e.g., "PERSONNEL*PROGRAMS.TAXCALC/2008"). Element and version are twelve-character names used in any way a user desires. E-cycle is similar to f-cycle in that it represents a generation number, but without the restriction to 32 concurrent cycles; the limit is 256K cycles. However, e-cycle only applies to text elements, and each line in a text element is marked with the cycle numbers at which it was inserted and deleted. Elements also have a type and sub-type. The most commonly used types are “text” and “object.” By default an object element has the same name as the text element from which it was created. Generally, which one is meant is obvious from context; where it is not, options select the appropriate type. Text elements also have sub-types that typically represent the programming language (e.g., “ASM”, “C”, “COB”, “FOR”).
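The naming convention can be made concrete with a small parsing sketch. The regular expression below simply mirrors the Q*F(c).Element/version(e-cycle) form described above; it is an illustration, not a Unisys-supplied parser, and the sample name is the one used in the text.

```python
import re

# Qualifier*File(f-cycle).Element/Version(e-cycle); the parts after the file name are optional.
ELEMENT_NAME = re.compile(
    r"^(?P<qualifier>[^*]+)\*(?P<file>[^(.]+)(?:\((?P<f_cycle>[^)]+)\))?"
    r"\.(?P<element>[^/(]+)(?:/(?P<version>[^(]+))?(?:\((?P<e_cycle>[^)]+)\))?$"
)

m = ELEMENT_NAME.match("PERSONNEL*PROGRAMS.TAXCALC/2008")
print(m.groupdict())
# {'qualifier': 'PERSONNEL', 'file': 'PROGRAMS', 'f_cycle': None,
#  'element': 'TAXCALC', 'version': '2008', 'e_cycle': None}
```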
An object element may be executed if it is a main program or linked with other object elements including a main program. The linking may be static or dynamic. A main program may be executed without pre-linking provided all required sub-programs are in the same program file, are system libraries, or are otherwise known. Rules may be included in a program file to direct the dynamic linker’s search for unfulfilled references. The linker may also be used to statically link multiple object modules together to form a new object module containing all instructions, data, and other information in the original object modules.
Omnibus elements may be used as data by applications or may serve to hold structured information for applications and system utilities. There is no assumed structure to an omnibus element.
For compatibility with earlier (basic mode) programming models, there are relocatable and absolute element types. Relocatable elements are the output of basic mode compilers. They may be combined by the basic mode static linker (@MAP – the collector) to form an “absolute” element which is executable.
Any reference to an unloaded file will be queued while the file is staged back to mass storage. The whole system is automatic and generally transparent to users.
OS 2200 does not provide access methods. Files are simply containers. Access methods are provided by the language run time systems and the database manager. The one exception is a fixed-block access method provided for high-volume transaction processing. It has much less overhead than the database manager, but does participate in all locking, clustering, and recovery mechanisms.
Today they are still used to place files, usually database files or transaction files, on one or more disk volumes. Files may still span multiple disk volumes; now, however, the list of volume names is given when the file is created. Files that are on such volume groups are still backed up but are not subject to automatic virtual space management.
OS 2200 also supports the Common Internet File System (CIFS). With CIFS, OS 2200 appears to be both a file server and a file client to other CIFS-compliant systems. This includes desktop PCs running Windows.
In order to maintain OS 2200 security, two levels of protection are provided. First, OS 2200 files are not visible to the network until they have been declared as “shares” with a CIFS command. A specific privilege exists to control who may declare a share. The second level of control is that all access is still protected by OS 2200 security. Clients accessing OS 2200 via CIFS will either have to be automatically identified via NTLM or Kerberos, or they will be presented with a query for their OS 2200 user id and password.
CIFS allows OS 2200 files to be presented in a hierarchical view. Typically the qualifier will appear as the highest level in the tree followed by filename, element name, and version. In addition, files may be stored on OS 2200 servers using the full Windows filename format. Windows applications will see OS 2200 as another file server.
OS 2200 applications have APIs available to read and write files existing on other CIFS-compliant servers, such as Windows file servers, in the network. Text files are automatically converted to and from OS 2200 internal formats. Binary files must be understood by the application program.
The database managers, run time libraries, messaging system, and many other system functions are implemented as subsystems. Some subsystems, usually consisting of pure code, such as the run time libraries, may be the direct target of a Call instruction without requiring a gate. These subsystems run in the user program's protection environment. Other subsystems, such as the database managers, consist of code and data or privileged code and may only be called via a gate. These subsystems may also have access control lists associated with them to control who may call them. More importantly, the gate controls the specific entry points that are visible, the protection environment in which the subsystem will run, and often a user-specific parameter that provides additional secure information about the caller.
OS 2200 security was designed to the requirements of the TCSEC (Orange Book) B1 level specification. OS 2200 first obtained a successful B1 evaluation in September 1989. That evaluation was maintained until 1994. After that point, OS 2200 developers continued to follow the development and documentation practices required by the B1 evaluation.
Central to a B1 system are the concepts of users and objects. Users have identities, clearance levels, compartments and privileges. Objects require certain combinations of those for various types of access. Objects in OS 2200 consist of files, protected subsystems, devices, and tape reels.
The security profile of a user session includes the user identity, clearance level (0–63), compartment set, and set of allowed privileges. OS 2200 implements both Mandatory Access Control (MAC) and Discretionary Access Control (DAC) based on the Bell-La Padula model for confidentiality (no read up, no write down) and the Biba integrity model
(no read down, no write up). For a run to read or execute a file, the run’s executing clearance level must be greater than or equal to the clearance level of the file, and the file’s clearance level must be 0 or within the clearance level range of the run; in addition, the run's executing compartment set must contain the file's compartment set. Because OS 2200 combines the Bell-La Padula and Biba model requirements, a run's executing clearance level and compartment set must exactly match those of a file to permit writing to the file or deleting it.
DAC associates an access control list with an object; the list identifies users and user groups that have access and defines the type of access that user or group is allowed (read, write, execute, or delete).
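The MAC and DAC rules in the preceding paragraphs translate almost directly into code. The sketch below is illustrative only; the record types are assumptions, and the run's clearance-level range is simplified to a single executing clearance level. It encodes the stated rules: read or execute requires the run's clearance to dominate the file's and its compartment set to contain the file's, write or delete requires an exact match, and DAC then consults the object's access control list.

```python
from dataclasses import dataclass, field

@dataclass
class Subject:                       # the run's executing security profile
    clearance: int                   # 0-63
    compartments: frozenset
    user: str
    groups: frozenset = frozenset()

@dataclass
class Obj:                           # a file or other protected object
    clearance: int
    compartments: frozenset
    acl: dict = field(default_factory=dict)   # "user:NAME" / "group:NAME" -> allowed accesses

def mac_allows(run: Subject, obj: Obj, access: str) -> bool:
    if access in ("read", "execute"):
        # no read up (Bell-La Padula) plus the compartment containment rule
        return run.clearance >= obj.clearance and run.compartments >= obj.compartments
    if access in ("write", "delete"):
        # combined Bell-La Padula/Biba effect: clearance and compartments must match exactly
        return run.clearance == obj.clearance and run.compartments == obj.compartments
    return False

def dac_allows(run: Subject, obj: Obj, access: str) -> bool:
    entries = [obj.acl.get(f"user:{run.user}", set())]
    entries += [obj.acl.get(f"group:{g}", set()) for g in run.groups]
    return any(access in allowed for allowed in entries)

run = Subject(clearance=5, compartments=frozenset({"PAYROLL"}), user="ANN")
fil = Obj(clearance=3, compartments=frozenset({"PAYROLL"}), acl={"user:ANN": {"read"}})
print(mac_allows(run, fil, "read") and dac_allows(run, fil, "read"))   # True
print(mac_allows(run, fil, "write"))                                   # False: levels differ
```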
OS 2200 provides a fine-grained security mechanism based on the principle of least privilege. This principle demands that only the minimum privilege necessary to perform the required task be granted. Thus, OS 2200 has no “Super User” or “Administrator” concept. Rather it uses a large set of specific privileges which may be granted separately. Each privilege is associated with a specific authority.
The security officer or the user who creates an object may create an access control list for that object. The default is that the object is private to the creating user.
Because the full set of B1 controls is too restrictive for some environments, system administrators can configure servers by choosing which controls to apply. A set of security levels from Fundamental Security through Security Level 3 serves as a starting point.
OS 2200 offers a flexible authentication system. Multiple authentication mechanisms are supported concurrently. Client- or third party-written authentication software may also be used. Standard authentication capabilities include:
The last two permit the use of biometrics, smart cards, and any other authentication mechanism supported by those technologies.
OS 2200 provides the Cipher API, a software subsystem that offers certified implementations of the AES and 3DES algorithms. Cipher API also supports the use of a hardware accelerator card for bulk data encryption.
CPCOMM provides SSL/TLS encryption for data in transit.
OS 2200 also supports encrypting tape drives, which provide encryption for archive data.
OS 2200 systems may be clustered to achieve greater performance and availability than a single system. Up to 4 systems may be combined into a cluster sharing databases and files via shared disks. A hardware device, the XPC-L, provides coordination among the systems by providing a high-speed lock manager for database and file access.
A clustered environment allows each system to have its own local files, databases, and application groups along with shared files and one or more shared application groups. Local files and databases are accessed only by a single system. Shared files and databases must be on disks that are simultaneously accessible from all systems in the cluster.
The XPC-L provides a communication path among the systems for coordination of actions. It also provides a very fast lock engine. Connection to the XPC-L is via a special I/O processor that operates with extremely low latencies. The lock manager in the XPC-L provides all the functions required for both file and database locks. This includes deadlock detection and the ability to free up locks of failed applications.
The XPC-L is implemented with two physical servers to create a fully redundant configuration. Maintenance, including loading new versions of the XPC-L firmware, may be performed on one of the servers while the other continues to run. Failures, including physical damage to one server, do not stop the cluster, as all information is kept in both servers.
The rest of the console is used as a scrolling display of events. When a message is issued that requires an operator response, it is given a number from 0 to 9 and remains on the display until it is answered. Tape mount messages do scroll with other messages but will be repeated every two minutes until the tape is mounted.
Operations Sentinel is used for all OS 2200 operations. OS 2200 consoles are simply windows within an Operations Sentinel display. There may be as many display PCs as desired. Remote operation is typical. Operations Sentinel supports any number of ClearPath, Windows, Linux, and UNIX systems.
An auto-action message database is released with the product. This database allows Operations Sentinel to recognize messages. Scripts may be written to automatically respond to messages that require a response, hide unwanted messages, translate them to other languages, create events, etc. Full dark room operation is used by some clients. At most they will have Operations Sentinel displays at remote locations monitoring the system and creating alerts when certain events occur.
Other tools are specific to the security officer and allow creation of users, changing allowed privileges, changing system security settings, and so on.
Most of the tools have a GUI, although some do not. All provide a batch stored-file interface where all actions are specified in the control stream. This allows scripting of any and all of the administrative interfaces, either from local sites, perhaps based on time of day or other events, or from remote sites. Unique privileges are required for each administrative area.
The notion of application group corresponds to what is often called “an application.” That is, a set of programs and data that represent some larger unit of connected processing. For example, an application group might represent an airline system. Another application group might represent the corporate finance system. Or, application groups might represent instances of the same application and data models, as in bank branches. The important thing is that each application group has its own environment, sessions, recovery, etc.
Application groups may be started, stopped, and recovered independently.
Application groups do not have their own accounting and scheduling rules. Transactions in multiple application groups may share the same priorities and have interleaved priorities. This permits the site to control the relative priorities of transactions across the entire system.
Most of the historical archives of Unisys are at the Charles Babbage Institute at the University of Minnesota and at the Hagley Museum and Library in Delaware. The Charles Babbage Institute holds the archives from ERA, some early Remington Rand archives from Saint Paul, MN, and the Burroughs archives. The Hagley Museum and Library holds the bulk of the Sperry archives.
Operating system
An operating system is a set of programs that manage computer hardware resources and provide common services for application software. The operating system is the most important type of system software in a computer system...
currently used for the Unisys
Unisys
Unisys Corporation , headquartered in Blue Bell, Pennsylvania, United States, and incorporated in Delaware, is a long established business whose core products now involves computing and networking.-History:...
ClearPath Dorado family of mainframe systems. OS 2200 is a lineal descendant of Exec 8 for the UNIVAC
UNIVAC
UNIVAC is the name of a business unit and division of the Remington Rand company formed by the 1950 purchase of the Eckert-Mauchly Computer Corporation, founded four years earlier by ENIAC inventors J. Presper Eckert and John Mauchly, and the associated line of computers which continues to this day...
1108. The name Exec 8 was shorthand for “Executive System for the UNIVAC 1108.” The UNIVAC 1108 computer was announced in 1964 and delivered in late 1965. The first 1108 computers used Exec I and Exec II which had been developed for the UNIVAC 1107
UNIVAC 1107
The UNIVAC 1107 was the first member of Sperry Rand's UNIVAC 1100 series of computers, introduced in October 1962. It was also known as the Thin Film Computer because of its use of thin film memory for its register storage...
. However, UNIVAC
UNIVAC
UNIVAC is the name of a business unit and division of the Remington Rand company formed by the 1950 purchase of the Eckert-Mauchly Computer Corporation, founded four years earlier by ENIAC inventors J. Presper Eckert and John Mauchly, and the associated line of computers which continues to this day...
planned to offer symmetric multiprocessor
Symmetric multiprocessing
In computing, symmetric multiprocessing involves a multiprocessor computer hardware architecture where two or more identical processors are connected to a single shared main memory and are controlled by a single OS instance. Most common multiprocessor systems today use an SMP architecture...
versions of the 1108 with up to 4 processors and the earlier operating systems (really basic monitor programs) weren’t designed for that. The specifications for Exec 8 were drawn up by December 1964 as a preliminary Programmers Reference Manual (user guide) and work began in May 1965.,
Documentation and other information on current and past Unisys systems can be found on the Unisys public support website and the Unisys eCommunity
History
There were earlier 1100 systems going back to the 1101UNIVAC 1101
The UNIVAC 1101, or ERA 1101, was a computer system designed by Engineering Research Associates and built by the Remington Rand corporation in the 1950s. It was the first stored program computer in the U.S. that was moved from its site of manufacture and successfully installed at a distant site...
in 1951, but the 1108 marked the introduction of the first 1100 Series
UNIVAC 1100/2200 series
The UNIVAC 1100/2200 series is a series of compatible 36-bit computer systems, beginning with the UNIVAC 1107 in 1962, initially made by Sperry Rand...
computer designed for a modern operating system. Previous systems were all real-mode systems with no hardware support for protection and separation of programs and operating system. While there was support for multi-programming in previous software, it was limited to running one user job and multiple, known to be well-behaved, supporting functions such as card reader, printer, and card punch spoolers.
Along with the new hardware system came the introduction of an operating system known as Exec 8 (the Executive System for the UNIVAC 1108). Previous system software instantiations (Exec I, Exec II) are more properly known as monitor systems even though they did support limited multi-programming. Exec 8 was designed from the very beginning to be a multi-programming, multi-processing operating system since the 1108 was designed to have up to 4 CPUs. Memory size and cost and mass storage size and cost were the primary constraints. The predecessor systems were also heavily used for scientific computing, process control, message switching, and other real time functions. While the 1100 Series was envisioned as targeting a more general market, extreme real time processing was a primary requirement.
When the UNIVAC 1110
UNIVAC 1110
The UNIVAC 1110 was the fourth member of Sperry Rand's UNIVAC 1100 series of computers, introduced in 1972.The UNIVAC 1110 had enhanced multiprocessing support: sixteen-way memory access allowed up to six CAUs and four IOAUs The UNIVAC 1110 was the fourth member of Sperry Rand's UNIVAC 1100 series...
was introduced in 1972, the operating system name was changed to OS 1100 to reflect its support for the wider range of systems. The name OS 1100 was retained until 1988 with the introduction of the Sperry 2200 Series
UNIVAC 1100/2200 series
The UNIVAC 1100/2200 series is a series of compatible 36-bit computer systems, beginning with the UNIVAC 1107 in 1962, initially made by Sperry Rand...
as a follow on to the 1100 Series when its name was changed to OS 2200. Since that time, the 2200 Series became the Unisys ClearPath IX Series and then the Unisys ClearPath Dorado Series, but the operating system retained the OS 2200 name.
The company name and its product names also changed over time. Engineering Research Associates
Engineering Research Associates
Engineering Research Associates, commonly known as ERA, was a pioneering computer firm from the 1950s. They became famous for their numerical computers, but as the market expanded they became better known for their drum memory systems. They were eventually purchased by Remington Rand and merged...
(ERA) of Saint Paul was acquired by Remington Rand Corporation
Remington Rand
Remington Rand was an early American business machines manufacturer, best known originally as a typewriter manufacturer and in a later incarnation as the manufacturer of the UNIVAC line of mainframe computers but with antecedents in Remington Arms in the early nineteenth century. For a time, the...
. Remington Rand also acquired the Eckert–Mauchly Computer Corporation of Philadelphia which was then building the UNIVAC
UNIVAC I
The UNIVAC I was the first commercial computer produced in the United States. It was designed principally by J. Presper Eckert and John Mauchly, the inventors of the ENIAC...
computer. The two were combined into the UNIVAC division of Remington Rand under the direction of William Norris. William Norris
William Norris
William Charles Norris was the pioneering CEO of Control Data Corporation, at one time one of the most powerful and respected computer companies in the world...
had been one of the founders of ERA and later left Remington Rand to start Control Data Corporation
Control Data Corporation
Control Data Corporation was a supercomputer firm. For most of the 1960s, it built the fastest computers in the world by far, only losing that crown in the 1970s after Seymour Cray left the company to found Cray Research, Inc....
. The UNIVAC division of Remington Rand Corporation became the UNIVAC division of Sperry Rand Corporation after Remington Rand merged with Sperry Corporation
Sperry Corporation
Sperry Corporation was a major American equipment and electronics company whose existence spanned more than seven decades of the twentieth century...
. In the 1970s Sperry Rand began a corporate identity program that changed its name to Sperry Corporation and all the division names to Sperry ___ so that the computer systems division became Sperry UNIVAC. Later the division names were dropped and everything simply became Sperry. The operating system is still referred to as “the Exec” by most Unisys and customer personnel.
In 1986 Burroughs and Sperry corporations merged to become Unisys (which some long time 2200 Series clients say stands for “UNIVAC Is Still Your Supplier”). The major mainframe products lines of both companies have continued in development including the MCP Operating System from Burroughs and OS 2200 from Sperry.
Design focus
See Unisys 2200 Series system architectureUnisys 2200 Series system architecture
The figure shows a high-level architecture of the of an OS 2200 system identifying major hardware and software components. The majority of the Unisys software is included in the subsystems and applications area of the model...
for a description of the machine architecture and its relationship to the OS 2200 operating system.
Exec 8 was designed to run on systems that often only had 128K words (~512K bytes) and was focused on real time batch processing. While the earliest release levels did work in the 128KW (usually called 131K by UNIVAC), increasing function in subsequent releases made that untenable as it didn't leave enough space for programs of any useful size. The maximum memory capacity of an 1108 was 256KW (called 262KW or about 1MB) so efficient use of memory was probably the single most important constraint. The core memory of the time was also the single most expensive part of the system. Mass storage devices weren't much better. High speed storage consisted of drum memories that held 256KW (FH-432) to 2MW (FH-1782). The highest capacity mass storage was the FASTRAND
FASTRAND
FASTRAND was a magnetic drum mass storage system built by Sperry Rand Corporation for their UNIVAC 1100 series and 490/494 series computers.A voice coil actuator moved a bar containing multiple single track recording heads, so these drums operated much like moving head disk drives with multiple...
drum which held 22MW or about 100MB.
With the memory constraints and the real time focus, the idea of having only a single copy of code loaded in memory for multiple uses was included as a requirement. Since the 1108 was a multi-processing system, the operating system was designed to be fully reentrant (thread safe in today’s terminology) and to use fine-grained locking to protect shared data during updates. The design included having compilers, utilities, and even user applications that might have multiple copies running concurrently written so that their code could be shared. This would require loading only one copy of the code in memory which saved both space and the time it took to load the code. Each use would get its own copy of the data.
A supporting idea in the hardware was that the memory was designed with two physical banks (cabinets) each with its own access paths. Thus a CPU could access both banks simultaneously. By loading the code into one bank and data into the other bank of memory, the average execution time of the instructions could be halved. So it was natural for this reason as well to separate code and data into different load entities. These were called banks (from the memory banks and because segment referred to a commonly used programming technique known as overlay segments
Overlay (programming)
In a general computing sense, overlaying means "replacement of a block of stored instructions or data with another" Overlaying is a programming method that allows programs to be larger than the computer's main memory...
).
Code that was going to be shared had to be thread safe which means it had to be read/execute only. Writing into the code stream was not allowed if it was going to be shared. While writing into the code stream was still a fairly common programming technique in 1967, 1100 Series users were encouraged not to do so simply because of the performance benefit. The security and reliability benefits were touted but not highly valued at that time by customers.
Exec 8 began as a real time
Real-time computing
In computer science, real-time computing , or reactive computing, is the study of hardware and software systems that are subject to a "real-time constraint"— e.g. operational deadlines from event to system response. Real-time programs must guarantee response within strict time constraints...
operating system with early use for message switching, process control of simulators and missile firings, and general scientific and engineering work. It was primarily a batch processing
Batch processing
Batch processing is execution of a series of programs on a computer without manual intervention.Batch jobs are set up so they can be run to completion without manual intervention, so all input data is preselected through scripts or command-line parameters...
system that allowed applications very fine control of processor priority for each thread (called activities). All processor switching was preemptive with higher priority threads gaining control of the processor currently running the lowest priority thread of any program. It was a multi-programming and multi-processing operating system with fully symmetric processor management. A test-and-set instruction built into the hardware allowed very efficient and fine-grained locking both within the OS and within multi-threaded applications.
In 1968 work began on adding time-sharing
Time-sharing
Time-sharing is the sharing of a computing resource among many users by means of multiprogramming and multi-tasking. Its introduction in the 1960s, and emergence as the prominent model of computing in the 1970s, represents a major technological shift in the history of computing.By allowing a large...
capability to Exec 8 and that was delivered with level 23 of Exec 8 in 1969. Time sharing (called Demand from “on demand”) provided identical capabilities to batch and real time processing. Everything that could be done in batch could be done in Demand mode. In Demand mode the job stream input and output were attached to a terminal handler rather than spool files. The same job control language (ECL – Executive Control Language) was used for both. A few years later some more specific time sharing capabilities were added to allow some of the ECL statements to be issued for immediate processing even while a program was running. At first these were just the statements to kill the program or redirect output to a file, but later all control statements related to files and devices were also allowed to be “immediate.”
In Exec 8 work is organized into jobs, called “runs,” which are scheduled based on their priority and need for specific resources such as tape reels or exclusive control of files. The control language ECL uses a syntax with the “@” as the control statement recognition symbol followed by the command or program name. That is then followed by a comma and option letters a space and then positional parameters that depend on the nature of the specific command. So a command to compile a C module would look like “@UC,opt source, object.” Input data for an application may be read from a file or be in line following the command that starts the application. All lines that do not have the leading “@” are presumed to be in-line data.
A transaction processing
Transaction processing
In computer science, transaction processing is information processing that is divided into individual, indivisible operations, called transactions. Each transaction must succeed or fail as a complete unit; it cannot remain in an intermediate state...
capability was developed in the late 1960s as a joint project with United Airlines and later refined in another joint project with Air Canada. This capability was fully integrated into the operating system in 1972 and became the basis of much of the future growth of the 1100 Series. Early users controlled communication lines directly from within their real time programs. Part of the development of transaction processing included a communication message system that managed the communication lines and presented messages to Exec 8 to be scheduled as transactions. This moved all the low level communication physical line management and protocols out of the applications and into the CMS 1100 application.
CMS 1100 itself ran as a real time multi-threaded program with the privilege of acquiring control of communication lines and submitting transaction messages for scheduling. This led to the notions in Exec 8 that applications of any nature needed to be carefully controlled to ensure that they could not cause integrity issues. Security was certainly a concern, but in the early days system reliability and integrity were much larger issues. The system was still primarily batch and transaction processing and there was little chance that anyone could install unauthorized code on the system. CMS 1100 later added the capability to be the interface for Demand terminals as well as transaction terminals so that terminals could be used for both and the early terminal drivers could be removed from the Exec. CMS 1100 was later replaced by CPCOMM.
As of 2010 the OS 2200 release is level 48R6 (48.370.4). The internal system levels use a three-part number such as 21.92.42 (which was the first widely used production system although earlier releases were used in production at a number of sites). The first number part indicates the major level and indicates a new version of the Exec with all previous updates integrated into a new base version. This is an infrequent process and occurs at intervals of years. The second number part indicates versions of updates to the major level and often occurs several times per week. When a decision is made to freeze the feature content and prepare for release, the third part comes into play and indicates versions of the pre-release level as fixes and minor feature updates are applied. Concurrently with preparing a level for release, updates to the “mainline” continue as engineers integrate changes in preparation for a future release. For many years the official release level was the full three-part number. Later releases were named simply 44R1, 44R2, 45R1, and so on although the three-part number is still used internally.
OS 2200 Operating System (The EXEC)
The Exec contains all the code in the system that is allowed to run at the highest privilege levels. There are no mechanisms for other code to be promoted to those privilege levels. The Exec is responsible for managing the system hardware, scheduling and managing work, and communicating with operators and administrators.
Performing Work
The Exec is at heart a real time, multi-threaded batch processing system. Everything has been built around that model. The Exec itself is largely structured as a real time program. Functions that are performed as Services in Windows or daemons in Linux and UNIX are implemented as either activities within the Exec or as batch programs that are always running in the background.
Time-sharing (known as Demand, from “on demand”) and transaction processing are implemented as special cases of batch. One result is that there are few restrictions on what a time-sharing user or transaction program can do. There are many warnings for writers of transaction programs that, for example, they will not be happy with performance if they call for a tape mount, but it is permitted.
The largest unit of work is the “Run.” This is taken from the factory “production run” terminology and generally equates to a job or session on other systems. A Run is defined by its “run stream.” A run stream is a sequence of control statements that represent the steps to be taken. They may include file handling, program execution, and branches of control. A batch Run is typically stored as a file and is scheduled by a “Start” command from within another Run or by the operator. A time-sharing Run is initiated by logging in from a time-sharing terminal and entering the @RUN command. Often the @RUN statement and the second control statement (often @ADD or a program execution) are generated automatically based on the user profile. Security authorizations are validated based on the authenticated user-id and other information supplied on the Run control statement.
Transactions are a special case. There aren’t actually any control statements, but the internal data structures of a run are created. This enables the Exec to associate the same security, accounting, debugging, etc. mechanisms with transaction programs. Generally a security profile is cached in memory at the time the transaction user is authenticated and is copied from the user’s session data to the transaction run state when the transaction is scheduled. Because each transaction instance is essentially a Run, accounting, logging, and error handling are all encapsulated by the Run mechanism.
Batch
Batch jobs (Runs) are characterized by having a runstream (job control language statements) stored in a file. A batch job always contains an @RUN statement as the first record in the file. This statement gives the run a name (runid), defines priorities, and defines the maximum number of SUPS (Standard Units of Processing) the job is expected to use. The job is started from some other job with an @START control statement or by the operator via an ST keyin. The system may be configured to automatically issue @START statements for any number of jobs when it boots; these jobs perform initialization, recovery, and background functions.
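As a hedged sketch (the file, element, and account names are hypothetical and the statement fields are illustrative), a cataloged runstream element such as PAYROLL*RUNS.NIGHTLY might contain:
@RUN NIGHTLY,ACCT,PAYROLL
@ASG,A PAYROLL*MASTER.
@XQT PAYROLL*PROGRAMS.UPDATE
@FREE PAYROLL*MASTER.
@FIN
Another run, or the operator via the equivalent ST keyin, could then schedule it with a statement along the lines of @START PAYROLL*RUNS.NIGHTLY.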
All of the fields on the @RUN statement may be overridden by corresponding fields on the @START statement. Except when the @START is executed by a privileged user, the userid and other security state are always taken from the run doing the @START.
There are two priority fields on the @RUN statement. One is used to specify the backlog priority. There are 26 backlog priority levels (A – Z). The Exec has a configured maximum number of open batch runs. When that level is reached, jobs are selected from the backlog queues in priority order. Within a priority, selection is usually FIFO. However, the Exec pre-scans the job control statements up to the first program execution looking for file names and reel numbers. If the job would immediately stall because some resources it needs are not available, it may be bypassed in order to start other jobs at the same priority level.
The second priority field defines an execution processor resource group. Processor allocation is discussed in more detail later, but higher execution group priorities typically receive more processor time.
While the OS 2200 job control language does not support full programmability, it does allow dynamic addition of sequences of control language through an @ADD control statement. The file to be added may have been created by the same job immediately before adding it. The @ADD and most other control statements may also be submitted from within a running program via an API. Additional programmability is available indirectly through the use of the Symbolic Stream Generator (SSG). SSG is a programming language for manipulating and creating text files from input parameters and system information. It is used heavily for configuration management (make) processing and other functions where text images need to be created programmatically. The resulting output can be “@ADD”ed in the same run, thus providing an indirectly programmable runstream.
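A hedged illustration of that pattern (the program and file names are hypothetical): an earlier step in the run writes control images into a temporary file, which is then pulled into the control stream:
@ASG,T ADDFILE.
@XQT MYQUAL*PROGRAMS.GENSTREAM
@ADD ADDFILE.
Here GENSTREAM stands in for any program, or an SSG invocation, that writes ECL images into ADDFILE; the @ADD then executes whatever statements were generated.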
Operator keyins are available to change both the backlog and execution priorities of runs. As all operator keyins are available by API to suitably privileged users, this can be automated or controlled by a remote administrator.
Deadline is a special case of batch. A deadline run looks just like any other batch run except that a deadline time is specified on the @RUN or @START control statement. The deadline time is used in conjunction with the maximum SUPS (time estimate) on the control statement. A deadline job runs at normal batch priorities unless or until it appears that it could miss its deadline time. Then, the greater the mismatch between the time remaining until the deadline and the remaining SUPS estimate, the higher the priority. While deadline cannot totally shut off transactions and has no effect on real time, it can effectively shut off most other processing in the system if necessary to achieve its goal.
Demand
OS 2200 time-sharing sessions are called demand runs. They use the same control language as batch runs with a few additions known as “immediate” control statements. Immediate control statements use the “@@” sentinel, which indicates that they are to be executed immediately even if a program is running. While they can be used to create or assign files, the most important ones allow a demand user to error-terminate a running program or even send it a signal.
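For illustration only (the file name is hypothetical and the option letters are placeholders; the exact statements and options are defined in the ECL reference), a demand user whose program is still running might enter:
@@ASG,A MYQUAL*REFERENCE.
@@X T
The first line assigns a file without waiting for the program to finish; the second requests termination of the running program.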
Transactions
Transactions execute as runs but without any stored or submitted control statements. Instead when a message is received from a session defined as a transaction session, it is scanned to determine the transaction queue on which it is to be placed. This is normally determined by the first characters of the message but user-written scanners may be added.
The communication manager, which is capable of handling up to 250,000 active sessions, takes incoming transaction messages and passes them to the message queuing software. It can handle an unlimited number of queued messages using the message queuing architecture. A call is made to the Transaction Interface Package (TIP) APIs in the operating system to queue the transaction on the appropriate queuing point. Each queuing point identifies the priority and concurrency level of the work and the associated transaction program to be executed.
A transaction program scheduling tree allows the client to establish relative usage for groups of transaction programs. Concurrency limits avoid one type of work dominating the system to the exclusion of other work and avoid creating an over commitment of resources. Up to 4094 nodes may be created in the tree.
- Maximum concurrency specified for each node in the tree
- Concurrency of higher node limits total concurrency of dependent nodes
- Concurrency of highest node limits system concurrency
A priority (0 to 63) and a concurrency level (1 to 512) are specified for each transaction program.
The highest priority transaction is selected for scheduling except as limited by the concurrency policies in effect for its node and higher nodes.
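As an illustration of these rules (the node names and limits are invented for the example), a parent node ONLINE with a maximum concurrency of 100 might have dependent nodes INQUIRY (limit 80) and UPDATE (limit 40): up to 80 inquiry transactions or up to 40 update transactions may run at once, but never more than 100 of the two combined, and never more than the limit set on the highest node of the tree.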
Real Time
Real time is not another type of run. Rather it is a set of priority levels which any activity may request. Real time is most typically used by long running batch programs, like the OS 2200 communications manager CPComm, but is not restricted to such.
There are 36 real time priority levels available by API for applications to use. The user and account must have the privilege to use real time priorities. It is up to the site to control how their applications use the priority levels. Real time priorities totally dominate all lower priorities so it’s quite possible for a misbehaved real time program to tie up one or more processors.
The real time priority applies to an individual activity (thread) so a program may have both real time and non-real time threads executing at the same time.
Dispatching
Once a run has been started, its rate of progress is determined by its access to the processor. The heart of the Exec is the Dispatcher, which manages all the processors.
The Exec supports up to 4095 dispatching priorities although most sites define only a small subset of those. The two highest “priorities” aren’t switchable. They are recognition of certain types of processing that must be allowed to continue on the processor on which they started until they voluntarily give up control. Interrupt lockout occurs when an interrupt comes in or in a few special cases when other Exec code prevents all interrupts (in order to change some data that an interrupt handler may also access).
Interlock is used by interrupt post processing routines that either need to run on the same physical processor or simply should not be interrupted. The Dispatcher, I/O completions, and I/O initiation are some examples. All locks used by both of these priorities are spin locks as the only way they can be set by someone else is on another processor and the design requires that they only be set for very short instruction sequences.
High Exec priority is used by the operator keyin service and some other functions that may have to run even when a real time program has control. They are expected to use only very short amounts of time. If they need more time, they should queue the work to be processed by a Low Exec activity.
Real time activities have an unlimited processor quantum and run without switching unless interrupted by a higher priority real time activity or High Exec activity. Real Time activities are given control of any available processor that is running something of lower priority. Interrupts are sent between processors when necessary to ensure immediate availability. Real time is used by customers to fly missiles, run simulators, and other functions that require immediate response. OS 2200 has long guaranteed real time events get control of a processor within 50 microseconds.
Transaction priorities may be handled in two ways as defined by the site. They may be a sort of lower priority real time in that only the priority matters and the quantum size is essentially infinite. This is appropriate for very short-lived transactions such as airline reservations; if one loops due to a programming error, the Exec will terminate it when it reaches its very small configured maximum time. The other form allows the Exec to vary the priority within a range to optimize system resource usage. The approach gives higher priority and shorter time slices to programs that are I/O limited and progressively lower priorities but longer time slices to those that are computing. The Exec dynamically adjusts these priorities based on behavior as programs often behave both ways at different times. This approach is appropriate for longer running transactions like database queries or airline fare quotes.
Batch and demand always use dynamically adjusted priorities. Programs that are I/O limited or are in a conversation with a time-sharing user get higher priorities but short time slices. More compute-oriented programs get lower priorities and longer time slices.
The Exec has two additional mechanisms for optimizing dispatching. One is affinity-based dispatching. When possible the Exec will run an activity on the same processor that it was on the last time to get the greatest advantage of residual cache contents. If that isn’t possible it tries to keep the activity on the “nearest” processor from the standpoint of cache and memory access times. The second is a “fairness” policy mechanism. The site can define the relative percentage of resources to be allocated to each of transactions, demand and batch. Within transactions and batch there are priority groupings that can further indicate what percentage of their group’s time is to be allocated to the priority. This ensures that transactions cannot so dominate the system that no batch work gets done. Within the various priority groupings it ensures that some progress can be assured for each group (unless the group percentage is zero). These “fairness” algorithms only come into play when the processors are very busy, but OS 2200 systems often run with all processors at near 100% utilized.
Metering
OS 2200 supports several models for system performance management. Customers may purchase a certain fixed performance level, and the Exec will monitor processor usage to ensure that performance does not exceed that level. Customers can also purchase additional performance either temporarily or permanently up to the full capacity of the system if their workload increases or an emergency requires it.
More recently the system has added a metered usage capability. In this mode the full power of the system is always available to the customer (although they may administratively limit that). The usage is accumulated over a month and then the reported usage is submitted to Unisys billing. Depending on the specific contract terms the client may receive a bill for excess usage above some contracted baseline for the month or just a statement showing that the total contracted usage has been decremented. The first form is like a cell phone bill with the potential for charging for excess minutes. The latter is like buying a pre-paid phone card.
The File System
OS 2200 does not have a hierarchical file system as most other operating systems do. Rather, it has a structured naming convention and the notion of container files called program files.
Files in OS 2200 are simply containers that may be addressed either by word offset in the file or by sector (28-word unit) offset in the file. The 28-word sector is a historical unit from an early mass storage device (the FASTRAND drum), which held 64 such units per physical track. Nonetheless, it is a fortunate historical accident. Four such 28-word units, or 112 words, occupy 504 bytes. With today’s mass storage devices all using 512-byte physical records, OS 2200 clients have almost all adopted some multiple of 112 words as their physical record size and database page size. I/O processors automatically adjust for the 504-to-512-byte mapping, adding 8 bytes of zeros on writes and stripping them off on reads of each physical record. OS 2200 handles applications that use sizes other than multiples of 112 words by indivisibly reading the containing physical records and writing back out the unchanged and changed portions with data chaining. Special locking functions guarantee indivisibility even when there are device errors and across multiple systems in a cluster.
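The arithmetic behind the mapping: a 28-word sector of 36-bit words is 1,008 bits, or 126 bytes, so four sectors (112 words) come to exactly 504 bytes; each 512-byte device record therefore carries one 504-byte unit plus 8 bytes of zero padding.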
File formats and other internal data structures are described in the Data Structures Programming Reference Manual.
File Names
File names take the form Qualifier*Filename(F-cycle) (e.g., "PERSONNEL*EMPLOYEES(+1)"). Qualifier and filename are simply twelve-character strings used to create whatever naming structure the client desires. F-cycle is a number from 0 to 999 that allows multiple generations of a file. These may be referenced by relative numbers: (+1) is the next or new cycle, (-1) the previous cycle, and (+0) the current cycle. Leaving the cycle off defaults to the current cycle. Batch production runs that create new generations of files use this approach. The numbers wrap around after 999. Only 32 consecutive cycle numbers may exist at one time; creating a (+1) deletes the (-31) cycle.
Any file may be used as a program file. A program file contains elements, which generally act as files. Element naming is Q*F(c).Element/version(e-cycle) (e.g., "PERSONNEL*PROGRAMS.TAXCALC/2008"). Element and version are twelve-character names used in any way a user desires. E-cycle is similar to F-cycle in that it represents a generation number, but without the restriction to 32 concurrent cycles (the limit is 256K cycles). However, E-cycle applies only to text elements, and each line in a text element is marked with the cycle numbers at which it was inserted and deleted. Elements also have a type and sub-type. The most commonly used types are “text” and “object.” The default name of an object element is the same as that of the text element from which it was created. Generally, which one is meant is obvious from context; where it is not, options select the appropriate type. Text elements also have sub-types that typically represent the programming language (e.g., “ASM”, “C”, “COB”, “FOR”).
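A hedged sketch of how the F-cycle notation is typically used in a production runstream (the file name comes from the example above; the option letters are illustrative):
@CAT,P PERSONNEL*EMPLOYEES(+1).
@ASG,A PERSONNEL*EMPLOYEES(-1).
The first statement catalogs a new generation of the file; the second assigns the previous generation, for example as input to the job that builds the new one.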
An object element may be executed if it is a main program or linked with other object elements including a main program. The linking may be static or dynamic. A main program may be executed without pre-linking provided all required sub-programs are in the same program file, are system libraries, or are otherwise known. Rules may be included in a program file to direct the dynamic linker’s search for unfulfilled references. The linker may also be used to statically link multiple object modules together to form a new object module containing all instructions, data, and other information in the original object modules.
Omnibus elements may be used as data by applications or may serve to hold structured information for applications and system utilities. There is no assumed structure to an omnibus element.
For compatibility with earlier (basic mode) programming models, there are relocatable and absolute element types. Relocatable elements are the output of basic mode compilers. They may be combined by the basic mode static linker (@MAP – the collector) to form an “absolute” element which is executable.
File Management
OS 2200 implements a fully virtual file system. Files may be allocated anywhere across any and all mass storage devices. Mass storage is treated as a large space pool similar to the way virtual memory is managed. While contiguous space is allocated if possible, mass storage is treated as a set of pages of 8KB size and a file can be placed in as many areas of the same or different devices as is required. Dynamic expansion of files attempts to allocate space adjacent to the previous allocation, but will again find space wherever it is available. In fact, files need not even be present on mass storage to be used. The Exec and the file backup system are fully integrated. When file backups are made, the tape reel number(s) are recorded in the file directory. If space gets short on mass storage, some files are simply marked as “unloaded” if they have a current backup copy. If enough space can’t be found that way, a backup is started. Any reference to an unloaded file is queued while the file is staged back to mass storage. The whole system is automatic and generally transparent to users.
Access Methods
In general, the Exec does not provide access methods. Files are simply containers. Access methods are provided by the language run time systems and the database manager. The one exception is a fixed-block access method provided for high-volume transaction processing. It has much less overhead than the database manager, but does participate in all locking, clustering, and recovery mechanisms.
Removable Packs
When clients do want more explicit control over the location of files, they can use the “removable pack” concept. At one time these truly represented physically removable disk packs, and the operating system would automatically generate pack mount requests to operators as needed. Today they are still used to place files, usually database files or transaction files, on one or more disk volumes. Files may still span multiple disk volumes; however, now the list of volume names is given when the file is created. Files that are on such volume groups are still backed up but are not subject to automatic virtual space management.
CIFS
OS 2200 also provides a full implementation of the Common Internet File System (CIFS). CIFS implements the SMB protocol used by Microsoft servers and by the UNIX/Linux Samba software. With CIFS, OS 2200 appears as both a file server and a file client to other CIFS-compliant systems, including desktop PCs running Windows.
In order to maintain OS 2200 security, two levels of protection are provided. First, OS 2200 files are not visible to the network until they have been declared as “shares” with a CIFS command. A specific privilege exists to control who may declare a share. The second level of control is that all access is still protected by OS 2200 security. Clients accessing OS 2200 via CIFS will either be automatically identified via NTLM or Kerberos, or they will be presented with a query for their OS 2200 user id and password.
CIFS allows OS 2200 files to be presented in a hierarchical view. Typically the qualifier will appear as the highest level in the tree followed by filename, element name, and version. In addition, files may be stored on OS 2200 servers using the full Windows filename format. Windows applications will see OS 2200 as another file server.
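As a purely illustrative example (the server name is hypothetical, and the exact path rendering depends on the share definition and CIFS configuration), an element such as PERSONNEL*PROGRAMS.TAXCALC/2008 might appear to a Windows client as a path along the lines of \\dorado1\PERSONNEL\PROGRAMS\TAXCALC\2008.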
OS 2200 applications have APIs available to read and write files existing on other CIFS-compliant servers, such as Windows file servers, in the network. Text files are automatically converted to and from OS 2200 internal formats. Binary files must be understood by the application program.
Subsystems
The concepts of subsystems and protected subsystems are central to the design of OS 2200. A subsystem is most analogous to a .dll in Windows. It is code and data that may be shared among all programs running in the system. In OS 2200, each subsystem has its own set of banks that reside in a separate part of the address space that cannot be directly accessed by any user program. Instead, the hardware and the OS provide a "gate" that may be the target of a Call instruction. See Unisys 2200 Series system architecture for more information.
The database managers, run time libraries, messaging system, and many other system functions are implemented as subsystems. Some subsystems, usually consisting of pure code, such as the run time libraries may be the direct target of a Call instruction without requiring a gate. These subsystems run in the user program's protection environment. Other subsystems, such as the database managers, consist of code and data or privileged code and may only be called via a gate. These subsystems may also have access control lists associated with them to control who may call them. More importantly, the gate controls the specific entry points that are visible, the protection environment in which the subsystem will run, and often a user-specific parameter that provides additional secure information about the caller.
B1 Security
OS 2200 security is an implementation of the DoD Orange Book (Trusted Computer System Evaluation Criteria) B1 level specification. OS 2200 first obtained a successful B1 evaluation in September 1989. That evaluation was maintained until 1994. After that point, OS 2200 developers continued to follow the development and documentation practices required by the B1 evaluation.
Central to a B1 system are the concepts of users and objects. Users have identities, clearance levels, compartments and privileges. Objects require certain combinations of those for various types of access. Objects in OS 2200 consist of files, protected subsystems, devices, and tape reels.
The security profile of a user session includes the user identity, clearance level (0-63), compartment set, and set of allowed privileges. OS 2200 implements both Mandatory Access Control (MAC) and Discretionary Access Control (DAC) based on the Bell-La Padula model for confidentiality (no read up, no write down) and the Biba integrity model (no read down, no write up). For a run to read or execute a file, the run’s executing clearance level must be greater than or equal to the clearance level of the file, and the file’s clearance level must be 0 or within the clearance level range of the run; in addition, the run's executing compartment set must contain the file's compartment set. Because OS 2200 combines the Bell-La Padula and Biba model requirements, a run's executing clearance level and compartment set must exactly match those of a file to permit writing to the file or deleting it.
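For example (the levels and compartments are chosen arbitrarily, and level 3 is assumed to be within the run’s permitted clearance range), a run executing at clearance level 5 with compartment set {PAYROLL, HR} could read a file at level 3 with compartment set {PAYROLL}, but could write to or delete only files whose clearance level is exactly 5 and whose compartment set is exactly {PAYROLL, HR}.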
DAC associates an access control list with an object; the list identifies users and user groups that have access and defines the type of access that user or group is allowed (read, write, execute, or delete).
OS 2200 provides a fine-grained security mechanism based on the principle of least privilege. This principle demands that only the minimum privilege necessary to perform the required task be granted. Thus, OS 2200 has no “Super User” or “Administrator” concept. Rather, it uses a large set of specific privileges which may be granted separately. Each privilege is associated with a specific authority.
The security officer or the user who creates an object may create an access control list for that object. The default is that the object is private to the creating user.
Because the full set of B1 controls is too restrictive for some environments, system administrators can configure servers by choosing which controls to apply. A set of security levels from Fundamental Security through Security Level 3 serves as a starting point.
Authentication
When users log on to the system, they identify themselves and optionally select the clearance level and compartment set they will use for this session. OS 2200 offers a flexible authentication system. Multiple authentication mechanisms are supported concurrently. Client- or third party-written authentication software may also be used. Standard authentication capabilities include:
- User id and password maintained in an encrypted file by OS 2200
- Authentication performed by an external system such as Microsoft Windows using its user id and password mechanism
- NTLM
- Kerberos
The last two permit the use of biometrics, smart cards, and any other authentication mechanism supported by those technologies.
Encryption
OS 2200 provides encryption for data at rest through Cipher API, a software subsystem that encrypts and decrypts caller data using FIPS-certified implementations of the AES and 3DES algorithms. Cipher API also supports the use of a hardware accelerator card for bulk data encryption.
CPCOMM provides SSL/TLS encryption for data in transit.
OS 2200 also supports encrypting tape drives, which provide encryption for archive data.
Clustering
OS 2200 systems may be clustered to achieve greater performance and availability than a single system. Up to 4 systems may be combined into a cluster sharing databases and files via shared disks. A hardware device, the XPC-L, provides coordination among the systems by providing a high-speed lock manager for database and file access.
A clustered environment allows each system to have its own local files, databases, and application groups along with shared files and one or more shared application groups. Local files and databases are accessed only by a single system. Shared files and databases must be on disks that are simultaneously accessible from all systems in the cluster.
The XPC-L provides a communication path among the systems for coordination of actions. It also provides a very fast lock engine. Connection to the XPC-L is via a special I/O processor that operates with extremely low latencies. The lock manager in the XPC-L provides all the functions required for both file and database locks. This includes deadlock detection and the ability to free up locks of failed applications.
The XPC-L is implemented with two physical servers to create a fully redundant configuration. Maintenance, including loading new versions of the XPC-L firmware, may be performed on one of the servers while the other continues to run. Failures, including physical damage to one server, do not stop the cluster, as all information is kept in both servers.
Operations
OS 2200 operations is built around active operators and one or more consoles. Each console is a terminal window, part of which is reserved for a fixed display that is frequently updated with summary information about activity in the system. The rest of the console is used as a scrolling display of events. When a message is issued that requires an operator response, it is given a number from 0 to 9 and remains on the display until it is answered. Tape mount messages do scroll with other messages but are repeated every two minutes until the tape is mounted.
Operations Sentinel is used for all OS 2200 operations. OS 2200 consoles are simply windows within an Operations Sentinel display. There may be as many display PCs as desired. Remote operation is typical. Operations Sentinel supports any number of ClearPath, Windows, Linux, and UNIX systems.
An auto-action message database is released with the product. This database allows Operations Sentinel to recognize messages. Scripts may be written to automatically respond to messages that require a response, hide unwanted messages, translate them to other languages, create events, etc. Full dark room operation is used by some clients. At most they will have Operations Sentinel displays at remote locations monitoring the system and creating alerts when certain events occur.
Administration
Administration of OS 2200 systems is performed using a wide variety of tools, each specialized to a particular area of the system. For example, there is a tool for administering the transaction environment that allows new transaction programs to be installed, specifies all the necessary information about them, changes the queuing structure, priorities, concurrency levels, and so on. Other tools are specific to the security officer and allow creation of users, changing of allowed privileges, changing of system security settings, etc.
Most of the tools have a GUI interface, although some do not. All provide a batch stored-file interface where all actions are specified in the control stream. This allows any or all of the administrative interfaces to be scripted, either from local sites (perhaps triggered by time of day or other events) or from remote sites. Unique privileges are required for each administrative area.
Application Groups
Application groups are a logical construct consisting of an instance of the Universal Data System (UDS), an instance of the message queue subsystem, and some set of transactions. Each application group has its own audit trail. OS 2200 supports a maximum of 15 application groups in a system. The notion of an application group corresponds to what is often called “an application,” that is, a set of programs and data that represent some larger unit of connected processing. For example, an application group might represent an airline system. Another application group might represent the corporate finance system. Or, application groups might represent instances of the same application and data models, as in bank branches. The important thing is that each application group has its own environment, sessions, recovery, etc.
Application groups may be started, stopped, and recovered independently.
Application groups do not have their own accounting and scheduling rules. Transactions in multiple application groups may share the same priorities and have interleaved priorities. This permits the site to control the relative priorities of transactions across the entire system.
See also
- Unisys 2200 Series system architecture for information about the hardware architecture.
- Unisys OS 2200 programming languages for information about the OS 2200 compilers and linkers.
- Unisys OS 2200 databases for information about the OS 2200 databases and database manager.
- Unisys OS 2200 communications for information about OS 2200 communications.
- Unisys OS 2200 distributed processing for information about OS 2200 distributed processing.
Other Locations of Source Material
The Unisys History Newsletter contains articles about Unisys history and computers. In addition to all of the Unisys History Newsletters, there are links to other sites. Most of the historical archives of Unisys are at the Charles Babbage Institute at the University of Minnesota and at the Hagley Museum and Library in Delaware. The Charles Babbage Institute holds the archives from ERA, some early Remington Rand archives from Saint Paul, MN, and the Burroughs archives. The Hagley Museum and Library holds the bulk of the Sperry archives.