ChorusOS is a microkernel real-time operating system designed around a message-based computational model. ChorusOS started as the Chorus distributed real-time operating system research project at the Institut National de Recherche en Informatique et en Automatique (INRIA) in France in 1979. During the 1980s, Chorus was one of the two earliest microkernels (the other being Mach) and was developed commercially by Chorus Systèmes. Over time, the development effort shifted away from the distribution aspects toward real-time support for embedded systems.
3. Content
Monolithic vs. Micro Kernel
Introduction to CHORUS
History
Versions of Chorus
System Architecture
4. In a monolithic kernel, all the parts of the kernel (the scheduler, file system, memory management, networking stack, device drivers, etc.) are maintained in one unit within the kernel.
Advantages
• Faster processing
Disadvantages
• Not crash-resistant • Inflexible to port • Kernel size explosion
Examples • MS-DOS, Unix, Linux
Monolithic Vs Micro Kernel
11/13/2018 Atri Saxena-17203103 4
5. Only the most essential parts, such as IPC (inter-process communication), the basic scheduler, basic memory handling, and basic I/O primitives, are put into the kernel. Communication happens via message passing. Everything else is maintained as server processes in user space.
Advantages
• Crash-resistant, portable, smaller size
Disadvantages
• Slower processing due to the additional message passing
Examples • Mach, Chorus, Amoeba
Monolithic Vs Micro Kernel(Cont…)
6. Introduction to CHORUS
Chorus is a microkernel-based real-time distributed operating system.
It is designed around a message-based computational model.
GOALS:
Provide UNIX compatibility.
Use on distributed systems.
Support real-time applications.
Integrate object-oriented programming.
USED IN: public switches, cellular base stations, web phones, cellular telephones, etc.
7. HISTORY
Started as a research project at INRIA in France in 1979.
In the 1980s, Chorus was one of the two earliest microkernels (the other being Mach) and was developed commercially by Chorus Systèmes.
Sun Microsystems acquired Chorus Systèmes in 1997.
8. Versions of Chorus
Chorus V0 (1979-1982)
Actor concept: an alternating sequence of indivisible execution and communication phases.
A distributed application is modeled as actors communicating by messages through ports or groups of ports.
A nucleus runs on each site.
Chorus V1 (1982-1984)
Multiprocessor configurations.
Structured messages, activity messages.
Chorus V2 (1984-1986), V3 (1987) (C++ implementation)
UNIX subsystem (distant fork, distributed signals, distributed files).
9. Design Goals & Main Features
UNIX emulation & enhancements
Open system architecture
Efficient & flexible communication
Transparency
Flexible memory management
Support for real-time applications
Object-oriented programming interface
13. Process
Definition: A process in Chorus is a collection of active and passive elements (threads and an address space) that work together to perform some computation.
A process with ONE thread is like a traditional UNIX process.
A process with NO threads:
• cannot do anything useful
• exists only for a short interval
Every process has a protection identifier:
• provides a mechanism used for authentication
• when a process forks, this identifier is inherited
14. Three kinds of processes
Type      Trust       Privilege      Mode     Space
User      Untrusted   Unprivileged   User     User
System    Trusted     Unprivileged   User     User
Kernel    Trusted     Privileged     Kernel   Kernel
■ The three kinds of processes differ in trust and privilege:
Privilege = the ability to execute I/O and protected instructions; Trust = the ability to call the kernel directly.
15. • Kernel processes
• Are the most powerful
• Run in kernel mode
• Share the same address space with each other and with the microkernel
• Can be loaded and unloaded during execution
• Can communicate with each other using a special lightweight RPC
• System processes
• Run in their own address spaces
• Are unprivileged: cannot execute I/O or other protected instructions directly
• But can obtain kernel services directly
• User processes
• Untrusted and unprivileged
• Cannot perform I/O directly
• Cannot call the kernel directly
• Each process has two parts
• A regular user part
• A system part
16. Threads
Every active process has one or more threads.
Every thread has its own private context (i.e., stack, program counter, and registers).
A thread is tied to the process in which it was created and cannot be moved to another process.
Chorus threads are known to the kernel and scheduled by the kernel, so creating and destroying them requires making kernel calls.
Advantages of kernel threads:
• when one thread blocks waiting for some event (e.g., a message arrival), the kernel can schedule other threads;
• different threads can run on different CPUs when a multiprocessor is available.
The disadvantage of kernel threads is the extra overhead required to manage them.
Threads communicate with one another by sending and receiving messages.
17. Thread State
1. ACTIVE – The thread is logically able to run.
2. SUSPENDED – The thread has been intentionally
suspended.
3. STOPPED – The thread’s process has been suspended.
4. WAITING – The thread is waiting for some event to
happen.
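The four states above can be sketched as a small state machine. The state names come from the slide; the transition rules below are an illustrative assumption, not the kernel's actual implementation:

```python
from enum import Enum, auto

class ThreadState(Enum):
    ACTIVE = auto()     # logically able to run
    SUSPENDED = auto()  # intentionally suspended
    STOPPED = auto()    # the thread's process has been suspended
    WAITING = auto()    # waiting for some event to happen

class ChorusThread:
    """Toy model of the thread-state transitions described above."""
    def __init__(self):
        self.state = ThreadState.ACTIVE

    def suspend(self):
        self.state = ThreadState.SUSPENDED

    def resume(self):
        self.state = ThreadState.ACTIVE

    def wait_for_event(self):
        self.state = ThreadState.WAITING

    def event_arrived(self):
        # Only a WAITING thread becomes runnable when its event occurs.
        if self.state is ThreadState.WAITING:
            self.state = ThreadState.ACTIVE

t = ChorusThread()
t.wait_for_event()
assert t.state is ThreadState.WAITING
t.event_arrived()
assert t.state is ThreadState.ACTIVE
```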
18. Two synchronization mechanisms Chorus
Provide
• Traditional counting semaphore, with operations UP and DOWN.
• Operations always implemented by kernal calls
• Expensive
• Mutex, which is essentially a semaphore whose values are restricted to 0 and 1.
• Mutexes are used only for mutual exclusion.
19. Scheduling
CPU scheduling is done using priorities on a per-thread basis.
Each process has a priority and each thread has a relative priority within its
process.
The absolute priority of a thread = its process’ priority + its own
relative priority.
The kernel keeps track of the priority of each thread in the ACTIVE state and runs the one with the highest absolute priority.
On a multiprocessor with k CPUs, the k highest-priority threads are run.
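The scheduling rule above can be illustrated with a toy scheduler. The thread records and priority numbers are invented for illustration; following the formula on this slide, a larger sum is treated as a higher priority:

```python
def absolute_priority(process_priority, relative_priority):
    # Absolute priority = process priority + the thread's relative priority
    return process_priority + relative_priority

def pick_threads(threads, k=1):
    """Return the k ACTIVE threads with the highest absolute priority
    (on a multiprocessor with k CPUs, the k highest-priority threads run)."""
    active = [t for t in threads if t["state"] == "ACTIVE"]
    ranked = sorted(active,
                    key=lambda t: absolute_priority(t["proc_prio"], t["rel_prio"]),
                    reverse=True)
    return [t["name"] for t in ranked[:k]]

threads = [
    {"name": "A", "proc_prio": 10, "rel_prio": 3, "state": "ACTIVE"},   # 13
    {"name": "B", "proc_prio": 10, "rel_prio": 7, "state": "ACTIVE"},   # 17
    {"name": "C", "proc_prio": 20, "rel_prio": 1, "state": "WAITING"},  # not runnable
    {"name": "D", "proc_prio": 15, "rel_prio": 4, "state": "ACTIVE"},   # 19
]

assert pick_threads(threads, k=1) == ["D"]
assert pick_threads(threads, k=2) == ["D", "B"]   # two CPUs: top two ACTIVE threads
```

Note that C is ignored despite its high priority because only ACTIVE threads are eligible to run.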
20. Scheduling in Real-Time Systems
An additional feature has been added: a distinction is made between threads whose priority is above a certain level and those whose priority is below it.
Above the level (A and B in the figure):
• Not time-sliced.
• Such a thread continues to run until it voluntarily releases the CPU or a higher-priority thread moves to the ACTIVE state.
Below the level (C and D in the figure):
• Time-sliced.
• After consuming one quantum of time, the running thread (e.g., C) is put at the end of the queue.
22. Kernel Calls for Process Management
actorCreate    Create a new process
actorDelete    Remove a process
actorStop      Stop a process, put its threads in STOPPED state
actorStart     Restart a process from STOPPED state
actorPriority  Get or set a process' priority
actorExcept    Get or set the port used for exception handling
23. Selected Thread Calls supported by Chorus
threadCreate Create a new thread
threadDelete Delete a thread
threadSuspend Suspend a thread
threadResume Restart a suspended thread
threadPriority Get or set a thread’s priority
threadLoad Get a thread’s context pointer
threadStore Set a thread’s context pointer
threadContext Get or set a thread’s execution context
24. Selected Synchronization Calls supported by Chorus
mutexInit    Initialize a mutex
mutexGet     Try to acquire a mutex
mutexRel     Release a mutex
semInit      Initialize a semaphore
semP         Do a DOWN on a semaphore
semV         Do an UP on a semaphore
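The call names above are from the slides; the bodies below are a behavioral sketch using Python's threading primitives, not the real kernel implementation (which, as slide 18 notes, always enters the kernel and is therefore comparatively expensive):

```python
import threading

# Behavioral sketch of the Chorus synchronization calls listed above.
# semP/semV are DOWN/UP on a counting semaphore; a mutex is essentially
# a semaphore restricted to the values 0 and 1, used for mutual exclusion.

def semInit(count):
    return threading.Semaphore(count)

def semP(sem):          # DOWN: decrement, blocking at zero
    sem.acquire()

def semV(sem):          # UP: increment, possibly waking a waiter
    sem.release()

def mutexInit():
    return threading.Lock()

def mutexGet(mutex):    # try to acquire (non-blocking here, for illustration)
    return mutex.acquire(blocking=False)

def mutexRel(mutex):
    mutex.release()

sem = semInit(2)
semP(sem); semP(sem)    # counter now 0; a third semP would block
semV(sem)               # counter back to 1

m = mutexInit()
assert mutexGet(m) is True    # acquired: value went 1 -> 0
assert mutexGet(m) is False   # already held: only 0 and 1 are possible
mutexRel(m)
```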
28. Region
A region is a contiguous range of virtual addresses.
Each region is associated with some piece of data, such as a program or a file.
In systems that support virtual memory and paging, regions may be paged.
29. Segment
A segment is a linear sequence of bytes identified by a capability.
When a segment is mapped onto a region, the bytes of the segment are accessible to the threads of the region's process.
31. The virtual memory manager handles the low-level part of the paging system.
The virtual memory manager does not do all the work of managing the paging system.
A third party, the mapper, does the high-level part.
32. Mapper in Chorus
A user-level memory manager.
Each mapper controls one or more segments that are mapped onto regions.
A segment can be mapped into multiple regions, even in different address spaces, at the same time.
[Figure: segments S1 and S2 mapped into multiple address spaces at the same time]
33. Distributed Shared Memory
Chorus supports paged distributed shared memory.
It uses a dynamic decentralized algorithm, meaning that different managers keep track of different pages, and the manager for a page changes as the page moves around the system.
35.-37. [Figure: a shared global address space of pages 1-12 distributed across CPU 1, CPU 2, and CPU 3; the successive slides show pages migrating between the CPUs as they are referenced.]
38. Find the Owner
[Figure: a processor finds the owner of page P via the page manager:
1. A request is sent to the page manager.
2. The page manager forwards the request to the current owner.
3. The owner sends the reply.]
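The three-step protocol in the figure can be sketched as follows. The class and method names are illustrative, not the real Chorus interfaces; each page manager tracks the current owner of the pages assigned to it, and, since the algorithm is dynamic, ownership moves with the page:

```python
class PageManager:
    """Tracks the current owner of each page this manager is responsible for."""
    def __init__(self, owners):
        self.owners = dict(owners)   # page number -> owning CPU

    def request(self, page, requester):
        # Step 1: the requester asks the manager for the page.
        # Step 2: the manager forwards the request to the current owner.
        # Step 3: the owner replies; in this sketch the page (and its
        # ownership) then migrates to the requester.
        owner = self.owners[page]
        reply = f"page {page} from {owner}"
        self.owners[page] = requester   # the manager records the new owner
        return reply

mgr = PageManager({6: "CPU2", 8: "CPU2"})
assert mgr.request(6, "CPU3") == "page 6 from CPU2"
assert mgr.owners[6] == "CPU3"   # ownership moved with the page
```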
40. Introduction
The basic communication paradigm in Chorus is message passing.
During the Version 1 era, when the research was focused on multiprocessors, using shared memory as the communication paradigm was considered, but it was rejected as not being general enough.
We will discuss messages, ports, and the communication operations, concluding with a summary of the kernel calls available for communication.
41. Messages
Each message contains a header; the header identifies the source and destination and contains various protection identifiers and flags.
The fixed part, if present, is always 64 bytes long and is entirely under user control.
The body is variable-sized, with a maximum of 64K bytes, and is also entirely under user control.
When a message is sent to a thread on a different machine, it is always copied. However, when it is sent to a thread on the same machine, there is a choice between actually copying it and just mapping it into the receiver's address space.
In the latter case, if the receiver writes onto a mapped page, a genuine copy is made on the spot.
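A sketch of the message layout just described: a header, an optional fixed part of exactly 64 bytes, and a body of at most 64K bytes. The field names are illustrative, not the actual kernel structure:

```python
FIXED_LEN = 64          # the fixed part, if present, is always 64 bytes
MAX_BODY = 64 * 1024    # the body is limited to 64K bytes

class Message:
    """Illustrative model of a Chorus message as described on the slide."""
    def __init__(self, source, dest, fixed=None, body=b""):
        if fixed is not None and len(fixed) != FIXED_LEN:
            raise ValueError("fixed part must be exactly 64 bytes")
        if len(body) > MAX_BODY:
            raise ValueError("body exceeds 64K bytes")
        # The header identifies source and destination (the protection
        # identifiers and flags are omitted in this sketch).
        self.header = {"source": source, "dest": dest}
        self.fixed = fixed
        self.body = body

m = Message("port_a", "port_b", fixed=bytes(64), body=b"hello")
assert m.header["dest"] == "port_b"
assert len(m.fixed) == 64
```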
42. MINI MESSAGE
Another form of message is the mini message, which is used only between kernel processes for short synchronization messages, typically to signal the occurrence of an interrupt.
Mini messages are sent to special low-overhead miniports.
43. PORTS
Messages are addressed to ports, each of which contains storage for a certain number of messages.
If a message is sent to a port that is full, the sender is suspended until sufficient space is available.
When a process is created, it automatically gets a default port that the kernel uses to send it exception messages. It can also create additional ports, which can be moved to other processes, even on other machines.
When a port is moved, all the messages currently in it can be moved with it.
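A port's bounded message storage can be modeled with a blocking queue; as the slide says, a sender to a full port blocks until space is available. This is a behavioral sketch, not the kernel data structure:

```python
import queue

class Port:
    """A port stores a bounded number of messages; a sender to a full
    port blocks until sufficient space is available."""
    def __init__(self, capacity):
        self._q = queue.Queue(maxsize=capacity)

    def send(self, msg, timeout=None):
        self._q.put(msg, timeout=timeout)    # blocks while the port is full

    def receive(self, timeout=None):
        return self._q.get(timeout=timeout)  # blocks while the port is empty

p = Port(capacity=2)
p.send("m1")
p.send("m2")                 # port now full; another send would block
assert p.receive() == "m1"   # messages are delivered in arrival order
assert p.receive() == "m2"
```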
44. • Chorus provides a way to collect several ports together into a port group.
To do so, a process first creates an empty port group and gets back a capability for it. Using this capability, it can add and delete ports from the group.
• A port may be present in multiple port groups, as illustrated in the figure.
• Groups are commonly used to provide reconfigurable services.
• Clients can send messages to the group without having to know which servers are available to do the work.
45. Communication Operations
Two kinds of communication operations are provided by
Chorus:
Asynchronous Send
RPC (Remote Procedure Calls)
46. Asynchronous send
Asynchronous send allows a thread simply to send a message
to a port. There is no guarantee that the message arrives and no
notification if something goes wrong.
This is the purest form of datagram and allows users to build
arbitrary communication patterns on top of it.
47. RPC
When a process performs an RPC operation, it is blocked
automatically until either the reply comes in or the RPC
timer expires, at which time the sender is unblocked.
The message that unblocks the sender is guaranteed to be
the response to the request.
Any message that does not bear the RPC’s transaction
identifier will be stored in the port for future consumption.
RPCs use at-most-once semantics, meaning that in the event of an unrecoverable communication or processing failure, the system guarantees that an RPC will return an error code rather than executing the operation more than once.
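The blocking and reply-matching behavior described above can be sketched as follows. The transaction-identifier matching is from the slide; the function shapes and the generator standing in for the server are illustrative assumptions:

```python
import itertools

_txn_counter = itertools.count(1)   # stand-in for RPC transaction identifiers

def rpc(server, request):
    """Sketch of an RPC: block until a reply carrying this call's
    transaction identifier arrives; any other message is left in the
    port for future consumption."""
    txn = next(_txn_counter)
    port = []                           # stand-in for the caller's port
    for reply in server(request, txn):  # messages "arriving" at the port
        if reply["txn"] == txn:
            return reply["result"]      # guaranteed to be OUR response
        port.append(reply)              # not ours: store it for later
    raise TimeoutError("RPC timer expired")   # sender unblocked with an error

def server(request, txn):
    # First a stray message bearing a foreign transaction id, then the
    # genuine reply to this request.
    yield {"txn": -1, "result": "stale"}
    yield {"txn": txn, "result": request.upper()}

assert rpc(server, "date") == "DATE"
```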
48. It is also possible to send a message to a port group.
Various options are available, as shown in Fig. 9-16. These options determine how many messages are sent and to which ports.
49. Option (a) sends the message to all ports in the group. For highly reliable storage, a process might want to have every file server store certain data.
Option (b) sends it to just one port, but lets the system choose which one. When a process just wants some service, such as the current date, but does not care where it comes from, this option is the best choice.
In option (c), the caller can specify that the port must be on a specific site, for example, to balance the system load.
Option (d) says that any port not on the specified site may be used. A use for this option might be to force a backup copy of a file onto a different machine than the primary copy.
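The four addressing modes can be sketched as a selection function over a port group; the port records, mode names, and site names are illustrative, not the real kernel options:

```python
def select_ports(group, mode, site=None):
    """Pick destination ports per the options on the slide:
    (a) all ports; (b) any one port, system's choice;
    (c) any port on a given site; (d) any port NOT on the given site."""
    if mode == "all":                 # option (a): broadcast to the group
        return group
    if mode == "any":                 # option (b): let the "system" pick one
        return group[:1]
    if mode == "on_site":             # option (c): must be on a specific site
        return [p for p in group if p["site"] == site][:1]
    if mode == "not_on_site":         # option (d): anywhere but this site
        return [p for p in group if p["site"] != site][:1]
    raise ValueError(f"unknown mode: {mode}")

group = [{"name": "p1", "site": "A"}, {"name": "p2", "site": "B"}]
assert len(select_ports(group, "all")) == 2
assert select_ports(group, "on_site", site="B")[0]["name"] == "p2"
assert select_ports(group, "not_on_site", site="A")[0]["name"] == "p2"
```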
50. To receive a message, a thread makes a kernel call telling which port it wants to receive on.
If a message is available, the fixed part of the message is copied to the caller's address space, and the body, if any, is either copied or mapped in, depending on the options.
If no message is available, the calling thread is suspended until a message arrives or a user-specified timer expires.
Furthermore, a process can specify that it wants to receive from any one of the ports it owns.
Finally, ports can be assigned priorities, which means that if more than one enabled port has a message, the enabled port with the highest priority will be selected. Ports can be enabled and disabled.
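Selecting among enabled ports by priority can be sketched like this; the port representation is illustrative, and the convention that a larger number means a higher priority is an assumption of this sketch:

```python
def select_port(ports):
    """Among enabled ports that have a message waiting, pick the one with
    the highest priority (larger number = higher, in this sketch)."""
    candidates = [p for p in ports if p["enabled"] and p["messages"]]
    if not candidates:
        return None   # the caller would be suspended until a message arrives
    return max(candidates, key=lambda p: p["priority"])

ports = [
    {"name": "default", "priority": 0, "enabled": True,  "messages": ["m1"]},
    {"name": "urgent",  "priority": 9, "enabled": True,  "messages": ["m2"]},
    {"name": "audit",   "priority": 5, "enabled": False, "messages": ["m3"]},
]
assert select_port(ports)["name"] == "urgent"   # disabled ports are skipped
```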
51. Kernel Calls for Communication
PORT MANAGEMENT CALLS
The first four are straightforward, allowing ports to be
created, destroyed, enabled, and disabled.
The last one specifies a port and a process. After the call
completes, the port no longer belongs to its original owner
(which need not be the caller) but instead belongs to the
target process. It alone can now read messages from the
port.
53. PORT GROUP MANAGEMENT CALLS
The first, grpAllocate, creates a new port group and returns
a capability for
it to the caller.
Using this capability, the caller or any other process that
subsequently acquires the capability can add or delete ports
from the group.
54. CALLS FOR MANAGING SENDING AND RECEIVING OF MESSAGES
ipcSend sends a message asynchronously to a specified port or port group.
ipcReceive blocks until a message arrives from a specified port. This message may have been sent directly to the port, to a port group of which the specified port is a member, or to all enabled ports.
If no buffer is provided for the body, the ipcGetData call can be executed to acquire the body from the kernel.
Finally, ipcCall performs a remote procedure call.
57. Extension to UNIX
Provides extensions to make distributed programming easier.
(A UNIX process can create and destroy new threads using the Chorus threads package.)
Synchronous signal handlers are associated with a specific thread.
Asynchronous ones are process-wide (e.g., alarm).
Signals are handled by the process itself.
A control thread listens to the exception port.
Signal triggered -> examine internal table -> take action.
58. Extension to UNIX (cont.)
A user can create ports and port groups to send and receive messages.
A user can create regions and map segments onto them.
In general, all the facilities for process management, memory management, and interprocess communication are available to UNIX processes as well.
59. Implementation of UNIX on Chorus
Principal components:
1. Process manager
2. Object manager
3. Streams manager
4. Inter-process communication manager
60. The process manager
The central player in the emulation.
Catches all system calls and decides what to do with them.
Handles process management (creating, terminating, and naming processes).
When it cannot handle a system call itself, it does an RPC to the object manager or streams manager, or makes kernel calls (for example, to fork off a new process).
62. The object manager
Handles files, swap space, and other forms of tangible information.
May also contain the disk driver.
Has a port for receiving paging requests and a port for receiving requests from local and remote process managers.
Several object manager threads can be active at a time.
(When a request comes in, a thread is dispatched to handle it.)
63. The object manager
a. Acts as a mapper for the files it controls.
b. Accepts page-fault requests on a dedicated port.
c. Does the necessary disk I/O.
d. Sends the appropriate replies.
64. The object manager
a. It works in terms of segments named by capabilities.
b. When a UNIX process references a file descriptor,
c. its runtime system invokes the process manager.
d. The process manager maintains a table with segment capabilities,
e. which uses the file descriptor as an index into the table to locate the capability corresponding to the file's segment.
f. If the data are available, an sgRead call to the kernel gets them.
g. Otherwise, an MpPullIn upcall goes to the appropriate mapper.
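The lookup path in steps a-g can be sketched as a table lookup followed by a segment read. The call names sgRead and MpPullIn come from the slide; the data structures and the in-memory stand-in for segments are illustrative assumptions:

```python
class ProcessManager:
    """Sketch of the fd -> segment-capability lookup described above."""
    def __init__(self, fd_table, segments):
        self.fd_table = fd_table     # file descriptor -> segment capability
        self.segments = segments     # capability -> segment bytes (stand-in)

    def read(self, fd, offset, length):
        # The file descriptor indexes the table to locate the capability
        # corresponding to the file's segment.
        cap = self.fd_table[fd]
        data = self.segments.get(cap)
        if data is not None:                      # sgRead: data available
            return data[offset:offset + length]
        # Otherwise an MpPullIn upcall would go to the appropriate mapper;
        # this sketch stops there.
        raise LookupError("would upcall the mapper (MpPullIn)")

pm = ProcessManager(fd_table={3: "cap_file_a"},
                    segments={"cap_file_a": b"hello, chorus"})
assert pm.read(3, 0, 5) == b"hello"
```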
65. The streams manager
Handles all the System V streams, including the keyboard, display, mouse, and tape devices.
During system initialization:
1. It sends a message to the object manager giving its port and saying which devices it is going to handle.
Subsequent requests can then be sent to the streams manager.
It also handles Berkeley sockets (the Internet API) and networking.
This way, a process on Chorus can communicate using TCP/IP and other Internet protocols.
66. The inter-process communication manager
Handles those system calls relating to System V messages, semaphores, and shared memory.
67. Implementation of UNIX on Chorus
At system initialization time, the process manager tells the kernel that it wants to handle the trap numbers standard AT&T UNIX uses for making system calls (to achieve binary compatibility).
A UNIX process issues a system call by trapping to the kernel.
A thread in the process manager then gets control.
68. Implementation of UNIX on Chorus
Depending on the system call, the process manager may perform the requested call itself, or pass it on to the appropriate manager (disk operations, I/O streams, IPC).
69. Implementation of UNIX on Chorus
For the above example, the object manager does the disk operation and then sends a reply back to the process manager,
which sets up the proper return value and restarts the blocked UNIX process.
70. Configurability
The division of labor makes it straightforward to configure a collection of machines in different ways, for efficiency.
All machines need the process manager; the other managers are optional.
Which managers are needed depends on the applications.
71. Configurability
A) The full configuration may be used on a workstation connected to a network.
B) On a diskless workstation, the object manager is not needed.
• When a process reads or writes a file on such a machine, the process manager forwards the request to the object manager on the file server.
72. Configurability
C) For a dedicated application, it is known in advance which system calls the program may or may not make.
If no local file system and no System V IPC are needed, the object manager and IPC manager can be omitted.
D) Disconnected embedded system
• For a controller in a car or a TV set, only the process manager is needed.
73. Real-time applications
• Chorus is designed to handle real-time systems.
• Priorities range from 1 to 255 (a lower number means a higher priority).
• The UNIX subsystem runs at priorities 64 to 68.
• There is the ability to reduce the amount of time the CPU is disabled after an interrupt: when an interrupt occurs, an object manager or streams manager does the required processing immediately.
74. Real-time applications
• Doing this reduces the latency after an interrupt, but it also increases the overhead of interrupt processing, so the feature must be used carefully.
• These facilities merge well with UNIX; with the configuration changed accordingly, UNIX processes can work as real-time processes.
76. INTRODUCTION
COOL (Chorus Object-Oriented Layer) is an ongoing research project designed to explore the issues in building effective object-support mechanisms for distributed systems.
77. COOL (Chorus Object-Oriented Layer)
The second subsystem developed for Chorus.
Designed for research on object-oriented programming.
Aims to bridge coarse-grained system objects, such as files, and fine-grained language objects, such as structures (records).
79. COOL base layer
Provides a set of services; the most important service is memory allocation.
COOL generic runtime
Uses clusters and context spaces to manage objects.
The language runtime system
The language runtime maps a particular language's object model onto the generic runtime model.
80. Amoeba
A time-sharing distributed operating system.
Execution model: processor pool.
Automatic load balancing.
Automatic file replication.
81. Mach
Designed for multiprocessors.
Memory-mapped objects.
Integrated memory management.
No group communication.
84. References
George Coulouris, Jean Dollimore, Tim Kindberg (1994). Chorus (PDF).
Andrew S. Tanenbaum, Distributed Operating Systems (book).
Documentation of Chorus on Oracle.
INRIA (Institut National de Recherche en Informatique et en Automatique)
For version 3 they started Chorus Systèmes to develop and market Chorus. RPC was added in V3.
*** Binary compatibility was added so that UNIX programs can be run without recompilation.
### Performance improvements. Also borrowed many ideas from other microkernel-based OSes: IPC, virtual memory design, etc.
UNIX compatibility to run UNIX programs.
Provides a base for building new OSes and emulating existing ones, since the kernel is a microkernel.
Message passing has a reputation for being less efficient than shared memory.
NT: transparency is implemented by the use of a single global namespace; service reconfiguration transparency, which allows a service to be reconfigured dynamically without being noticed by the user, is implemented by port grouping.
Multiple user-level memory managers and paged distributed shared memory.
Nucleus: minimal management of names, processes, threads, memory, and communication.
Kernel processes: dynamically loaded and removed during system execution, so they increase functionality without increasing kernel size, e.g., interrupt handlers.
Actors are similar to normal processes in an OS. An actor owns certain resources, and when the process disappears, so do its resources.
Within a process, one or more threads can exist. Each thread contains its own stack, stack pointer, PC, and registers.
All processes contain an address space.
Messages can be passed with a fixed part, a variable part, or both.
A message is addressed to a port. Each process has its own port.
Processes and ports are named by UIs (unique identifiers).
When a new process is created at a remote site, the process manager catches the system call and does an RPC to the remote process manager; the remote process manager then asks its kernel to create the process.