●●●●●●Lamport’s Algorithm:-
Assuming the presence of the pipelining property & eventual delivery of all messages, the solution requires time stamping of all
messages & it also assumes that each process maintains a request queue, initially empty, that contains request messages
ordered by the following five rules:
     1.  Initiator: i
         When process pi desires to acquire exclusive ownership of the resource it sends the time stamped message request (ti, i),
         where ti = ci, to every other process & records the request in its own queue.
     2.  Other processes: j, j ≠ i
         When process pj receives the request (ti, i) message, it places the request on its own queue & sends a time stamped
         reply (tj, j) to process pi.
     3.  Process pi is allowed to access the resource when the following 2 conditions are satisfied:
                     a) pi's request message is at the front of its queue.
                     b) pi has received a message from every other process with a time stamp later than (ti, i).
     4.  Process pi releases the resource by removing the request from its own queue & sending a time stamped release
         message to every other process.
     5.  Upon receipt of pi's release message, process pj removes pi's request from its request queue.
Correctness of the algorithm follows from rule 3, which guarantees that the initiating process pi learns about all potentially
conflicting requests that precede it. Given that messages can't be received out of order, the happened-before ordering of events
provides a total ordering of events in the system & in the per-process request queues, & rule 3(a) permits only one process to
access the resource at a time. The solution is deadlock free due to the time stamp ordering of the requests, which precludes
formation of wait-for loops. Granting requests in the order in which they are made prevents process starvation & lockouts.
The communication cost of the algorithm is 3(N-1) messages: (N-1) request messages, (N-1) reply messages, and (N-1)
release messages. Given that requests & release notifications are effectively broadcasts, the algorithm clearly performs better
in a broadcast type network such as a bus.
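As a rough illustration (all helper names, the fixed process count & the standalone test in main are my assumptions, not from the original text), the request ordering & the rule-3 entry test can be sketched in C:

    #include <stdbool.h>
    #include <stdio.h>

    #define N 4                              /* number of processes (assumed) */

    typedef struct { int ts; int pid; } Req; /* a request (ti, i) */

    /* Total order on requests: earlier timestamp wins, pid breaks ties. */
    static bool precedes(Req a, Req b) {
        return a.ts < b.ts || (a.ts == b.ts && a.pid < b.pid);
    }

    /* Rule 3: pi may enter when its request heads the queue (3a) & a
       message with a later timestamp has arrived from every peer (3b). */
    static bool may_enter(Req mine, Req queue_head, const int last_ts[N], int me) {
        if (precedes(queue_head, mine)) return false;   /* 3(a) */
        for (int j = 0; j < N; j++)                     /* 3(b) */
            if (j != me && last_ts[j] <= mine.ts) return false;
        return true;
    }

    int main(void) {
        Req mine = {5, 1};                   /* our own request */
        int last_ts[N] = {9, 0, 7, 8};       /* timestamp of last message from each peer */
        printf("may enter: %d\n", may_enter(mine, mine, last_ts, 1));  /* prints 1 */
        return 0;
    }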
●●●●●●●Give a solution Dining Philosophers problem using monitors:-
Consider 5 philosophers who spend their lives thinking & eating. The philosophers share a common circular table
surrounded by 5 chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, & when a
philosopher thinks she does not interact with her colleagues. From time to time a philosopher gets hungry &
tries to pick up the two chopsticks nearest her. Obviously she can't pick up a chopstick that is already in the hand
of a colleague. When a hungry philosopher has both her chopsticks at the same time she eats without
releasing the chopsticks. When she has finished eating she puts down both chopsticks & starts thinking again.
The dining philosophers problem is a classic synchronization problem, important neither for its practical
significance nor because computer scientists dislike philosophers, but because it is an example of a large class
of concurrency control problems. It is a simple representation of the need to allocate several resources among
several processes in a deadlock & starvation free manner. One simple solution is to represent each chopstick by
a semaphore. A philosopher tries to grab a chopstick by executing a wait operation on that semaphore; she
releases her chopsticks by executing the signal operation on the appropriate semaphores. Thus the shared data are:
          semaphore chopstick[5];
where all the elements of chopstick are initialized to 1. The structure of philosopher i is shown in the figure below.

       do{
                wait (chopstick[i]);
                wait (chopstick[(i+1)%5]);
                -----
                eat
                -----
                signal (chopstick[i]);
                signal (chopstick[(i+1)%5]);
                -----
                think
                -----
         }while(1);

                Fig: structure of philosopher

Although this solution guarantees that no two neighbors are eating simultaneously, it nevertheless risks creating
a deadlock. Suppose that all 5 philosophers become hungry simultaneously & each grabs her left chopstick. All
the elements of chopstick will now be equal to 0. When each philosopher tries to grab her right chopstick she
will be delayed forever. Another high level synchronization construct is the monitor type. A monitor is
characterized by a set of programmer defined operators. The representation of a monitor type consists of
declarations of variables whose values define the state of an instance of the type, as well as the bodies of
procedures or functions that implement operations on the type. The syntax of a monitor is shown below.

monitor monitor_name
{
          shared variable declarations
          procedure body p1 (---)
          {
                    ---
          }
          procedure body p2 (---)
          {
                    ---
          }
          *****
          procedure body pn (---)
          {
                    ---
          }
          initialization code
}
Now we are in a position to describe our solution to the dining philosophers problem using a monitor.
DATA:
           condition can_eat[NUM_PHILS];
           enum states {THINKING, HUNGRY, EATING} state[NUM_PHILS];
           int index;
INITIALIZATION:
           for (index=0; index<NUM_PHILS; index++)
           {
                         state[index] = THINKING;
           }
MONITOR PROCEDURES:
           /* request the right to pickup chopsticks and eat */
           entry void pickup(int mynum)
           {

                          /* announce that we're hungry */
                          state[mynum] = HUNGRY;

                          /* if neighbors aren't eating, proceed */
                          if ((state[(mynum+NUM_PHILS-1) % NUM_PHILS] != EATING) &&
                                         (state[(mynum+1) % NUM_PHILS] != EATING))
                                         {
                                                       state[mynum] = EATING;
                                         }

                          /* otherwise wait for them */
                          else can_eat[mynum].wait();

                          /* ready to eat now */
                          state[mynum] = EATING;
             }
             /* announce that we're finished, give others a chance */
             entry void putdown(int mynum)
             {

                          /* announce that we're done */
                          state[mynum] = THINKING;

                          /* give left (lower) neighbor a chance to eat */
                          if ((state[(mynum+NUM_PHILS-1) % NUM_PHILS] == HUNGRY) &&
                              (state[(mynum+NUM_PHILS-2) % NUM_PHILS] != EATING))
                          {
                                          can_eat[(mynum+NUM_PHILS-1) % NUM_PHILS].signal();
                          }
                          /* give right (higher) neighbor a chance to eat */
                          if ((state[(mynum+1) % NUM_PHILS] == HUNGRY) &&
                              (state[(mynum+2) % NUM_PHILS] != EATING))
                          {
                                          can_eat[(mynum+1) % NUM_PHILS].signal();
                          }
         }
PHILOSOPHER:
         /* find out our id, then repeat forever */
         me = get_my_id();
         while (TRUE)
         {

                          /* think, wait, eat, do it all again ... */
                          think();
                          pickup(me);
                          eat();
                          putdown(me);
             }
●●●●●Explain the Real time operating System (RTOS). Give any 2 example application suitable for RTOS. Differentiate
between time sharing & RTOS
Real time operating systems are used in environments where a large number of events, mostly external to the computer system,
must be accepted & processed in a short time or within certain deadlines. Such applications include industrial control,
telephone switching & real time simulation. A real-time OS has an advanced algorithm for scheduling. Scheduler flexibility
enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a
narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency, but
a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a
given period of time. The primary issue of a real time operating system is to provide quick event response times & thus meet the
scheduling deadlines. User convenience & resource utilization are secondary concerns to RTOS designers. It is not
uncommon for a real time system to be expected to process bursts of thousands of interrupts per second without missing a
single event. Such requirements usually can't be met by multiprogramming alone, & real time operating systems usually rely on
some specific policies & techniques for doing their job.
Explicit programmer defined & controlled processes are commonly encountered in real time operating systems. Basically a
separate process is charged with handling a single external event. The process is activated upon occurrence of the related
event, which is often signaled by an interrupt. Multitasking operation is accomplished by scheduling processes for execution
independently of each other. Each process is assigned a certain level of priority that corresponds to the relative importance
of the event that it services. The processor is normally allocated to the highest priority process among those that are ready to
execute. Higher priority processes usually preempt execution of lower priority processes. This form of scheduling, known
as priority based preemptive scheduling, is used by a majority of real time systems.
Differences between time sharing & RTOS: time sharing is a popular representation of multi-programmed, multi-user systems,
whereas RTOS are used in environments where a large number of events, mostly external to the computer system, must be
accepted & processed in a short time or within certain deadlines. The primary objective of a time sharing system is good
terminal response time, whereas the primary objective of an RTOS is to provide quick event response time & thus meet the
scheduling deadline.

●●●●●●Explain the windows 2000 operating system architecture
[Figure: Windows 2000 architecture diagram]
The Windows 2000® Architecture Roadmap provides a global view of the operating system architecture, its
main components, and mechanisms. It also provides "logical navigation" to other locations for more in depth
discussions.
The goal is to help the user go from general to specific information, in a way that is logical and based on the
system structure itself, so that the reader becomes familiar with the operating system's main concepts and
components. Novice and experienced users alike should benefit from this comprehensive operating system
description, its numerous diagrams, and examples. Refer to the site organization for background information
and for navigation suggestions.
Windows 2000 Overview
The Windows 2000 operating system constitutes the environment in which applications run. It provides the
means to access processor(s) and all other hardware resources. Also, it allows the applications and its own
components to communicate with each other.
Windows 2000 has been built combining the following models:
•   Layered Model. The operating system code is grouped in modules layered on top of each other. Each
    module provides a set of functions used by modules of higher levels. This model is applied mainly to the
    operating system executive.
•   Client/Server Model. The operating system is divided into several processes, called servers, each
    implementing a set of specific services. Each of these servers runs in user mode, waiting for client
    requests.
User-Mode
The software in user mode runs in a non-privileged state with limited access to system resources. Windows
2000 applications and protected subsystems run in user-mode. The protected subsystems run in their own
protected space and do not interfere with each other. They are divided into the following two groups:
•   Environment subsystems. Services that provide Application Programming Interfaces (APIs) specific to an
    operating system.
•   Integral subsystems. Services that provide interfaces with important operating system functions such as
    security and network services.
Kernel-Mode versus User-Mode
Windows 2000 divides the executing code into the following two areas or modes.
Kernel-Mode
In the privileged kernel mode, the software can access all the system resources such as computer hardware,
and sensitive system data. The kernel-mode software constitutes the core of the operating system and can be
grouped as follows:
•   System Components. Responsible for providing system services to environment subsystems and other
    executive components. They perform system tasks such as input/output (I/O), file management, virtual
    memory management, resource management, and interprocess communication.
•   Kernel. The executive core component. It performs crucial functions such as scheduling, interrupt and
    exception dispatching, and multiprocessor synchronization.
•   Hardware Abstraction Layer (HAL). Isolates the rest of the Windows NT executive from the specific hardware,
    making the operating system compatible with multiple processor platforms.
For more information, refer to:
•   Windows 2000 basic techniques. Standard operating system techniques used by Windows 2000.
•   User mode components. They execute in their own protected address space and have limited access to
    system resources.
•   Kernel mode components. Performance-sensitive operating system components.

●●●●●Cache Memory
Cache memory is extremely fast memory that is built into a computer's central processing unit (CPU), or
located next to it on a separate chip. The CPU uses cache memory to store instructions that are repeatedly
required to run programs, improving overall system speed. The advantage of cache memory is that the CPU
does not have to use the motherboard's system bus for data transfer. Whenever data must be passed through
the system bus, the data transfer speed slows to the motherboard's capability. The CPU can process data much
faster by avoiding the bottleneck created by the system bus.
As it happens, once most programs are open and running, they use very few resources. When these resources
are kept in cache, programs can operate more quickly and efficiently. All else being equal, cache is so effective
in system performance that a computer running a fast CPU with little cache can have lower benchmarks than a
system running a somewhat slower CPU with more cache. Cache built into the CPU itself is referred to as Level
1 (L1) cache. Cache that resides on a separate chip next to the CPU is called Level 2 (L2) cache. Some CPUs
have both L1 and L2 cache built-in and designate the separate cache chip as Level 3 (L3) cache.
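The effect is easy to observe from ordinary code: traversing a matrix along cached lines is much faster than repeatedly missing. A small illustrative C sketch (the array size & the clock()-based timing are my arbitrary choices, not from the original text):

    #include <stdio.h>
    #include <time.h>

    #define N 2048
    static int a[N][N];

    int main(void) {
        long sum = 0;
        clock_t t0 = clock();
        for (int i = 0; i < N; i++)        /* row-major: reuses cached lines */
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        clock_t t1 = clock();
        for (int j = 0; j < N; j++)        /* column-major: misses far more often */
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        clock_t t2 = clock();
        printf("row-major %ld ticks, column-major %ld ticks (sum=%ld)\n",
               (long)(t1 - t0), (long)(t2 - t1), sum);
        return 0;
    }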
●●●●●●Physical vs. Virtual memory
Physical memory is memory in the form of hardware, called RAM. Virtual memory is, as the word suggests, not
real memory: it uses space on the hard disk to create additional memory space.
One caveat with virtual memory is that if you allocate a fixed number of bytes and the hard disk later runs low
on space, your system will report errors. Letting the system allocate the virtual memory it needs avoids this
problem.
Virtual memory is a memory management technique, used by multitasking computer operating systems
wherein non-contiguous memory is presented to software as contiguous memory. This contiguous memory is
referred to as the virtual address space. It is used commonly and provides great benefit for users at a very low
cost. The computer hardware that is the primary memory system of a computer also called as RAM is the
physical memory.
●●●●●●Paging vs. Swapping
The difference between swapping and paging is that paging moves individual pages of memory while swapping moves the
address space of a complete process. Paging is normal operation: a page fault occurs, a new page of a program is
paged in, and freed pages are paged out. Swapping happens at the process level, and only when the resource load is
heavy and system performance is degrading: the entire process is written out to disk. The lowest priority process is
written out to swap space; sleeping processes are the best candidates for swap-out. When a swapped process becomes
active again, a new candidate is swapped out, if need be, and the process is swapped in. If a system is swapping a lot, it
is worth looking deeper into remedies such as adding memory.
●●●●●●Scheduling or CPU scheduling
Scheduling refers to a set of policies & mechanisms built into the operating system that govern the order in
which the work to be done by a computer system is carried out, i.e. that select the next job to be admitted into
the system & the next process to run. The primary objective of scheduling is to optimize system performance in
accordance with the criteria deemed most important by the system designers. The main objectives of
scheduling are to increase CPU utilization & throughput. Throughput is the amount of work accomplished
in a given time interval. CPU scheduling is the basis of operating systems which support multiprogramming
concepts. So the objectives of scheduling are:
          (i)      Scheduling should attempt to service the largest possible number of processes per unit time.
          (ii)     Scheduling should minimize wasted resources & overhead.
          (iii)    The scheduling mechanism should keep the resources of the system busy. Processes that will
                   use under-utilized resources should be favored.
          (iv)     It should be fair: all processes are treated the same, & no process can suffer indefinite
                   postponement.
          (v)      In environments in which processes are given priorities, the scheduling should favor the higher
                   priority processes.
●●●●●●●Differences between network operating system & distributed operating system
A distributed operating system manages a collection of loosely coupled systems interconnected by a communication
network. From the point of view of a specific processor in a distributed system, the rest of the processors &
their respective resources are remote, whereas its own resources are local.
A network operating system, by contrast, provides an environment in which users who are aware of the multiplicity of
machines can access remote resources by either logging into the appropriate remote machine or transferring
data from the remote machine to their own machine.
●●●●●●Thread
Threads represent a software approach to improving the performance of operating systems by reducing the
overhead of process switching. A thread is a lightweight process with a reduced state; state reduction is
achieved by letting related threads share the resources of their enclosing process, so that a group of related
threads is equivalent to a classical process. Each thread belongs to exactly one process. Processes are static, &
only threads can be scheduled for execution. Threads can communicate efficiently by means of commonly
accessible shared memory within the enclosing process. Threads have been successfully used in network services.
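As a brief illustration (my example, not from the original notes), POSIX threads within one process communicate through ordinary shared variables, where separate processes would need an interprocess mechanism:

    #include <pthread.h>
    #include <stdio.h>

    static int counter = 0;                        /* shared within the process */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&m);                /* shared memory still needs mutual exclusion */
            counter++;
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("counter = %d\n", counter);         /* prints 400000 */
        return 0;
    }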
●●●●●●Authentication
The primary goal of authentication is to allow access to legitimate system user & to deny access to
unauthorized parties. The 2 primary measures of authentication effectiveness are:-
           (i)        The false acceptance ratio, i.e. the percentage of illegitimate users erroneously admitted.
           (ii)       The false rejection ratio, i.e. the percentage of legitimate users who are denied access due to
                      failure of the authentication mechanism.
Obviously the objective is to minimize both the false acceptance & false rejection ratios. User authentication
is usually based on:
           1> Possession of a secret.
           2> Possession of an artifact.
           3> Unique physiological or behavioral characteristics of the user.
The 2 types of authentication are 1) Mutual authentication & 2) Extensible Authentication. Mutual
authentication or two-way authentication refers to two parties authenticating each other suitably. In technology terms, it
refers to a client or user authenticating themselves to a server and that server authenticating itself to the user in such a way
that both parties are assured of the others' identity. When describing online authentication processes, mutual authentication is
often referred to as website-to-user authentication, or site-to-user authentication. Mutual SSL provides the same things as SSL,
with the addition of authentication and non-repudiation of the client authentication, using digital signatures. However, due to
issues with complexity, cost, logistics, and effectiveness, most web applications are designed so they do not require client-side
certificates. This creates an opening for a man-in-the-middle attack, in particular for online banking. Extensible Authentication
Protocol, or EAP, is an authentication framework frequently used in wireless networks and Point-to-Point connections. EAP is
an authentication framework providing for the transport and usage of keying material and parameters generated by EAP
methods. There are many methods defined by RFCs and a number of vendor specific methods and new proposals exist. EAP
is not a wire protocol; instead it only defines message formats. Each protocol that uses EAP defines a way to encapsulate EAP
messages within that protocol's messages.
●●●●●●Swapping
Removing suspended or preempted processes from memory & subsequently bringing them back is called swapping.
Swapping has traditionally been used to implement multiprogramming in systems with restrictive memory
capacity or with little hardware support, improving processor utilization in partitioned memory environments
by increasing the ratio of ready to resident processes. Swapping is usually employed in memory management
systems with contiguous allocation, such as fixed & dynamically partitioned memory & segmentation. The
swapper is an operating system process whose major responsibilities include:
           a) Selection of processes to swap out.
           b) Selection of processes to swap in.
           c) Allocation & management of swap space.
The swapper usually selects a victim among the suspended processes that occupy partitions large enough to
satisfy the needs of the incoming process. Among the qualifying processes the more likely candidates for
swapping are the ones with low priority & those waiting for slow events, and thus having a higher probability of
being suspended for a comparatively long time. Another important consideration is the time spent in memory
by the potential victim & whether it ran while in memory; otherwise there is a danger of thrashing caused by
repeatedly removing processes from memory almost immediately after loading them into memory.
So the benefits of using swapping are:
     1) Allows a higher degree of multiprogramming.
     2) Better memory utilization.
     3) Less wastage of CPU time on compaction.
     4) Can easily be applied to priority based scheduling algorithms to improve performance.
●●●●●●Thrashing
If the number of frames allocated to a low priority process falls below the minimum number required by
the computer architecture, we must suspend that process's execution. We should then page out its
remaining pages, freeing all its allocated frames. This provision introduces a swap-in, swap-out level of
intermediate CPU scheduling. Consider a process that does not have enough frames. Although it is technically
possible to reduce the number of allocated frames to the minimum, there is some larger number of pages in
active use. If the process does not have this number of frames, it will quickly page fault. It must then
replace some page, but since all its pages are in active use, it must replace a page that will be needed again
right away. Consequently it quickly faults again & again. This high paging activity is called thrashing.
●●●●●●●Four necessary condition to occur Deadlock Condition
a) Mutual Exclusion: - the resources involved are non-sharable. At least one resource must be held in a
     non-sharable mode, i.e. only one process at a time claims exclusive control of the resource. If another
     process requests that resource, the requesting process must be delayed until the resource has been
     released.
b) Hold & Wait condition: - a process already holds at least one resource & is waiting for additional
     resources that are currently being held by other processes.
c) No preemption condition: - resources already allocated to a process can't be preempted. Resources
     can't be removed forcibly; they are released only voluntarily by the process holding them.
d) Circular wait condition: - the processes in the system form a circular list or chain where each process
     in the list is waiting for a resource held by the next process in the list.
We emphasize that all four conditions must hold for a deadlock to occur. The circular wait condition implies
the hold & wait condition, so the four conditions are not completely independent.
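To make the conditions concrete, the following hedged C sketch (thread bodies & lock names are my illustration, not from the text) can deadlock: each thread holds one mutex while waiting for the other (hold & wait), a held mutex cannot be preempted, & the opposite acquisition orders create the circular wait:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&A);          /* holds A ...            */
        pthread_mutex_lock(&B);          /* ... waits for B        */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *t2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&B);          /* holds B ...            */
        pthread_mutex_lock(&A);          /* ... waits for A: cycle */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2, NULL);
        pthread_join(x, NULL);           /* may never return if the cycle forms */
        pthread_join(y, NULL);
        puts("no deadlock this run");    /* taking both locks in one agreed order breaks the cycle */
        return 0;
    }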
●●●●●●●Lattice model
The lattice of security levels is widely used to describe the structure of military security levels. A lattice is
a finite set together with a partial ordering on its elements such that for every pair of elements there is a
least upper bound & a greatest lower bound. The simple linear ordering of sensitivity levels has already
been defined. Compartment sets can be partially ordered by the subset relation: one compartment set is
greater than or equal to another if the latter is a subset of the former. Classifications, which include a
sensitivity level & a compartment set, can then be partially ordered as follows:
For any sensitivity levels a, b & compartment sets c, d, the relation (a, c) ≥ (b, d) holds if & only if a ≥ b &
c ⊇ d. That each pair of classifications has a greatest lower bound & a least upper bound follows from these
definitions & the facts that the classification (Unclassified, no compartments) is a global lower bound &
that we can postulate (assume) a classification (top secret, all compartments) as a global upper bound.
Because the lattice model matches the military classification structure so closely, it is widely used.
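In symbols, the ordering & the induced bounds can be restated as follows (a standard formalization with the join & meet written out; the bound formulas are my addition, not verbatim from the original):

    (a, c) \ge (b, d) \iff a \ge b \,\wedge\, c \supseteq d
    (a, c) \vee (b, d) = (\max(a, b),\; c \cup d)    % least upper bound
    (a, c) \wedge (b, d) = (\min(a, b),\; c \cap d)  % greatest lower bound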
●●●●●●●Three dimensional Hypercube systems
Various cube type multiprocessor topologies address the scalability & cost issues by providing
interconnections whose complexity grows logarithmically with the increasing number of nodes.
[Figure: three-dimensional hypercube with 8 nodes]
A three-dimensional hypercube has 2^n nodes, i.e. 2^3 = 8 nodes. Nodes are arranged in a three-dimensional
cube, with each node connected to 3 other nodes. Each node is assigned a unique number or address between
0 & 2^n - 1 = 7, i.e. 000, 001, 010, 011, 100, 101, 110, 111. Adjacent nodes differ in exactly 1 bit (e.g. 000 &
001), & a node & its bitwise complement (e.g. 011 & 100) are separated by the maximum internode distance of
n = 3 hops.
Hypercubes provide a good basis for scalable systems, since their complexity grows logarithmically with the
number of nodes. A hypercube provides bidirectional communication between two processors. It is normally
used in loosely coupled systems, because a transfer of data between two processors may go through several
intermediate processors. To increase the I/O bandwidth, I/O devices can be attached to every node.
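The addressing scheme makes routing arithmetic simple: a node's n neighbors are obtained by flipping one address bit, & the routing distance between two nodes is the number of bits in which their addresses differ. A small sketch (function names are mine):

    #include <stdio.h>

    #define DIM 3                        /* n = 3, so 2^n = 8 nodes */

    /* hops between two nodes = number of differing address bits */
    static int distance(unsigned a, unsigned b) {
        unsigned x = a ^ b;
        int d = 0;
        while (x) { d += x & 1u; x >>= 1; }
        return d;
    }

    int main(void) {
        unsigned node = 5;               /* address 101 */
        for (int bit = 0; bit < DIM; bit++)
            printf("neighbor of %u across dimension %d: %u\n",
                   node, bit, node ^ (1u << bit));
        printf("distance 011 -> 100: %d\n", distance(3, 4));   /* 3 hops */
        return 0;
    }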
●●●●●●●Access Matrix model
In computer science, an Access Control Matrix or Access Matrix is an abstract, formal security
model of protection state in computer systems that characterize the rights of each subject with respect to
every object in the system. The access matrix model is the policy for user authentication, and has several
implementations such as access control lists (ACLs) and capabilities. It is used to describe which users
have access to what objects.
The access matrix model consists of four major parts:
•   A list of objects
•   A list of subjects
•   A function T which returns an object's type
•   The matrix itself, with the objects making the columns and the subjects making the rows
In the cells where a subject and object meet lie the rights the subject has on that object. Some example
access rights are read, write, execute, list and delete.
An access matrix has several standard operations associated with it:
•   Entry of a right into a specified cell
•   Removal of a right from a specified cell
•   Creation of a subject
•   Creation of an object
•   Removal of a subject
•   Removal of an object
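As an illustrative sketch (the bit-flag encoding of rights & all names are my assumptions, not part of the model's definition), the matrix & its standard operations map naturally onto a 2-D array:

    #include <stdio.h>

    enum { READ = 1, WRITE = 2, EXECUTE = 4 };   /* rights as bit flags */

    #define NSUBJ 2                      /* rows: subjects   */
    #define NOBJ  3                      /* columns: objects */

    static unsigned M[NSUBJ][NOBJ];      /* the access matrix itself */

    /* entry of a right into a specified cell */
    static void grant(int s, int o, unsigned r)   { M[s][o] |= r;  }
    /* removal of a right from a specified cell */
    static void revoke(int s, int o, unsigned r)  { M[s][o] &= ~r; }
    /* access check: does subject s hold right r on object o? */
    static int  allowed(int s, int o, unsigned r) { return (M[s][o] & r) == r; }

    int main(void) {
        grant(0, 1, READ | WRITE);
        revoke(0, 1, WRITE);
        printf("subject 0 may read object 1: %d\n",  allowed(0, 1, READ));   /* 1 */
        printf("subject 0 may write object 1: %d\n", allowed(0, 1, WRITE));  /* 0 */
        return 0;
    }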
    ●●●●●●Differences between security policy & security model
    The security policy outlines several high-level points: how the data is accessed, the amount of security required
    & what the steps are when these requirements are not met. The security model is more in-depth &
    supports the security policy. The security model is an important concept in the design of any security system,
    and different systems have different security policies applying to them.

    ●●●●●●Take grant model
    The take-grant protection model is a formal model used in the field of computer security to establish or
    disprove the safety of a given computer system that follows specific rules. It shows that for specific
    systems the question of safety is decidable in linear time, which is in general un-decidable.
    The model represents a system as a directed graph, where vertices are either subjects or objects. The edges
    between them are labeled and the label indicates the rights that the source of the edge has over the
    destination. Two rights occur in every instance of the model: take and grant. They play a special role in
    the graph rewriting rules describing admissible changes of the graph.
    There are a total of four such rules:
•   take rule allows a subject to take rights of another object (add an edge originating at the subject)
•   grant rule allows a subject to grant its own rights to another object (add an edge terminating at the subject)
•   create rule allows a subject to create new objects (add a vertex and an edge from the subject to the new
    vertex)
•   remove rule allows a subject to remove rights it has over another object (remove an edge originating at
    the subject)
    Preconditions for take(o, p, r), executed by a subject s:
•   Subject s has the right take for o.
•   Object o has the right r on p.
    Preconditions for grant(o, p, r), executed by a subject s:
•   Subject s has the right grant for o.
•   s has the right r on p.
    Using the rules of the take-grant protection model, one can reproduce the states into which a system can
    change with respect to the distribution of rights, and therefore show whether rights can leak with respect to
    a given safety model.

    ●●●●●●●Bakery algorithm
    In computer science, it is common for multiple threads to simultaneously access the same resources. Data corruption can
    occur if two or more threads try to write into the same memory location, or if one thread reads a memory location before
    another has finished writing into it. Lamport's bakery algorithm is one of many mutual exclusion algorithms designed to
    prevent concurrent threads from entering critical sections of code simultaneously, eliminating the risk of data corruption. Lamport
    envisioned a bakery with a numbering machine at its entrance so each customer is given a unique number. Numbers
    increase by one as customers enter the store. A global counter displays the number of the customer that is currently being
    served. All other customers must wait in a queue until the baker finishes serving the current customer and the next
    number is displayed. When the customer is done shopping and has disposed of his or her number, the clerk increments
    the number, allowing the next customer to be served. That customer must draw another number from the numbering
    machine in order to shop again.
    According to the analogy, the "customers" are threads, identified by the letter i, and obtained from a global variable. Due to
    the limitations of computer architecture, some parts of the Lamport's analogy need slight modification. It is possible that
    more than one thread will get the same number when they request it; this cannot be avoided. Therefore, it is assumed that
    the thread identifier i is also a priority identifier. A lower value of i means a higher priority and threads with higher priority
    will enter the critical section first.
    The critical section is that part of code that requires exclusive access to resources and may only be executed by one
    thread at a time. In the bakery analogy, it is when the customer trades with the baker and others must wait.
    When a thread wants to enter the critical section, it has to check whether it is its turn to do so. It should check the numbers
    of every other thread to make sure that it has the smallest one. In case another thread has the same number, the thread
    with the smallest i will enter the critical section first.

In pseudocode this comparison is written: (a, b) < (c, d), which is equivalent to (a < c) or ((a == c) and (b < d)).

    Once the thread ends its critical job, it gets rid of its number and enters the non-critical section. The non-critical section is
    the part of code that doesn't need exclusive access. It represents some thread-specific computation that doesn't interfere
    with other threads' resources and execution.
This part is analogous to actions that occur after shopping, such as putting change back into the wallet.
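A compact sketch of the algorithm (the thread count, names & use of volatile are my illustrative choices; production code would need C11 atomics or fences to get the memory ordering the algorithm assumes):

    #include <stdbool.h>

    #define NTHREADS 4

    volatile bool entering[NTHREADS];     /* thread is choosing a number */
    volatile int  number[NTHREADS];       /* 0 = not interested          */

    /* The (a, b) < (c, d) comparison defined above. */
    static bool lex_less(int a, int b, int c, int d) {
        return a < c || (a == c && b < d);
    }

    void lock(int i) {
        entering[i] = true;
        int max = 0;                      /* draw a ticket larger than any seen */
        for (int j = 0; j < NTHREADS; j++)
            if (number[j] > max) max = number[j];
        number[i] = 1 + max;
        entering[i] = false;
        for (int j = 0; j < NTHREADS; j++) {
            while (entering[j])           /* wait while j is drawing its ticket */
                ;
            while (number[j] != 0 &&      /* wait while j has precedence */
                   lex_less(number[j], j, number[i], i))
                ;
        }
    }

    void unlock(int i) {
        number[i] = 0;                    /* dispose of the number */
    }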
   ●●●●●●Mutual Exclusion
   Mutual exclusion algorithms are used in concurrent programming to avoid the simultaneous use of a common resource,
   such as a global variable, by pieces of computer code called critical sections. A critical section is a piece of code in which a
   process or thread accesses a common resource. The critical section by itself is not a mechanism or algorithm for mutual
   exclusion. A program, process, or thread can have the critical section in it without any mechanism or algorithm which
   implements mutual exclusion.
   Examples of such resources are fine-grained flags, counters or queues, used to communicate between code that runs
   concurrently, such as an application and its interrupt handlers. The synchronization of access to those resources is an
   acute problem because a thread can be stopped or started at any time.
   To illustrate: suppose a section of code is altering a piece of data over several program steps, when another thread,
   perhaps triggered by some unpredictable event, starts executing. If this second thread reads from the same piece of data,
   the data, which is in the process of being overwritten, is in an inconsistent and unpredictable state. If the second thread
   tries overwriting that data, the ensuing state will probably be unrecoverable. These shared data being accessed by critical
   sections of code must, therefore, be protected, so that other processes which read from or write to the chunk of data are
   excluded from running.
   A mutex is also a common name for a program object that negotiates mutual exclusion among threads, also called a lock.
   On a uniprocessor system a common way to achieve mutual exclusion inside kernels is to disable interrupts for the
   smallest possible number of instructions that will prevent corruption of the shared data structure, the critical section. This
   prevents interrupt code from running in the critical section that also protects against interrupt-based process-change.
   In a computer in which several processors share memory, an indivisible test-and-set of a flag could be used in a tight loop
   to wait until the other processor clears the flag. The test-and-set performs both operations without releasing the memory
   bus to another processor.
   ●●●●●●Test & Set instruction
   In computer science, the test-and-set instruction is an instruction used to write to a memory location and return its old
   value as a single atomic (i.e. non-interruptible) operation. If multiple processes may access the same memory and if a
   process is currently performing a test-and-set, no other process may begin another test-and-set until the first process is
   done. CPUs may use test-and-set instructions offered by other electronic components, such as Dual-Port RAM; CPUs may
   also offer a test-and-set instruction themselves. A lock can be built using an atomic test-and-set instruction as follows:

function Lock(boolean *lock)
{
    while (test_and_set(lock) == 1)
        ;   /* spin: retry until the old value is 0, i.e. the lock was free */
}

  The test-and-set operation can solve the wait-free consensus problem for no more than two concurrent processes.
  However, more than two decades before Herlihy's proof, IBM had replaced test-and-set by compare-and-swap, which is a
  more general solution to this problem. Ultimately, IBM would release a processor family with 12 processors,
  whereas Amdahl would release a processor family with the architectural maximum of 16 processors. The test and set
  instruction when used with Boolean values behaves like the following function. Crucially the entire function is executed
  atomically: no process can interrupt the function mid-execution and hence see a state that only exists during the execution
  of the function. This code only serves to help explain the behavior of test-and-set; atomicity requires explicit hardware
  support and hence can't be implemented as a simple function.
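  The Boolean function referred to above did not survive extraction; its standard form is shown below (as the text notes, this only models the semantics — the hardware executes it indivisibly):

    #include <stdbool.h>

    /* Behavioral model of test-and-set: return the old value and set the
     * location to true in one step. Real atomicity needs hardware support;
     * as plain C this only explains the semantics. */
    bool test_and_set(bool *target) {
        bool old = *target;
        *target = true;
        return old;
    }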
  ●●●●●●●Synchronization mechanism
  Inter-process synchronization & communication are necessary for designing concurrent software that is
  correct & reliable. Parallel program execution & read/write sharing of data place heavy demands on the
  synchronization mechanism. In message-based systems, synchronization & communication are handled via
  messages: once the necessary data are transmitted to individual processors for processing, there is usually
  little need for processes to synchronize while operating on the data. Shared-memory operation results in
  speed improvements for many applications, but it also intensifies the need for synchronization. Properly
  designed uniprocessor instructions such as test & set & compare & swap, implemented using the indivisible
  read-modify-write cycle, can be used as a foundation for inter-process synchronization in multiprocessor
  systems.
  ●●●●●Conditional Critical Region
The critical region construct can be effectively used to solve the critical section problem. It cannot,
however, be used to solve some general synchronization problems. For this reason the conditional critical
region was introduced. The shared variable is declared in the same way, the region construct is used again
for controlling access, & the only new keyword is await. It is illustrated in the following sequence of code:

     var v: shared T;
     begin
           …….
           region v do
           begin
                 ……..
                 await(condition);
                 …….
           end;
           ……..
     end;

Implementation of this construct allows a process waiting on a condition within a critical region to be
suspended in a special queue, pending satisfaction of the related condition. Unlike a semaphore, a process
waiting on a condition does not prevent others from using the resource, & when the condition is eventually
satisfied the suspended process is awakened. Since it is cumbersome to keep track of dynamic changes of
the numerous possible individual conditions, the common implementation of the conditional critical region
assumes that each completed process may have modified the system state in a way that has caused some
of the waited-on conditions to become satisfied. Whenever a process leaves the critical section, all
conditions that have suspended earlier processes are evaluated &, if warranted, one of those processes is
awakened. When that process leaves, the next waiting process whose condition is satisfied is activated, &
so on until no more suspended processes are left or none of them has its condition satisfied.
●●●●●Explain the 5 design goals of Distributed shared memory
In order to design a good distributed system there are many design goals; 5 of them are explained below.
                 1) Concurrency: - A server must handle client requests at the same time. Distributed systems
                     are naturally concurrent: there are multiple workstations running programs
                     independently & at the same time. Concurrency is important because any distributed service
                     that is not concurrent would become a bottleneck that would serialize the actions of its
                     clients & thus reduce the natural concurrency of the system.
                 2) Scalability: - The capability of a system to adapt to increased load is its scalability. Systems
                     have bounded resources & can become completely saturated under increased load. A scalable
                     system reacts more gracefully to increased load than does a non-scalable one; its resources
                     reach a saturated state later. Even a perfect design can't accommodate an ever-growing load,
                     but adding new resources might solve the problem. A scalable system should have the potential
                     to grow without problems. In short a scalable design should withstand high service load,
                     accommodate growth of the user community & enable simple integration of added resources.
                 3) Openness: - Two types of openness are important: non-proprietary protocols & extensibility.
                     Public protocols are important because they make it possible for systems from many software
                     manufacturers to talk to each other. A system is extensible if it permits the customization needed
                     to meet unanticipated requirements. Extensibility is important because it aids scalability
                     & allows a system to survive over time as the demands on it & the ways it is used change.
                 4) Fault Tolerance: - Many clients are affected by the failure of distributed services, unlike a
                     non-distributed system in which a failure affects only a single node. A distributed service depends on many
                     components like networks, switches etc., all of which must work. Furthermore a client will often depend on
                     multiple distributed services in order to function properly. A client that depends on N components, each with
                     failure probability p, fails with probability 1 - (1 - p)^N, which is roughly N*p when p is small; for example,
                     with N = 10 & p = 0.01, the failure probability is 1 - 0.99^10 ≈ 0.096 ≈ 0.1.
                 5) Transparency: - The final goal is transparency. We often use the term single system image to
                     refer to this goal of making the distributed system look to programs like a tightly coupled
                     system. This is really what a distributed operating system is all about. There are 8 types of
                     transparency:
                           a) Access transparency enables local & remote resources to be accessed using identical
                                 operations.
                           b) Location transparency enables resources to be accessed without knowledge of their
                                 location.
                           c) Concurrency transparency enables several processes to operate concurrently using
                                 shared resources without interference between them.
                           d) Replication transparency enables multiple instances of resources to be used to increase
                                 reliability & performance without knowledge of the replicas by users.
                           e) Failure transparency enables the concealment of faults, allowing users & application
                                 programs to complete their tasks.
                           f) Mobility transparency allows the movement of resources & clients within a system
                                 without affecting the operation of users or programs.
                           g) Performance transparency allows the system to be reconfigured to improve
                                 performance.
                           h) Scaling transparency allows the system & applications to expand in scale without
                                 change to the system structure or the application algorithms.
●●●●●Working Set
Peter Denning (1968) defines “the working set of information W(t,τ) of a process at time t to be the collection of information
referenced by the process during the process time interval (t − τ,t)”. Typically the units of information in question are considered
to be memory pages. This is suggested to be an approximation of the set of pages that the process will access in the future
(say during the next τ time units), and more specifically is suggested to be an indication of what pages ought to be kept in main
memory to allow most progress to be made in the execution of that process.
The effect of choice of what pages to be kept in main memory (as distinct from being paged out to auxiliary storage) is
important: if too many pages of a process are kept in main memory, then fewer other processes can be ready at any one time.
If too few pages of a process are kept in main memory, then the page fault frequency is greatly increased and the number of
active (non-suspended) processes currently executing in the system approaches zero.
The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (often
approximated by the most recently used pages) can be in RAM. The model is an all or nothing model, meaning if the pages it
needs to use increases, and there is no room in RAM, the process is swapped out of memory to free the memory for other
processes to use.
Often a heavily loaded computer has so many processes queued up that, if all the processes were allowed to run for
one scheduling time slice, they would refer to more pages than there is RAM, causing the computer to "thrash".
By swapping some processes from memory, the result is that processes -- even processes that were temporarily removed from
memory -- finish much sooner than they would if the computer attempted to run them all at once. The processes also finish
much sooner than they would if the computer only ran one process at a time to completion, since it allows other processes to
run and make progress during times that one process is waiting on the hard drive or some other global resource.
In other words, the working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible.
Thus it optimizes CPU utilization and throughput.
The main hurdle in implementing the working set model is keeping track of the working set. The working set window is a moving
window. At each memory reference a new reference appears at one end and the oldest reference drops off the other end. A
page is in the working set if it is referenced in the working set window.
To avoid the overhead of keeping a list of the last k referenced pages, the working set is often implemented by keeping track of
the time t of the last reference, and considering the working set to be all pages referenced within a certain period of time.
The working set isn't a page replacement algorithm, but page-replacement algorithms can be designed to only remove pages
that aren't in the working set for a particular process. One example is a modified version of the clock algorithm called WSClock.
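A toy sketch of this bookkeeping (virtual time, per-page last-reference stamps & a window of TAU references; all names & sizes are my illustrative choices):

    #include <stdio.h>
    #include <stdbool.h>

    #define NPAGES 8
    #define TAU    4                    /* working-set window, in references */

    static long last_ref[NPAGES];       /* virtual time of each page's last use */
    static long vtime = 0;              /* advances by 1 per memory reference   */

    static void reference(int page) {
        last_ref[page] = ++vtime;
    }

    /* A page is in the working set if referenced within the last TAU references. */
    static bool in_working_set(int page) {
        return last_ref[page] > 0 && vtime - last_ref[page] < TAU;
    }

    int main(void) {
        int trace[] = {1, 2, 1, 3, 4, 4, 5};
        for (int i = 0; i < (int)(sizeof trace / sizeof trace[0]); i++)
            reference(trace[i]);
        for (int p = 0; p < NPAGES; p++)
            if (in_working_set(p)) printf("page %d in W(t, tau)\n", p);  /* 3, 4, 5 */
        return 0;
    }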
●●●●●●●Bell & LaPadula Model
The Bell-LaPadula Model (abbreviated BLP) is a state machine model used for enforcing access control in government and
military applications. The model is a formal state transition model of computer security policy that describes a set of access
control rules which use security labels on objects and clearances for subjects. Security labels range from the most sensitive
(e.g."Top Secret"), down to the least sensitive (e.g., "Unclassified" or "Public"). The Bell-LaPadula model is an example of a
model where there is no clear distinction of protection and security.
           Features of the Bell-LaPadula Model:-
           The Bell-LaPadula model focuses on data confidentiality and controlled access to classified information, in contrast to
           the Biba Integrity Model which describes rules for the protection of data integrity. In this formal model, the entities in
           an information system are divided into subjects and objects. The notion of a "secure state" is defined, and it is proven
           that each state transition preserves security by moving from secure state to secure state, thereby inductively proving
           that the system satisfies the security objectives of the model. The Bell-LaPadula model is built on the concept of
           a state machine with a set of allowable states in a computer network system. The transition from one state to another
           state is defined by transition functions.
           A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance
           with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is
           compared to the classification of the object (more precisely, to the combination of classification and set of
           compartments, making up the security level) to determine if the subject is authorized for the specific access mode.
           The clearance/classification scheme is expressed in terms of a lattice. The model defines two mandatory access
           control (MAC) rules and one discretionary access control (DAC) rule with three security properties:
       1. The Simple Security Property - a subject at a given security level may not read an object at a higher security level
            (no read-up).
       2. The ★-property (read "star"-property) - a subject at a given security level must not write to any object at a lower
            security level (no write-down). The ★-property is also known as the Confinement property.
       3. The Discretionary Security Property - use of an access matrix to specify the discretionary access control.
          The transfer of information from a high-sensitivity document to a lower-sensitivity document may happen in the Bell-
          LaPadula model via the concept of trusted subjects. Trusted Subjects are not restricted by the ★-property. Untrusted
          subjects are. Trusted Subjects must be shown to be trustworthy with regard to the security policy. This security model
          is directed toward access control and is characterized by the phrase: "no read up, no write down." Compare
          the Biba model, the Clark-Wilson model and the Chinese wall model.
          With Bell-LaPadula, users can create content only at or above their own security level (i.e. secret researchers can
          create secret or top-secret files but may not create public files; no write-down). Conversely, users can view content
          only at or below their own security level (i.e. secret researchers can view public or secret files, but may not view top-
          secret files; no read-up).
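           A hedged sketch of the two mandatory rules as code (labels reduced to an integer level plus a compartment bitmask, following the lattice ordering described earlier; this is my illustration, not the model authors' implementation):

    #include <stdbool.h>
    #include <stdio.h>

    /* A label = sensitivity level + compartment set (bitmask). */
    typedef struct { int level; unsigned compartments; } Label;

    /* a dominates b iff a.level >= b.level and a's compartments contain b's */
    static bool dominates(Label a, Label b) {
        return a.level >= b.level &&
               (a.compartments & b.compartments) == b.compartments;
    }

    /* Simple Security Property: no read-up. */
    static bool may_read(Label subject, Label object)  { return dominates(subject, object); }

    /* Star Property: no write-down. */
    static bool may_write(Label subject, Label object) { return dominates(object, subject); }

    int main(void) {
        Label secret_user = {2, 0x1}, public_file = {0, 0x0}, ts_file = {3, 0x3};
        printf("read public: %d, read top-secret: %d\n",
               may_read(secret_user, public_file), may_read(secret_user, ts_file));    /* 1, 0 */
        printf("write public: %d, write top-secret: %d\n",
               may_write(secret_user, public_file), may_write(secret_user, ts_file));  /* 0, 1 */
        return 0;
    }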
          The Bell-LaPadula model explicitly defined its scope. It did not treat the following extensively:
        •  Covert channels. Passing information via pre-arranged actions was described briefly.
        •  Networks of systems. Later modeling work did address this topic.
        •  Policies outside multilevel security. Work in the early 1990s showed that MLS is one version of Boolean
           policies, as are all other published policies.
          Strong * property:-
          The Strong ★ Property is an alternative to the ★-Property, in which subjects may write to objects with only a
          matching security level. Thus, the write-up operation permitted in the usual ★-Property is not present, only a write-to-
          same operation. The Strong ★ Property is usually discussed in the context of multilevel database management
          systems and is motivated by integrity concerns. This Strong ★ Property was anticipated in the Biba model where it
          was shown that strong integrity in combination with the Bell-LaPadula model resulted in reading and writing at a
          single level.
          Tranquility principle
          The tranquility principle of the Bell-LaPadula model states that the classification of a subject or object does not
          change while it is being referenced. There are two forms to the tranquility principle: the "principle of strong tranquility"
          states that security levels do not change during the normal operation of the system. The "principle of weak tranquility"
          states that security levels may never change in such a way as to violate a defined security policy. Weak tranquility is
          desirable as it allows systems to observe the principle of least privilege. That is, processes start with a low clearance
level regardless of their owners' clearance, and progressively accumulate higher clearance levels as actions require
          it.
          ●●●●●●Briefly describe the multiprocessor operating system
A multiprocessor operating system manages all the available resources & scheduling functionality to form an abstraction that facilitates program execution & interaction with users. The processor is one of the most important & basic types of resources that need to be managed. For effective use of multiprocessors, processor scheduling is necessary. Processor scheduling undertakes the following tasks:
1> Allocation of processors among applications in a manner consistent with system design objectives. This affects system throughput. Throughput can be improved by co-scheduling several applications together, thus making fewer processors available to each.
2> Ensuring efficient use of the processors allocated to an application. This primarily affects the speedup of the system.
The second basic type of resource that needs to be managed is memory. In a multiprocessor system, memory management is highly dependent on the architecture & interconnection scheme.
a) In message-passing multiprocessor systems, the operating system should still give applications access to shared data; shared memory may be simulated by means of a message-passing mechanism.
b) In shared-memory systems, the operating system should provide a flexible memory model that facilitates safe & efficient access to shared data structures & their synchronization.
A multiprocessor operating system should provide a h/w-independent, unified model of shared memory to facilitate porting of applications between different multiprocessor environments.
●●●●●●Fetch & Add instruction
The fetch & add instruction is a multiple-operation memory access instruction that atomically adds a constant to a memory location & returns the previous contents of the memory location. The instruction is defined as follows:

    function fetch_and_add(var m: integer; c: integer): integer;
    var temp: integer;
    begin
        temp := m;
        m := m + c;
        return temp
    end;

The fetch & add instruction is powerful: it allows the implementation of 'p' & 'v' operations on a general semaphore s in the following manner:

    p(s): while (fetch_and_add(s, -1) < 0) do
          begin
              fetch_and_add(s, 1);
              while (s < 0) do (* nothing: busy wait *);
          end;

    v(s): fetch_and_add(s, 1);
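For comparison, a minimal C11 sketch of a semaphore built on the standard atomic_fetch_add primitive; here the count holds the number of available units, so the test is on the previous value being <= 0 (a busy-waiting illustration, not a production semaphore):

#include <stdatomic.h>

typedef struct { atomic_int count; } faa_sem_t;

/* P: atomically decrement; if the old value was <= 0, undo and spin. */
void sem_p(faa_sem_t *s) {
    while (atomic_fetch_add(&s->count, -1) <= 0) {
        atomic_fetch_add(&s->count, 1);           /* back off */
        while (atomic_load(&s->count) <= 0) ;     /* busy wait for a V */
    }
}

/* V: atomically increment the semaphore. */
void sem_v(faa_sem_t *s) {
    atomic_fetch_add(&s->count, 1);
}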
●●●●●●Briefly describe the structure of UNIX operating system
UNIX is a layered operating system. The innermost layer is the h/w, which provides the services for the OS. The following are the components of the UNIX OS.
1) The Kernel:- the operating system, referred to in UNIX as the kernel, interacts directly with the h/w & provides services to the user programs. User programs don't need to know how the kernel provides these services; they request them through a standard set of system calls (a minimal sketch of such calls follows this list). Such services include accessing a file: open, read, write, link or execute a file; starting or updating accounting records; changing ownership of a file or directory; changing to a new directory; creating, suspending or killing a process; enabling access to h/w devices; & setting limits on system resources.
2) The shell:- the shell is often called a command-line interpreter, since it presents a single prompt to the user. The user types a command; the shell invokes that command, & then presents the prompt again when the command has finished. This is done on a line-by-line basis, hence the term "command line". The shell provides a method for adapting to each user's setup requirements & storing this information for reuse. The user interacts with /bin/sh, which interprets each command typed. Internal commands are handled within the shell itself, & external commands are executed as programs like ls, grep, sort, ps etc.
3) System Utilities:- the system utilities are intended to be tools that each do a single task exceptionally well. Users can solve problems by combining these tools instead of writing a large monolithic application.
4) Application programs:- some application programs include the Emacs editor, GCC, G++, Xfig & Latex. UNIX handles service requests differently from systems in which a separate kernel task examines the requests of a process: the process itself enters kernel space. This means that rather than the process waiting outside the kernel, it enters the kernel itself. When a process invokes a system call, the h/w is switched to the kernel settings & the process executes from the kernel image.
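A minimal sketch (not from the original notes) of a user program obtaining kernel services through standard POSIX system calls; the file path is a placeholder:

#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[128];
    int fd = open("/tmp/example.txt", O_RDONLY);   /* kernel service: open a file */
    if (fd < 0)
        return 1;
    ssize_t n = read(fd, buf, sizeof buf);          /* kernel service: read from it */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);       /* kernel service: write to stdout */
    close(fd);                                      /* kernel service: release it */
    return 0;
}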
●●●●●●●Explain Resource Allocation graph for multiple Instances with an example & also explain the
recovery of Deadlock
Deadlock can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all active processes in the system, & R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process Pi has requested an instance of resource type Rj & is currently waiting for that resource. A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance of resource type Rj has been allocated to process Pi. The edge Rj → Pi is called an assignment edge & Pi → Rj is called a request edge. Pictorially we represent each process Pi as a circle & each resource type Rj as a square. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the square. A request edge points only to the square Rj, whereas an assignment edge must also designate one of the dots in the square. When process Pi requests an instance of resource type Rj, a request edge is inserted in the graph. When this request can be fulfilled, the request edge is instantaneously transformed into an assignment edge. When the process no longer needs access to the resource it releases the resource, & as a result the assignment edge is deleted. The graph from the above diagram depicts the following situation.
(a) The sets P, R & E, where
        i) P = {P1, P2, P3}
        ii) R = {R1, R2, R3, R4}
        iii) E = {P1→R1, P2→R3, R1→P2, R2→P2, R2→P1, R3→P3}
(b) Resource instances:
        i) One instance of resource type R1;
        ii) Two instances of resource type R2;
        iii) One instance of resource type R3;
        iv) Three instances of resource type R4;
(c) Process states:
        i) Process P1 is holding an instance of resource type R2 & is waiting for an instance of resource type R1.
        ii) Process P2 is holding an instance of R1 & an instance of R2, & is waiting for an instance of resource type R3.
        iii) Process P3 is holding an instance of R3.
(I) So from the definition we easily understand that if the graph contains a cycle then a deadlock may exist, but if no cycle exists then no process in the system is deadlocked.
(II) If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred: if the cycle involves only a set of resource types, each of which has a single instance, then a deadlock has occurred, & each process involved in the cycle is deadlocked. But if each resource type has several instances, then a cycle doesn't necessarily imply that a deadlock has occurred; in this case a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock. We can use a protocol to ensure that deadlocks never occur: the system can use either a deadlock prevention or a deadlock avoidance method. Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions can't hold. These methods prevent deadlocks by constraining how requests for resources can be made.
(III) If a system doesn't employ either a deadlock prevention or a deadlock avoidance algorithm, then a deadlock situation may occur. In this environment, the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred & an algorithm to recover from the deadlock.
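As an illustrative sketch (not part of the original notes), detecting a deadlock when every resource type has a single instance reduces to finding a cycle in the directed graph by depth-first search; the adjacency-matrix representation & graph size here are assumptions:

#include <stdbool.h>

#define NODES 8   /* processes and resources together, hypothetical size */

/* edge[u][v] is true if the graph has a request or assignment edge u -> v */
bool edge[NODES][NODES];

/* DFS colouring: 0 = unvisited, 1 = on current path, 2 = finished */
static int colour[NODES];

static bool dfs(int u) {
    colour[u] = 1;
    for (int v = 0; v < NODES; v++) {
        if (!edge[u][v]) continue;
        if (colour[v] == 1) return true;            /* back edge: cycle found */
        if (colour[v] == 0 && dfs(v)) return true;
    }
    colour[u] = 2;
    return false;
}

/* Returns true if the resource-allocation graph contains a cycle
   (a deadlock, when every resource type has a single instance). */
bool has_cycle(void) {
    for (int i = 0; i < NODES; i++) colour[i] = 0;
    for (int i = 0; i < NODES; i++)
        if (colour[i] == 0 && dfs(i)) return true;
    return false;
}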
●●●●●●Explain the concept of Virtual Memory. List any two methods implementation & explain any one with the help
of a diagram
Virtual memory is a technique that allows the execution of processes that may not be completely in memory. One major advantage of this scheme is that programs can be larger than physical memory. Further, virtual memory abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory. Virtual memory also allows processes to easily share files & address spaces, & it provides an efficient mechanism for process creation. Virtual memory is not easy to implement, however, & may substantially decrease performance if it is used carelessly. The two methods for implementing it are as follows:
(1) Principle of Operation: virtual memory can be implemented as an extension of paged or segmented memory management, or as a combination of both. Accordingly, address translation is performed by means of a page map table, a segment descriptor table, or both. The important characteristic is that in virtual memory systems some portions of the address space of the running process can be absent from main memory. To emphasize the distinction, the term real memory is often used to denote physical memory. The operating system dynamically allocates real memory to portions of the virtual address space. The address translation mechanism must be able to associate virtual names with physical locations. The type of missing item depends on the basic underlying memory management scheme & may be a segment or a page. The page map table contains an entry for each virtual page of the related process.
(2) Management of Virtual Memory: assuming that paging is used as the underlying memory management scheme, the implementation of virtual memory requires maintenance of one page map table (PMT) per active process. A new component of the memory manager's data structures is the file map table (FMT). An FMT contains the secondary-storage addresses of all pages; the memory manager uses it to bring missing pages into main memory. The PMT base may be kept in the control block of the related process. A pair of page-map-table base & page-map-length registers may be provided in h/w to expedite the address translation process & to reduce the size of the PMT for smaller processes.
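A hedged sketch of the paged address translation just described; the page size, table layout & fault-handler name are assumptions for illustration:

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE 4096u               /* assumed page size */
#define NUM_PAGES 1024u               /* assumed virtual pages per process */

typedef struct {
    bool     present;                 /* is the page in real memory? */
    uint32_t frame;                   /* physical frame number if present */
} pmt_entry_t;

pmt_entry_t pmt[NUM_PAGES];           /* one page map table per process */

extern uint32_t handle_page_fault(uint32_t page);  /* hypothetical: loads page, returns frame */

/* Translate a virtual address to a physical address, faulting if absent. */
uint32_t translate(uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    if (!pmt[page].present) {
        pmt[page].frame   = handle_page_fault(page);  /* bring the page in */
        pmt[page].present = true;
    }
    return pmt[page].frame * PAGE_SIZE + offset;
}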
●●●●●●Explain the concept of segmentation with the help of a diagram. Make a relative comparison between paging &
segmentation. Explain the concept of page fault with the help of an example
Segmentation is a memory management scheme that supports this user's view of memory. A logical address space is a collection of segments. Each segment has a name & a length, & addresses specify both the segment & the offset within the segment. The user therefore specifies each address by 2 quantities: (a) a segment name & (b) an offset.
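A hedged sketch of this two-quantity address translation using a per-process segment table; the table size & field names are assumptions:

#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t base;    /* where the segment starts in physical memory */
    uint32_t limit;   /* length of the segment */
} seg_entry_t;

seg_entry_t seg_table[16];   /* assumed per-process segment table */

/* Translate a (segment, offset) logical address; trap on a limit violation. */
uint32_t seg_translate(unsigned seg, uint32_t offset) {
    if (offset >= seg_table[seg].limit)
        abort();                       /* addressing error: offset out of bounds */
    return seg_table[seg].base + offset;
}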
Although most of our specific examples are based on paging, it is also possible to implement virtual memory in the form of demand segmentation. Such implementations usually inherit the benefits of sharing & protection provided by segmentation. Moreover, their placement procedures can exploit explicit awareness of the types of information contained in particular segments: for example, a working set of segments should include at least one each of the code, data & stack segments, & segment references alert the operating system to changes of locality. However, the variability of segment sizes complicates the management of both main & secondary memories. Placement strategies, i.e. methods of finding a suitable area of free memory to load an incoming segment, are quite complex in segmented systems. Paging is very convenient for the management of main & secondary memories, but it is inferior with regard to protection & sharing. The transparency of paging necessitates the use of probabilistic replacement algorithms. Both segmented & paged implementations of virtual memory have their respective advantages & disadvantages, & neither is superior to the other over all characteristics. Some computer systems combine the two approaches in order to enjoy the benefits of both.
The working set model is based on the assumption of locality. The working set model is successful, & knowledge of the working set can be useful for pre-paging, but it seems a clumsy way to control thrashing. A strategy that uses the page-fault frequency takes a more direct approach. The specific problem is how to prevent thrashing. Thrashing has a high page-fault rate; thus we want to control the page-fault rate. When it is too high, we know that the process needs more frames; similarly, if the page-fault rate is too low, then the process may have too many frames. We can establish upper & lower bounds on the desired page-fault rate. If the page-fault rate exceeds the upper limit, we allocate that process another frame; if the page-fault rate falls below the lower limit, we remove a frame from that process. Thus we can directly measure & control the page-fault rate to prevent thrashing. If the page-fault rate increases & no more free frames are available, we must select some process & suspend it. The freed frames are then distributed to processes with high page-fault rates.
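A minimal sketch of this page-fault-frequency policy; the bounds & the frame-allocation helpers are assumptions:

/* Hypothetical bounds on the acceptable page-fault rate (faults/sec). */
#define PFF_UPPER 10.0
#define PFF_LOWER  2.0

extern void allocate_frame(int pid);     /* assumed allocator helpers */
extern void release_frame(int pid);
extern int  free_frames_available(void);
extern void suspend_process(int pid);

void pff_adjust(int pid, double fault_rate) {
    if (fault_rate > PFF_UPPER) {
        if (free_frames_available())
            allocate_frame(pid);         /* process needs more frames */
        else
            suspend_process(pid);        /* swap out to free frames */
    } else if (fault_rate < PFF_LOWER) {
        release_frame(pid);              /* process has too many frames */
    }
}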
 ●●●●●●What is meant by context switch? Explain the o/h incurred due to the context switching on process & thread
The process of changing context from an executing program to an interrupt handler requires a combination of h/w & s/w. Since the interrupted program knows neither when an interrupt will assume control of the processor nor which part of the machine context will be modified by the interrupt routine, the interrupt service routine itself is charged with saving & restoring the context of the preempted activity.
In a context switch, the state of the first process must be saved somehow, so that when the scheduler gets back to the execution of the first process, it can restore this state & continue normally. The state of the process includes all the registers that the process may be using, especially the program counter, plus any other data that may be necessary. Often all of this data is kept in one data structure called the PCB. Now, in order to switch processes, the PCB for the first process must be updated & saved. Threads are normally cheaper than processes & can be scheduled for execution in a user-dependent way with less o/h. They are cheaper because they do not have a full set of resources each: whereas the PCB for a heavyweight process is large & costly to context switch, the control blocks for threads are much smaller, since each thread has only a stack & some registers to manage. It has no open-file lists or resource lists & no accounting structures to update. All of these resources are shared by all threads within the process.
 ●●●●●●What are the limitations of Banker’s Algorithm used for deadlock avoidance?
There are some problems with the Banker's algorithm, as follows:
                            a> It is time consuming to execute on every resource request.
                            b> If the claim information is not accurate, system resources may be underutilized.
                            c> Another difficulty can occur when the system is heavily loaded: so many resources are
                                 granted away that very few safe sequences remain, & as a consequence jobs will be
                                 executed sequentially. For this reason the Banker's algorithm is referred to as the "Most
                                 Liberal" granting process.
                            d> A process's claim must be less than the total number of units of the resource in the system;
                                 if not, the process is not accepted by the manager.
                            e> Since the state without the new process is safe, so is the state with the new process: just use
                                 the order we had originally & put the new process at the end.
                            f> A resource becoming unavailable can result in an unsafe state.
●●●●Advantage & disadvantage of Multiuser operating system
The advantage of having a multiuser operating system is that normally the h/w is very expensive & it lets a no. of users share this expensive resource, which means the cost is divided amongst the users. Since the resources are shared, they are more likely to be in use than sitting idle being unproductive.
The disadvantage of multiuser computer systems is that as more users access them, the performance becomes slower & slower. Another limitation is the cost of h/w, as a multiuser operating system requires a lot of disk space & memory. In addition, the actual s/w for multiuser operating systems tends to cost more than for single-user operating systems.
●●●●●●What is Remote procedure call or RPC? How RPC work? Give its limitations also
Distributed systems usually use remote procedure call (RPC) as a fundamental building block for implementing remote operations, & RPC is a powerful technique for constructing distributed client-server based applications. It is based on extending the notion of a conventional, or local, procedure call so that the called procedure need not exist in the same address space as the calling procedure. The 2 processes may be on the same system, or they may be on different systems with a n/w connecting them. By using RPC, programmers of distributed applications avoid the details of the interface with the n/w.
An RPC is analogous to a function call. Like a function call, when an RPC is made the calling arguments are passed to the remote procedure & the caller waits for a response to be returned from the remote procedure. The following figure shows the flow of activity that takes place during an RPC call b/w two networked systems. The client makes a procedure call that sends a request to the server & waits. The calling thread is blocked from processing until either a reply is received or it times out. When the request arrives, the server calls a dispatch routine that performs the requested service & sends the reply to the client. After the RPC call is completed, the client program continues. RPC specifically supports network applications. As for its limitations: RPC implementations are nominally incompatible with other RPC implementations, although some are compatible. Using a single implementation of RPC in a system will most likely result in a dependence on the RPC vendor for maintenance support & future enhancements. This could have a highly negative impact on a system's flexibility, maintainability & portability. Because there is no single standard for implementing RPC, different features may be offered by individual RPC implementations, & these features may affect the design & cost of an RPC-based application.
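As a hedged sketch of what a client stub does (the transport helpers & procedure id below are hypothetical, not any particular RPC library's API):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical transport helpers: send a request, block for the reply. */
extern void   send_request(const void *msg, size_t len);
extern size_t recv_reply(void *buf, size_t cap);

/* Client stub for a remote add(a, b): looks like a local call to the caller. */
int32_t remote_add(int32_t a, int32_t b) {
    int32_t msg[3] = { 1 /* procedure id */, a, b };  /* marshal the arguments */
    int32_t reply;
    send_request(msg, sizeof msg);                    /* request goes to the server */
    recv_reply(&reply, sizeof reply);                 /* caller blocks until the reply */
    return reply;                                     /* unmarshal the result */
}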
●●●●●●Linked Allocation & Indexed Allocation
Linked allocation The problems in contiguous allocation can be traced directly to the requirement that the
spaces are allocated contiguously and that the files that need these spaces are of different sizes. These
requirements can be avoided by using linked allocation. In linked allocation, each file is a linked list of disk
blocks. The directory contains a pointer to the first and (optionally the last) block of the file. For example, a file
of 5 blocks which starts at block 4, might continue at block 7, then block 16, block 10, and finally block 27.
Each block contains a pointer to the next block and the last block contains a NIL pointer. The value -1 may be
used for NIL to differentiate it from block 0. With linked allocation, each directory entry has a pointer to the
first disk block of the file. This pointer is initialized to nil (the end-of-list pointer value) to signify an empty file.
A write to a file removes the first free block and writes to that block. This new block is then linked to the end of
the file. To read a file, the pointers are just followed from block to block. There is no external fragmentation
with linked allocation. Any free block can be used to satisfy a request. Notice also that there is no need to
declare the size of a file when that file is created. A file can continue to grow as long as there are free blocks.
Linked allocation does have disadvantages, however. The major problem is that it is inefficient to support direct access; it is effective only for sequential-access files. To find the ith block of a file, we must start at the beginning of that file and follow the pointers until the ith block is reached. Note that each access to a pointer requires a disk read. Another severe problem is reliability: a bug in the OS or a disk hardware failure might result in pointers being lost or damaged, the effect of which could be picking up a wrong pointer and linking it into the free list or into another file.
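A brief sketch of why direct access is inefficient under linked allocation; the block layout & read_block helper are assumptions:

#include <stdint.h>

#define NIL (-1)

typedef struct {
    int32_t next;         /* disk address of the next block, or NIL */
    char    data[508];    /* assumed block payload */
} disk_block_t;

extern disk_block_t read_block(int32_t addr);   /* hypothetical: one disk read */

/* Finding the i-th block requires i disk reads from the first block. */
int32_t find_ith_block(int32_t first, int i) {
    int32_t addr = first;
    while (i-- > 0 && addr != NIL)
        addr = read_block(addr).next;   /* follow one pointer per disk read */
    return addr;
}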
Index Allocation linked allocation does not support random access of file since pointer hidden in block sequentially.
Indexed allocation solves this problem by bringing pointer together into an index block. Indexed allocation uses an index to
directly track the file block locations. A user declares the maximum file size, and the file system allocates a file header with an
array of pointers big enough to point to all file blocks.

Although indexed allocation provides fast disk location lookups for random accesses, file blocks may be scattered all over the
disk. A file system needs to provide additional mechanisms to ensure that disk blocks are grouped together for good
performance (e.g., disk defragmenter). Also, as a file increases in size, the file system needs to reallocate the index array and
copy old entries. Ideally, the index can grow incrementally.
[Figure: a file header whose pointer array points directly to data blocks 0, 1 & 2.]
Multilevel Indexed Allocation
Linux uses multilevel indexed allocation, so certain index entries point to index blocks as opposed to data blocks. The file
header, or the i_node data structure, holds 15 index pointers. The first 12 pointers point to data blocks. The 13th pointer points
to a single indirect block, which contains 1,024 additional pointers to data blocks. The 14th pointer in the file header points to
a double indirect block, which contains 1,024 pointers to single indirect blocks. The 15th pointer points to a triple indirect
block, which contains 1,024 pointers to double indirect blocks.
This skewed multilevel index tree is optimized for both small and large files. Small files can be accessed through the first 12
pointers, while large files can grow with incremental allocations of index blocks. However, accessing a data block under the
triple indirect block involves multiple disk accesses—one disk access for the triple indirect block, another for the double indirect
block, and yet another for the single indirect block before accessing the actual data block. Also, the number of pointers
provided by this data structure caps the largest file size. Finally, the boundaries between the last four pointer ranges are somewhat arbitrary: with a given block number, it is not immediately obvious which of the 15 pointers to follow.
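A small sketch of the lookup decision that last sentence describes, assuming the 12-direct-pointer, 1,024-entry-per-index-block layout given above:

#define DIRECT  12
#define FANOUT  1024L

/* Classify which of the 15 i_node pointers covers file block n. */
const char *classify_block(long n) {
    if (n < DIRECT)                    return "direct (pointers 1-12)";
    n -= DIRECT;
    if (n < FANOUT)                    return "single indirect (pointer 13)";
    n -= FANOUT;
    if (n < FANOUT * FANOUT)           return "double indirect (pointer 14)";
    n -= FANOUT * FANOUT;
    if (n < FANOUT * FANOUT * FANOUT)  return "triple indirect (pointer 15)";
    return "beyond maximum file size";
}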
●●●●●●●●External Fragmentation & Internal Fragmentation
External fragmentation: - External fragmentation is the phenomenon in which free storage becomes divided into many small pieces over time. It is a weakness of certain storage allocation algorithms, occurring when an application allocates and de-allocates ("frees") regions of storage of varying sizes, and the allocation algorithm responds by leaving the allocated and de-allocated regions interspersed. The result is that although free storage is available, it is effectively unusable because it is divided into pieces that are too small to satisfy the demands of the application. The term "external" refers to the fact that the unusable storage is outside the allocated regions: the wastage of space outside any partition of main memory is said to be external fragmentation. Fragmentation can also refer to RAM that has small, unused holes scattered throughout it; this too is called external fragmentation. With modern operating systems that use a paging scheme, a more common type of RAM fragmentation is internal fragmentation, which occurs when memory is allocated in frames and the frame size is larger than the amount of memory requested. External fragmentation thus refers to the division of free storage into small, non-contiguous pieces over a period of time, due to an inefficient memory allocation algorithm, resulting in a lack of sufficient storage for another program. Both first-fit and best-fit strategies suffer from it. Depending on the total amount of memory and the typical request size, external fragmentation may be a minor or a major problem.
Internal fragmentation: - Internal fragmentation is the space wasted inside of allocated memory blocks because of
restriction on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory; this size
difference is memory internal to a partition, but not being used. Internal fragmentation occurs when storage is allocated without
intention to use it. This space is wasted. While this seems foolish, it is often accepted in return for increased efficiency or
simplicity. The term "internal" refers to the fact that the unusable storage is inside the allocated region but is not being used: space wasted within a partition of main memory is said to be internal fragmentation. For example, in many file systems,
each file always starts at the beginning of a cluster, because this simplifies organization and makes it easier to grow files. Any
space left over between the last byte of the file and the first byte of the next cluster is a form of internal fragmentation called file
slack or slack space. Slack space is a very important source of evidence in computer forensic investigation. Similarly, a
program which allocates a single byte of data is often allocated many additional bytes for metadata and alignment. This extra
space is also internal fragmentation.
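A tiny worked example of internal fragmentation under fixed-size frames; the 4 KB frame size & 13,000-byte request are assumptions:

#include <stdio.h>

#define FRAME_SIZE 4096u   /* assumed frame size in bytes */

int main(void) {
    unsigned request = 13000;                                   /* bytes requested */
    unsigned frames  = (request + FRAME_SIZE - 1) / FRAME_SIZE; /* round up: 4 frames */
    unsigned wasted  = frames * FRAME_SIZE - request;           /* 3384 bytes of internal fragmentation */
    printf("%u frames allocated, %u bytes wasted\n", frames, wasted);
    return 0;
}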
●●●●●●●What is semaphore? Give the solution to producer consumer problem using semaphore & explain the
solution
A semaphore is a hardware or software tag variable whose value indicates the status of a common resource. Its purpose is to
lock the resource being used. A process which needs the resource will check the semaphore for determining the status of the
resource followed by the decision for proceeding. In multitasking operating systems, the activities are synchronized by using the
semaphore techniques. In computer science, producer-consumer problem (also known as the bounded-buffer problem) is
a classical example of a multi-process synchronization problem. The problem describes two processes, the producer and the
consumer, who share a common, fixed-size buffer. An inadequate solution could result in a deadlock where both processes are
waiting to be awakened. The problem can also be generalized to have multiple producers and consumers. Semaphores solve
the problem of lost wakeup calls. In the solution below we use two semaphores, fillCount and emptyCount, to solve the problem. fillCount is the number of items already in the buffer, and emptyCount is the number of available spaces in the buffer where items could be written. fillCount is incremented and emptyCount decremented when a new item has been put into the buffer. If the producer tries to decrement emptyCount while its value is zero, the producer is put to sleep. The next time an item is consumed, emptyCount is incremented and the producer wakes up. The consumer works analogously.
semaphore fillCount = 0; // items produced
semaphore emptyCount = BUFFER_SIZE; // remaining space
 procedure producer() {
   while (true) {
      item = produceItem();
      down(emptyCount);
         putItemIntoBuffer(item);
      up(fillCount);
   }
}
procedure consumer() {
   while (true) {
      down(fillCount);
         item = removeItemFromBuffer();
      up(emptyCount);
       consumeItem(item);
   }
}
The solution above works fine when there is only one producer and consumer. Unfortunately, with multiple producers or
consumers this solution contains a serious race condition that could result in two or more processes reading or writing into the
same slot at the same time. To understand how this is possible, imagine how the procedure putItemIntoBuffer() can be
implemented. It could contain two actions, one determining the next available slot and the other writing into it. If the procedure
can be executed concurrently by multiple producers, then the following scenario is possible:
       1. Two producers decrement emptyCount
       2. One of the producers determines the next empty slot in the buffer
       3. Second producer determines the next empty slot and gets the same result as the first producer
       4. Both producers write into the same slot
To overcome this problem, we need a way to make sure that only one producer is executing putItemIntoBuffer() at a time. In
other words we need a way to execute a critical section with mutual exclusion. To accomplish this we use a binary semaphore
called mutex. Since the value of a binary semaphore can be only either one or zero, only one process can be executing
between down (mutex) and up (mutex). The solution for multiple producers and consumers is shown below.
semaphore mutex = 1;
semaphore fillCount = 0;
semaphore emptyCount = BUFFER_SIZE;
procedure producer() {
   while (true) {
      item = produceItem();
      down(emptyCount);
         down(mutex);
           putItemIntoBuffer(item);
         up(mutex);
      up(fillCount);
   }
}
procedure consumer() {
   while (true) {
      down(fillCount);
         down(mutex);
           item = removeItemFromBuffer();
         up(mutex);
      up(emptyCount);
      consumeItem(item);
   }
}
The order in which different semaphores are incremented or decremented is essential: changing the order might result in a
deadlock.
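For reference, a runnable POSIX sketch of the multiple-producer/consumer solution above; the buffer size, item count & use of a pthread mutex in place of the binary semaphore are choices made for this sketch:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 8

static int buffer[BUFFER_SIZE];
static int in, out;                 /* next slot to write / read */

static sem_t fillCount, emptyCount; /* counting semaphores, as above */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int item = 0; item < 100; item++) {
        sem_wait(&emptyCount);              /* down(emptyCount) */
        pthread_mutex_lock(&mutex);         /* down(mutex) */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);       /* up(mutex) */
        sem_post(&fillCount);               /* up(fillCount) */
    }
    return arg;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&fillCount);               /* down(fillCount) */
        pthread_mutex_lock(&mutex);         /* down(mutex) */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);       /* up(mutex) */
        sem_post(&emptyCount);              /* up(emptyCount) */
        printf("consumed %d\n", item);      /* consumeItem(item) */
    }
    return arg;
}

int main(void) {
    pthread_t p, c;
    sem_init(&fillCount, 0, 0);
    sem_init(&emptyCount, 0, BUFFER_SIZE);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}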
●●●●●●●Pipes & Filters in UNIX operating system
A pipe is a unidirectional channel that may be written at one end & read at the other. A pipe is used for communication between 2 processes: the producer process writes data into one end of the pipe & the consumer process retrieves them from the other end. The system provides limited buffering for each open pipe. Control of data flow is performed by the system, which halts a producer attempting to write into a full pipe & halts a consumer attempting to read from an empty pipe.
In UNIX and Unix-like operating systems, a filter is a program that gets most of its data from its standard input (the main input
stream) and writes its main results to its standard output (the main output stream). UNIX filters are often used as elements
of pipelines. The pipe operator ("|") on a command line signifies that the main output of the command to the left is passed as
main input to the command on the right.
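A minimal C sketch of the same producer/consumer flow through a pipe, using the POSIX pipe() & fork() calls (error handling omitted for brevity):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                       /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {              /* child: the consumer */
        char buf[32];
        close(fd[1]);               /* consumer does not write */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("read: %s\n", buf); }
        return 0;
    }
    close(fd[0]);                   /* producer does not read */
    const char *msg = "hello through the pipe";
    write(fd[1], msg, strlen(msg)); /* producer writes into the pipe */
    close(fd[1]);
    return 0;
}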
●●●●●●●Deadlock avoidance algorithm or Banker’s algorithm
This is an algorithm that deals with operating system resources such as memory or processor time as though they were money, and the processes competing for them as though they were bank customers. The operating system takes on the role of the banker. The banker has a set of units to allocate to its customers. Each customer states in advance its total requirements for each resource. The banker accepts a new customer if the customer's maximum claim doesn't exceed the capital the banker has. If a loan is granted, the customer agrees to return the units within a finite time. The current loan of a customer can't exceed his maximum need. During a transaction a customer only borrows/returns one unit at a time.
This prevents circular waiting. It allows piecemeal allocation, but before any partial allocation the remaining free resource is
checked to make sure enough is free. The problem is execution time: if we have m resource types and n processes, the worst
case execution time is approximately mn(n+1)/2. For m and n both equal to 10, each resource request takes about half a
second, which is bad.
The current position is said to be safe if the banker may allow all his present customers to complete their transactions within a
finite time, otherwise it is said to be unsafe (but that doesn't necessarily mean an inevitable deadlock as there is a certain time
dependency).
A customer is characterized by his current loan and his claim where the claim is the customer's need minus the current loan to
that customer. Similarly for the banker, the total "cash" that he has is the starting capital minus the sum of all the loans. The
banker prevents deadlock by satisfying one request at a time, but only when absolutely necessary.
Consider the more general problem with several "currencies", as shown by this pseudo code:
 TYPE B = 1..number_of_customers;
     D = 1..number_of_currencies;         C = array [D] of integer;
     S = record
          transactions : array [B] of record
                              claim, loan : C;
                              completed : boolean
                             end;
          capital, cash : C;
         end;

PROCEDURE return_loan (VAR loan, cash : C);
VAR currency : D;
BEGIN FOR every currency DO
   cash[currency] := cash[currency] + loan[currency]
END;

FUNCTION completion_possible (claim, cash : C) : boolean;
VAR currency : D;
BEGIN
   (* a customer can complete only if its remaining claim fits within the cash *)
   completion_possible := true;
   FOR every currency DO
      IF claim[currency] > cash[currency] THEN completion_possible := false
END;

PROCEDURE complete_transactions (VAR state : S);
VAR customer : B;
  progress : boolean;
BEGIN
  WITH state DO
  REPEAT
   progress := false;
   FOR every customer DO
    WITH transactions[customer] DO
    IF NOT completed AND completion_possible(claim, cash)
     THEN BEGIN return_loan(loan, cash);
           completed := true;
           progress := true
        END
  UNTIL NOT progress
END;

FUNCTION all_transactions_completed (state : S) : boolean;
BEGIN WITH state DO
   all_transactions_completed := (capital = cash)
END;

FUNCTION safe (current_state : S) : boolean;
VAR state : S;
BEGIN state := current_state;
     complete_transactions(state);
     safe := all_transactions_completed(state)
END;
If all transactions can be completed, the current position is safe and it's alright to honor the request for a new loan. In practice, a
process may crash for one of several reasons, liberating its held resources and making no further claim.
If all the OS resources were controlled by such an algorithm, we would need just the variable current state and an
operation safe. Safe can be micro-coded as a single machine instruction, or included in the resident OS part as code.
●●●●●●Acyclic-Graph Directory
The acyclic graph is a natural generalization of the tree-structured directory scheme. The common subdirectory should be
shared. A shared directory or file will exist in the file system in two (or more) places at once. A tree structure prohibits the
sharing of files or directories. An acyclic graph (a graph with no cycles) allows directories to share subdirectories and files.
Here the same file or subdirectory may be in two different directories. It is important to note that a shared file (or directory) is not
the same as two copies of the file. With two copies, each programmer can view the copy rather than the original, but if one
programmer changes the file, the changes will not appear in the other's copy. With a shared file, only one actual file exists, so
any changes made by one person are immediately visible to the other. A common way to implement shared files and subdirectories, exemplified by many of the UNIX systems, is to create a new directory entry called a link; links are effectively named indirect pointers. When a reference to a file is made, we search the directory; if the directory entry is marked as a link, then the name of the real file is included in the link information, and we resolve the link by using that path name to locate the real file. Links are easily identified by their format in the directory entry. In a system where sharing is implemented by symbolic links, deletion of a shared file is somewhat easier to handle: removing a link need not affect the original file, and removing the file itself merely leaves dangling links, which can be detected and removed when they are next used.
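A small illustrative sketch of creating the two kinds of UNIX links programmatically; the path names are placeholders:

#include <unistd.h>

int main(void) {
    /* Hard link: a second directory entry for the same file. */
    link("/tmp/original.txt", "/tmp/hardlink.txt");
    /* Symbolic link: a named indirect pointer resolved at access time. */
    symlink("/tmp/original.txt", "/tmp/symlink.txt");
    return 0;
}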
Mcs 041.1

Contenu connexe

Dernier

Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesKari Kakkonen
 
Data governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationData governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationKnoldus Inc.
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...Wes McKinney
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality AssuranceInflectra
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxfnnc6jmgwh
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observabilityitnewsafrica
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 
Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...itnewsafrica
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 
Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Kaya Weers
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 

Dernier (20)

Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examples
 
Data governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationData governance with Unity Catalog Presentation
Data governance with Unity Catalog Presentation
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 

En vedette

2024 State of Marketing Report – by Hubspot
2024 State of Marketing Report – by Hubspot2024 State of Marketing Report – by Hubspot
2024 State of Marketing Report – by HubspotMarius Sescu
 
Everything You Need To Know About ChatGPT
Everything You Need To Know About ChatGPTEverything You Need To Know About ChatGPT
Everything You Need To Know About ChatGPTExpeed Software
 
Product Design Trends in 2024 | Teenage Engineerings
Product Design Trends in 2024 | Teenage EngineeringsProduct Design Trends in 2024 | Teenage Engineerings
Product Design Trends in 2024 | Teenage EngineeringsPixeldarts
 
How Race, Age and Gender Shape Attitudes Towards Mental Health
How Race, Age and Gender Shape Attitudes Towards Mental HealthHow Race, Age and Gender Shape Attitudes Towards Mental Health
How Race, Age and Gender Shape Attitudes Towards Mental HealthThinkNow
 
AI Trends in Creative Operations 2024 by Artwork Flow.pdf
AI Trends in Creative Operations 2024 by Artwork Flow.pdfAI Trends in Creative Operations 2024 by Artwork Flow.pdf
AI Trends in Creative Operations 2024 by Artwork Flow.pdfmarketingartwork
 
PEPSICO Presentation to CAGNY Conference Feb 2024
PEPSICO Presentation to CAGNY Conference Feb 2024PEPSICO Presentation to CAGNY Conference Feb 2024
PEPSICO Presentation to CAGNY Conference Feb 2024Neil Kimberley
 
Content Methodology: A Best Practices Report (Webinar)
Content Methodology: A Best Practices Report (Webinar)Content Methodology: A Best Practices Report (Webinar)
Content Methodology: A Best Practices Report (Webinar)contently
 
How to Prepare For a Successful Job Search for 2024
How to Prepare For a Successful Job Search for 2024How to Prepare For a Successful Job Search for 2024
How to Prepare For a Successful Job Search for 2024Albert Qian
 
Social Media Marketing Trends 2024 // The Global Indie Insights
Social Media Marketing Trends 2024 // The Global Indie InsightsSocial Media Marketing Trends 2024 // The Global Indie Insights
Social Media Marketing Trends 2024 // The Global Indie InsightsKurio // The Social Media Age(ncy)
 
Trends In Paid Search: Navigating The Digital Landscape In 2024
Trends In Paid Search: Navigating The Digital Landscape In 2024Trends In Paid Search: Navigating The Digital Landscape In 2024
Trends In Paid Search: Navigating The Digital Landscape In 2024Search Engine Journal
 
5 Public speaking tips from TED - Visualized summary
5 Public speaking tips from TED - Visualized summary5 Public speaking tips from TED - Visualized summary
5 Public speaking tips from TED - Visualized summarySpeakerHub
 
ChatGPT and the Future of Work - Clark Boyd
ChatGPT and the Future of Work - Clark Boyd ChatGPT and the Future of Work - Clark Boyd
ChatGPT and the Future of Work - Clark Boyd Clark Boyd
 
Getting into the tech field. what next
Getting into the tech field. what next Getting into the tech field. what next
Getting into the tech field. what next Tessa Mero
 
Google's Just Not That Into You: Understanding Core Updates & Search Intent
Google's Just Not That Into You: Understanding Core Updates & Search IntentGoogle's Just Not That Into You: Understanding Core Updates & Search Intent
Google's Just Not That Into You: Understanding Core Updates & Search IntentLily Ray
 
Time Management & Productivity - Best Practices
Time Management & Productivity -  Best PracticesTime Management & Productivity -  Best Practices
Time Management & Productivity - Best PracticesVit Horky
 
The six step guide to practical project management
The six step guide to practical project managementThe six step guide to practical project management
The six step guide to practical project managementMindGenius
 
Beginners Guide to TikTok for Search - Rachel Pearson - We are Tilt __ Bright...
Beginners Guide to TikTok for Search - Rachel Pearson - We are Tilt __ Bright...Beginners Guide to TikTok for Search - Rachel Pearson - We are Tilt __ Bright...

Mcs 041.1

She releases her chopsticks by executing the signal operation on the appropriate semaphores. Thus the shared data are semaphore chopstick[5]; where all the elements of chopstick are initialized to 1.
The structure of philosopher i is shown in the figure:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    think
    ...
} while (1);

Fig: structure of philosopher

Although this solution guarantees that no two neighbours are eating simultaneously, it nevertheless risks creating a deadlock. Suppose that all 5 philosophers become hungry simultaneously & each grabs her left chopstick. All the elements of chopstick will now be equal to 0, & when each philosopher tries to grab her right chopstick she will be delayed forever.

Another high-level synchronization construct is the monitor type. A monitor is characterized by a set of programmer-defined operators. The representation of a monitor type consists of declarations of variables whose values define the state of an instance of the type, as well as the bodies of the procedures or functions that implement operations on the type. The syntax of a monitor is shown below:

monitor monitor_name
{
    shared variable declarations

    procedure body p1 (...) { ... }
    procedure body p2 (...) { ... }
    ...
    procedure body pn (...) { ... }

    initialization code
}

Now we are in a position to describe our solution to the dining philosophers problem using a monitor.
DATA:
    condition can_eat[NUM_PHILS];
    enum states {THINKING, HUNGRY, EATING} state[NUM_PHILS];
    int index;

INITIALIZATION:
    for (index = 0; index < NUM_PHILS; index++)
        state[index] = THINKING;

MONITOR PROCEDURES:

/* request the right to pick up chopsticks and eat */
entry void pickup(int mynum)
{
    /* announce that we're hungry */
    state[mynum] = HUNGRY;
    /* if a neighbour is eating, wait for it */
    if ((state[(mynum + NUM_PHILS - 1) % NUM_PHILS] == EATING) ||
        (state[(mynum + 1) % NUM_PHILS] == EATING))
        can_eat[mynum].wait();
    /* ready to eat now */
    state[mynum] = EATING;
}

/* announce that we're finished, give others a chance */
entry void putdown(int mynum)
{
    /* announce that we're done */
    state[mynum] = THINKING;
    /* give left (lower) neighbour a chance to eat */
    if ((state[(mynum + NUM_PHILS - 1) % NUM_PHILS] == HUNGRY) &&
        (state[(mynum + NUM_PHILS - 2) % NUM_PHILS] != EATING))
        can_eat[(mynum + NUM_PHILS - 1) % NUM_PHILS].signal();
    /* give right (higher) neighbour a chance to eat */
    if ((state[(mynum + 1) % NUM_PHILS] == HUNGRY) &&
        (state[(mynum + 2) % NUM_PHILS] != EATING))
        can_eat[(mynum + 1) % NUM_PHILS].signal();
}

PHILOSOPHER:
    /* find out our id, then repeat forever */
    me = get_my_id();
    while (TRUE) {
        /* think, wait, eat, do it all again ... */
        think();
        pickup(me);
        eat();
        putdown(me);
    }
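The monitor above is pseudocode; no mainstream C compiler has a monitor construct. As a rough illustration only, here is a minimal sketch of the same pickup/putdown logic using a POSIX mutex as the monitor lock and one condition variable per philosopher (think(), eat() and the thread setup are assumed to exist elsewhere):

#include <pthread.h>

#define NUM_PHILS 5
enum pstate { THINKING, HUNGRY, EATING };

static enum pstate state[NUM_PHILS];
static pthread_mutex_t monitor = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t can_eat[NUM_PHILS];   /* initialised in monitor_init() */

#define LEFT(i)  (((i) + NUM_PHILS - 1) % NUM_PHILS)
#define RIGHT(i) (((i) + 1) % NUM_PHILS)

void monitor_init(void)
{
    for (int i = 0; i < NUM_PHILS; i++) {
        state[i] = THINKING;
        pthread_cond_init(&can_eat[i], NULL);
    }
}

void pickup(int i)
{
    pthread_mutex_lock(&monitor);            /* "enter the monitor" */
    state[i] = HUNGRY;
    /* a while loop, not an if: pthread waits can wake spuriously */
    while (state[LEFT(i)] == EATING || state[RIGHT(i)] == EATING)
        pthread_cond_wait(&can_eat[i], &monitor);
    state[i] = EATING;
    pthread_mutex_unlock(&monitor);
}

void putdown(int i)
{
    pthread_mutex_lock(&monitor);
    state[i] = THINKING;
    /* wake a hungry neighbour whose other neighbour is not eating */
    if (state[LEFT(i)] == HUNGRY && state[LEFT(LEFT(i))] != EATING)
        pthread_cond_signal(&can_eat[LEFT(i)]);
    if (state[RIGHT(i)] == HUNGRY && state[RIGHT(RIGHT(i))] != EATING)
        pthread_cond_signal(&can_eat[RIGHT(i)]);
    pthread_mutex_unlock(&monitor);
}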
●●●●●Explain the Real Time Operating System (RTOS). Give any 2 example applications suitable for RTOS. Differentiate between time sharing & RTOS
Real time operating systems are used in environments where a large number of events, mostly external to the computer system, must be accepted & processed in a short time or within certain deadlines. Two example applications are industrial control & telephone switching; real time simulation is another. A real-time OS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time. The primary issue of a real time OS is to provide quick event response times & thus meet scheduling deadlines; user convenience & resource utilization are secondary concerns to RTOS designers. It is not uncommon for a real time system to be expected to process bursts of thousands of interrupts per second without missing a single event. Such requirements usually can't be met by multiprogramming alone, & real time operating systems usually rely on some specific policies & techniques for doing their job. Explicit programmer-defined & controlled processes are commonly encountered in real time operating systems. Basically, a separate process is charged with handling a single external event. The process is activated upon occurrence of the related event, which is often signaled by an interrupt. Multitasking operation is accomplished by scheduling processes for execution independently of each other. Each process is assigned a certain level of priority that corresponds to the relative importance of the event that it services. The processor is normally allocated to the highest priority process among those that are ready to execute, & higher priority processes usually preempt execution of lower priority processes. This form of scheduling, known as priority-based preemptive scheduling, is used by a majority of real time systems.
The differences between time sharing & RTOS: time sharing is a popular representation of multi-programmed, multi-user systems, whereas an RTOS is used in environments where events, mostly external to the computer system, must be accepted & processed in a short time or within certain deadlines. The primary objective of a time sharing system is good terminal response time, whereas the primary objective of an RTOS is to provide quick event response time & thus meet the scheduling deadlines.
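As a small illustration of the priority-based preemptive rule described above (the CPU always goes to the highest-priority ready process), here is a hedged sketch; the task table and its fields are invented for the example:

#define NPROC 32

struct task { int ready; int priority; };   /* larger number = more important */
struct task tasks[NPROC];

int pick_next(void)
{
    int best = -1;
    for (int i = 0; i < NPROC; i++)
        if (tasks[i].ready &&
            (best < 0 || tasks[i].priority > tasks[best].priority))
            best = i;
    /* on any event or interrupt, preempt if best differs from the runner */
    return best;
}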
●●●●●●Explain the Windows 2000 operating system architecture
The Windows 2000 architecture diagram provides a global view of the operating system architecture, its main components, and its mechanisms. The goal is to help the user go from general to specific information in a way that is logical and based on the system structure itself, so that both novice and experienced users can become familiar with the operating system's main concepts and components.
Windows 2000 Overview
The Windows 2000 operating system constitutes the environment in which applications run. It provides the means to access processor(s) and all other hardware resources, and it allows the applications and its own components to communicate with each other. Windows 2000 has been built by combining the following models:
 Layered Model. The operating system code is grouped in modules layered on top of each other. Each module provides a set of functions used by modules of higher levels. This model is applied mainly to the operating system executive.
 Client/Server Model. The operating system is divided into several processes, called servers, each implementing a set of specific services. Each of these servers runs in user mode, waiting for client requests.
User-Mode
The software in user mode runs in a non-privileged state with limited access to system resources. Windows 2000 applications and protected subsystems run in user mode. The protected subsystems run in their own protected space and do not interfere with each other. They are divided into the following two groups:
 Environment subsystems. Services that provide Application Programming Interfaces (APIs) specific to an operating system.
 Integral subsystems. Services that provide interfaces to important operating system functions such as security and network services.
Kernel-Mode versus User-Mode
Windows 2000 divides the executing code into the following two areas or modes.
Kernel-Mode
In the privileged kernel mode, the software can access all system resources, such as the computer hardware and sensitive system data. The kernel-mode software constitutes the core of the operating system and can be grouped as follows:
 System Components. Responsible for providing system services to environment subsystems and other executive components. They perform system tasks such as input/output (I/O), file management, virtual memory management, resource management, and interprocess communication.
 Kernel. The executive core component. It performs crucial functions such as scheduling, interrupt and exception dispatching, and multiprocessor synchronization.
 Hardware Abstraction Layer (HAL). Isolates the rest of the executive from the specific hardware, making the operating system compatible with multiple processor platforms.
In summary, user-mode components execute in their own protected address spaces and have limited access to system resources, while kernel-mode components are the performance-sensitive core of the operating system.

●●●●●Cache Memory
Cache memory is extremely fast memory that is built into a computer's central processing unit (CPU), or located next to it on a separate chip. The CPU uses cache memory to store instructions that are repeatedly required to run programs, improving overall system speed. The advantage of cache memory is that the CPU does not have to use the motherboard's system bus for data transfer. Whenever data must be passed through the system bus, the data transfer speed slows to the motherboard's capability; the CPU can process data much faster by avoiding the bottleneck created by the system bus. As it happens, once most programs are open and running, they use very few resources, and when these resources are kept in cache, programs operate more quickly and efficiently. All else being equal, cache is so effective for system performance that a computer running a fast CPU with little cache can have lower benchmarks than a system running a somewhat slower CPU with more cache. Cache built into the CPU itself is referred to as Level 1 (L1) cache. Cache that resides on a separate chip next to the CPU is called Level 2 (L2) cache. Some CPUs have both L1 and L2 cache built in and designate the separate cache chip as Level 3 (L3) cache.

●●●●●●Physical vs. Virtual memory
Physical memory is hardware, in the form of RAM. Virtual memory is exactly what the word says, not real: it uses space on the hard disk to create additional memory space. A problem with virtual memory is that if you allocate an exact number of bytes and the hard disk then runs low on space, the system will produce errors; letting the system allocate the virtual memory it needs avoids this problem. Virtual memory is a memory management technique, used by multitasking operating systems, wherein non-contiguous memory is presented to software as contiguous memory. This contiguous memory is referred to as the virtual address space.
It is used commonly and provides great benefit for users at a very low cost. The computer hardware that forms the primary memory of the computer, also called RAM, is the physical memory.

●●●●●●Paging vs. Swapping
The difference between swapping and paging is that paging moves individual memory pages while swapping moves the address space of a complete process. Paging refers to paging out individual pages of memory; swapping is when an entire process is swapped out. Paging is normal operation; swapping happens when the resource load is heavy and an entire process is written out to disk. If you're swapping a lot, you might want to look into adding memory. Paging is how all processes normally run: a page fault occurs, which is normal operation, a new page of the program is paged in, and freed pages are paged out. This is not swapping. Swapping is at the process level and will only occur if the resource load is heavy and system performance is degrading. The lowest priority process is written out to swap space; sleeping processes are the highest candidates for swap-out. When they become active again, a new candidate is swapped out, if need be, and the process is swapped in.

●●●●●●Scheduling or CPU scheduling
Scheduling refers to a set of policies & mechanisms built into the operating system that govern the order in which the work of the computer system is done, selecting the next job to be admitted into the system & the next process to run. The primary objective of scheduling is to optimize system performance in accordance with the criteria deemed most important by the system designers; the main objectives are to increase CPU utilization & throughput, where throughput is the amount of work accomplished in a given time interval. CPU scheduling is the basis of any operating system that supports multiprogramming. The objectives of scheduling are:
(i) Scheduling should attempt to service the largest possible number of processes per unit time.
(ii) Scheduling should minimize wasted resources & overhead.
(iii) The scheduling mechanism should keep the resources of the system busy; processes that will use under-utilized resources should be favored.
(iv) It should be fair: all processes are treated the same, & no process suffers indefinite postponement.
(v) In environments in which processes are given priorities, scheduling should favor the higher priority processes.
●●●●●●●Differences between network operating system & distributed operating system
A distributed operating system manages a collection of loosely coupled systems interconnected by a communication network. From the point of view of a specific processor in a distributed system, the rest of the processors & their respective resources are remote, whereas its own resources are local. A network operating system, by contrast, provides an environment in which users who are aware of the multiplicity of machines can access remote resources by either logging into the appropriate remote machine or transferring data from the remote machine to their own machine.

●●●●●●Thread
Threads represent a software approach to improving the performance of operating systems by reducing the overhead of process switching. A thread is a lightweight process with a reduced state; a group of related threads is equivalent to a classical process. Each thread belongs to exactly one process. Processes are static & only threads can be scheduled for execution. Threads can communicate efficiently by means of commonly accessible shared memory within the enclosing process. Threads have been used successfully in network servers.

●●●●●●Authentication
The primary goal of authentication is to allow access to legitimate system users & to deny access to unauthorized parties. The 2 primary measures of authentication effectiveness are:
(i) The false acceptance ratio, i.e. the percentage of illegitimate users erroneously admitted.
(ii) The false rejection ratio, i.e. the percentage of legitimate users who are denied access due to failure of the authentication mechanism.
Obviously the objective is to minimize both the false acceptance & the false rejection ratio. One-way authentication is usually based on:
1> Possession of a secret.
2> Possession of an artifact.
3> Unique physiological or behavioral characteristics of the user.
The 2 types of authentication are 1) mutual authentication & 2) extensible authentication. Mutual authentication, or two-way authentication, refers to two parties authenticating each other suitably. In technology terms, it refers to a client or user authenticating themselves to a server and that server authenticating itself to the user in such a way that both parties are assured of the other's identity. When describing online authentication processes, mutual authentication is often referred to as website-to-user authentication, or site-to-user authentication. Mutual SSL provides the same things as SSL, with the addition of authentication and non-repudiation of the client authentication, using digital signatures. However, due to issues with complexity, cost, logistics, and effectiveness, most web applications are designed so they do not require client-side certificates. This creates an opening for a man-in-the-middle attack, in particular for online banking. Extensible Authentication Protocol, or EAP, is an authentication framework frequently used in wireless networks and point-to-point connections. EAP provides for the transport and usage of keying material and parameters generated by EAP methods. There are many methods defined by RFCs, and a number of vendor-specific methods and new proposals exist.
EAP is not a wire protocol; instead it only defines message formats. Each protocol that uses EAP defines a way to encapsulate EAP messages within that protocol's messages.

●●●●●●Swapping
Removing suspended or preempted processes from memory & subsequently bringing them back is called swapping. Swapping has traditionally been used to implement multiprogramming in systems with restrictive memory capacity or with little hardware support, improving processor utilization in partitioned memory environments by increasing the ratio of ready to resident processes. Swapping is usually employed in memory management systems with contiguous allocation, such as fixed & dynamically partitioned memory & segmentation. The swapper is an operating system process whose major responsibilities include:
a) Selection of processes to swap out.
b) Selection of processes to swap in.
c) Allocation & management of swap space.
The swapper usually selects a victim among the suspended processes that occupy partitions large enough to satisfy the needs of the incoming process. Among the qualifying processes, the more likely candidates for swapping are the ones with low priority & those waiting for slow events, which therefore have a higher probability of being suspended for a comparatively long time. Another important consideration is the time spent in memory by the potential victim & whether it ran while in memory; otherwise there is a danger of thrashing caused by repeatedly removing processes from memory almost immediately after loading them. A sketch of this selection policy appears after the list below. The benefits of swapping are:
1) Allows a higher degree of multiprogramming.
2) Better memory utilization.
3) Less wastage of CPU time on compaction.
4) Can easily be applied to priority-based scheduling algorithms to improve performance.
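A rough sketch of the victim-selection policy just described, with invented structure fields (suspension status, priority, resident time, partition size); this illustrates the stated criteria rather than any particular swapper:

struct swap_cand { int suspended; int priority; long resident_time; long size; };

int pick_victim(struct swap_cand *p, int n, long needed)
{
    int victim = -1;
    for (int i = 0; i < n; i++) {
        if (!p[i].suspended || p[i].size < needed)
            continue;                                   /* must free enough room */
        if (victim < 0 ||
            p[i].priority < p[victim].priority ||       /* prefer low priority   */
            (p[i].priority == p[victim].priority &&
             p[i].resident_time > p[victim].resident_time)) /* then long-resident,
                                                               to avoid thrashing */
            victim = i;
    }
    return victim;   /* -1 if nothing qualifies */
}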
●●●●●●Thrashing
If the number of frames allocated to a low-priority process falls below the minimum number required by the computer architecture, we must suspend that process's execution & then page out its remaining pages, freeing all its allocated frames. This provision introduces a swap-in, swap-out level of intermediate CPU scheduling. Consider any process that does not have enough frames: although it is technically possible to reduce the number of allocated frames to the minimum, there is some larger number of pages in active use. If the process does not have this number of frames, it will quickly page-fault. It must then replace some page, but since all its pages are in active use, it must replace a page that will be needed again right away. Consequently it quickly faults again & again & again. This high paging activity is called thrashing.

●●●●●●●Four necessary conditions for Deadlock
a) Mutual Exclusion: the resources involved are non-sharable. At least one resource must be held in a non-sharable mode, i.e. only one process at a time claims exclusive control of the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
b) Hold & Wait condition: a requesting process already holds resources while waiting for the requested resources. A process holding a resource allocated to it waits for an additional resource that is currently being held by another process.
c) No preemption condition: resources already allocated to a process can't be preempted. Resources can't be removed forcibly; they are released only voluntarily by the process holding them.
d) Circular wait condition: the processes in the system form a circular list or chain where each process in the list is waiting for a resource held by the next process in the list.
We emphasize that all four conditions must hold for a deadlock to occur. The circular wait condition implies the hold & wait condition, so the four conditions are not completely independent.

●●●●●●●Lattice model
The lattice of security levels is widely used to describe the structure of military security levels. A lattice is a finite set together with a partial ordering on its elements such that for every pair of elements there is a least upper bound & a greatest lower bound. The simple linear ordering of sensitivity levels has already been defined. Compartment sets can be partially ordered by the subset relation: one compartment set is greater than or equal to another if the latter is a subset of the former. Classifications, which include a sensitivity level & a compartment set, can then be partially ordered as follows: for any sensitivity levels a, b & compartment sets c, d, the relation (a, c) ≥ (b, d) holds if & only if a ≥ b & c ⊇ d. That each pair of classifications has a greatest lower bound & a least upper bound follows from these definitions & from the facts that the classification "Unclassified, no compartments" is a global lower bound & that we can postulate a classification "Top Secret, all compartments" as a global upper bound. Because the lattice model matches the military classification structure so closely, it is widely used.
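In symbols (a restatement of the ordering above; the subset sign was lost in extraction):

\[
(a, c) \ge (b, d) \iff a \ge b \;\wedge\; c \supseteq d
\]
\[
\mathrm{lub}\big((a,c),(b,d)\big) = \big(\max(a,b),\; c \cup d\big), \qquad
\mathrm{glb}\big((a,c),(b,d)\big) = \big(\min(a,b),\; c \cap d\big)
\]

with (Unclassified, ∅) as the global minimum and (Top Secret, all compartments) as the global maximum.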
●●●●●●●Three dimensional Hypercube systems
Various cube-type multiprocessor topologies address the scalability & cost issues by providing interconnections whose complexity grows logarithmically with the increasing number of nodes. The figure illustrates this. An n-degree hypercube has 2^n nodes, so the three-degree hypercube has 2^3 = 8 nodes. Nodes are arranged in a 3-dimensional cube, each node connected to 3 other nodes. Each node is assigned a unique number or address lying between 0 and 2^n − 1 = 7, i.e. 000, 001, 010, 011, 100, 101, 110, 111. Adjacent nodes differ in exactly 1 bit (e.g. 001 & 011), & the farthest node is at most n = 3 hops away (e.g. 011 & 100 differ in all 3 bits). Hypercubes provide a good basis for scalable systems, since their complexity grows logarithmically with the number of nodes. They provide bidirectional communication between two processors. They are normally used in loosely coupled systems, because a transfer of data between two processors may pass through several intermediate processors. To increase I/O bandwidth, I/O devices can be attached to every node.
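A tiny sketch of hypercube addressing: in an n-cube, node addresses are n-bit numbers and two nodes are adjacent if and only if their addresses differ in exactly one bit, so a node's neighbours are found by XOR-ing in each single-bit mask:

#include <stdio.h>

void print_neighbours(unsigned node, unsigned n)
{
    for (unsigned bit = 0; bit < n; bit++)
        printf("%u <-> %u\n", node, node ^ (1u << bit));  /* flip one bit */
}

/* For n = 3 and node 5 (101), this prints links to 4 (100), 7 (111), 1 (001). */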
●●●●●●●Access Matrix model
In computer science, an Access Control Matrix or Access Matrix is an abstract, formal security model of the protection state in computer systems that characterizes the rights of each subject with respect to every object in the system. The access matrix model is the policy for user authentication, and it has several implementations such as access control lists (ACLs) and capabilities. It is used to describe which users have access to what objects. The access matrix model consists of four major parts:
 A list of objects
 A list of subjects
 A function T which returns an object's type
 The matrix itself, with the objects making the columns and the subjects making the rows
In the cell where a subject and an object meet lie the rights the subject has on that object. Some example access rights are read, write, execute, list and delete. An access matrix has several standard operations associated with it:
 Entry of a right into a specified cell
 Removal of a right from a specified cell
 Creation of a subject
 Creation of an object
 Removal of a subject
 Removal of an object
A sketch of such a matrix with rights encoded as bit flags follows.
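This is a minimal illustration of the matrix and the first two standard operations; the sizes and names are invented for the example:

#define NSUBJ 4
#define NOBJ  8

enum { R_READ = 1, R_WRITE = 2, R_EXECUTE = 4, R_LIST = 8, R_DELETE = 16 };

/* matrix[s][o]: the rights subject s has on object o, as a bitmask */
unsigned matrix[NSUBJ][NOBJ];

int  allowed(int s, int o, unsigned right) { return (matrix[s][o] & right) != 0; }
void grant(int s, int o, unsigned right)   { matrix[s][o] |= right;  }  /* entry of a right   */
void revoke(int s, int o, unsigned right)  { matrix[s][o] &= ~right; }  /* removal of a right */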
●●●●●●Differences between security policy & security model
The security policy outlines several high-level points: how the data is accessed, the amount of security required, & what the steps are when these requirements are not met. The security model is more in-depth & supports the security policy. Security models are an important concept in the design of any security system; systems may share the same security policy yet apply different security models.

●●●●●●Take-grant model
The take-grant protection model is a formal model used in the field of computer security to establish or disprove the safety of a given computer system that follows specific rules. It shows that for specific systems the question of safety is decidable in linear time, although it is undecidable in general. The model represents a system as a directed graph, where vertices are either subjects or objects. The edges between them are labeled, and the label indicates the rights that the source of the edge has over the destination. Two rights occur in every instance of the model: take and grant. They play a special role in the graph rewriting rules describing admissible changes of the graph. There are a total of four such rules:
 take rule: allows a subject to take rights of another object (add an edge originating at the subject)
 grant rule: allows a subject to grant its own rights to another object (add an edge terminating at the subject)
 create rule: allows a subject to create new objects (add a vertex and an edge from the subject to the new vertex)
 remove rule: allows a subject to remove rights it has over another object (remove an edge originating at the subject)
Preconditions for take(o, p, r):
 Subject s has the right take for o.
 Object o has the right r on p.
Preconditions for grant(o, p, r):
 Subject s has the right grant for o.
 s has the right r on p.
Using the rules of the take-grant protection model, one can reproduce the states into which a system can change with respect to the distribution of rights, and can therefore show whether rights can leak with respect to a given safety model.

●●●●●●●Bakery algorithm
In computer science, it is common for multiple threads to simultaneously access the same resources. Data corruption can occur if two or more threads try to write into the same memory location, or if one thread reads a memory location before another has finished writing into it. Lamport's bakery algorithm is one of many mutual exclusion algorithms designed to prevent concurrent threads from entering critical sections of code concurrently, eliminating the risk of data corruption. Lamport envisioned a bakery with a numbering machine at its entrance, so that each customer is given a unique number. Numbers increase by one as customers enter the store. A global counter displays the number of the customer currently being served; all other customers must wait in a queue until the baker finishes serving the current customer and the next number is displayed. When the customer is done shopping and has disposed of his or her number, the clerk increments the number, allowing the next customer to be served. That customer must draw another number from the numbering machine in order to shop again. According to the analogy, the "customers" are threads, identified by the letter i, obtained from a global variable. Due to the limitations of computer architecture, some parts of Lamport's analogy need slight modification. It is possible that more than one thread will get the same number when they request it; this cannot be avoided. Therefore, it is assumed that the thread identifier i is also a priority identifier: a lower value of i means a higher priority, and threads with higher priority enter the critical section first. The critical section is the part of code that requires exclusive access to resources and may only be executed by one thread at a time; in the bakery analogy, it is when the customer trades with the baker and others must wait. When a thread wants to enter the critical section, it has to check whether it is its turn to do so. It should check the numbers of every other thread to make sure that it has the smallest one; in case another thread has the same number, the thread with the smallest i enters the critical section first. In pseudocode this comparison is written in the form:
(a, b) < (c, d)   which is equivalent to   (a < c) or ((a == c) and (b < d))
Once the thread ends its critical job, it gets rid of its number and enters the non-critical section. The non-critical section is the part of code that doesn't need exclusive access; it represents some thread-specific computation that doesn't interfere with other threads' resources and execution. This part is analogous to actions that occur after shopping, such as putting change back into the wallet.
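A sketch of the algorithm in C. On real hardware the shared arrays would need atomic accesses or memory fences; this plain version shows only the logic, with N and the busy-wait loops following the description above:

#include <stdbool.h>
#define N 8   /* number of threads, illustrative */

volatile bool choosing[N];   /* thread j is picking a ticket */
volatile int  number[N];     /* 0 = no ticket held           */

void bakery_lock(int i)
{
    choosing[i] = true;
    int max = 0;                       /* take a ticket: 1 + max of all numbers */
    for (int j = 0; j < N; j++)
        if (number[j] > max) max = number[j];
    number[i] = max + 1;
    choosing[i] = false;

    for (int j = 0; j < N; j++) {
        while (choosing[j])
            ;                          /* wait while j is still picking */
        /* wait while j holds a smaller ticket, or the same ticket with smaller id */
        while (number[j] != 0 &&
               (number[j] < number[i] ||
                (number[j] == number[i] && j < i)))
            ;
    }
}

void bakery_unlock(int i)
{
    number[i] = 0;                     /* dispose of the ticket */
}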
●●●●●●Mutual Exclusion
Mutual exclusion algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of computer code called critical sections. A critical section is a piece of code in which a process or thread accesses a common resource. The critical section by itself is not a mechanism or algorithm for mutual exclusion: a program, process, or thread can contain a critical section without any mechanism or algorithm that implements mutual exclusion. Examples of such resources are fine-grained flags, counters or queues, used to communicate between code that runs concurrently, such as an application and its interrupt handlers. The synchronization of access to those resources is an acute problem because a thread can be stopped or started at any time. To illustrate: suppose a section of code is altering a piece of data over several program steps when another thread, perhaps triggered by some unpredictable event, starts executing. If this second thread reads from the same piece of data, the data, which is in the process of being overwritten, is in an inconsistent and unpredictable state. If the second thread tries overwriting that data, the ensuing state will probably be unrecoverable. Shared data accessed by critical sections of code must therefore be protected, so that other processes which read from or write to the chunk of data are excluded from running. A mutex is a common name for a program object that negotiates mutual exclusion among threads, also called a lock. On a uniprocessor system, a common way to achieve mutual exclusion inside kernels is to disable interrupts for the smallest possible number of instructions that will prevent corruption of the shared data structure, the critical section; this prevents interrupt code from running in the critical section and also protects against interrupt-based process changes. In a computer in which several processors share memory, an indivisible test-and-set of a flag can be used in a tight loop to wait until the other processor clears the flag; the test-and-set performs both operations without releasing the memory bus to another processor.

●●●●●●Test & Set instruction
In computer science, the test-and-set instruction is an instruction used to write to a memory location and return its old value as a single atomic (i.e. non-interruptible) operation. If multiple processes may access the same memory and a process is currently performing a test-and-set, no other process may begin another test-and-set until the first process is done. CPUs may use test-and-set instructions offered by other electronic components, such as dual-port RAM; CPUs may also offer a test-and-set instruction themselves. A lock can be built using an atomic test-and-set instruction as follows:

function Lock(boolean *lock) {
    while (test_and_set(lock) == 1)
        ;   /* spin until the lock is acquired */
}

The test-and-set operation can solve the wait-free consensus problem for no more than two concurrent processes. However, more than two decades before Herlihy's proof, IBM had replaced test-and-set with compare-and-swap, which is a more general solution to this problem. Ultimately, IBM would release a processor family with 12 processors, whereas Amdahl would release a processor family with the architectural maximum of 16 processors. The test-and-set instruction, when used with Boolean values, behaves like the following function.
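A sketch of that behaviour, first as plain (non-atomic) C to show the semantics, then as a real spinlock using C11's atomic_flag, which the hardware does make atomic:

#include <stdatomic.h>
#include <stdbool.h>

/* What test-and-set does, written as ordinary C (NOT actually atomic): */
bool test_and_set(bool *target)
{
    bool old = *target;   /* return the old value ...           */
    *target = true;       /* ... and set the location to 1/true */
    return old;
}

/* The same idea with hardware-backed atomicity: */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void lock(void)
{
    while (atomic_flag_test_and_set(&lock_flag))
        ;   /* spin: old value was 1, someone else holds the lock */
}

void unlock(void)
{
    atomic_flag_clear(&lock_flag);
}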
Crucially, the entire function is executed atomically: no process can interrupt the function mid-execution and hence see a state that exists only during its execution. The plain code serves only to explain the behavior of test-and-set; atomicity requires explicit hardware support and hence can't be implemented as a simple function.

●●●●●●●Synchronization mechanism
Inter-process synchronization & communication are necessary for designing concurrent software that is correct & reliable. Parallel program execution & read/write sharing of data place heavy demands on synchronization; in distributed settings, synchronization & communication are handled via messages. Once the necessary data are transmitted to individual processors for processing, there is usually little need for processes to synchronize while operating on the data. Multiprocessing results in speed improvements for many applications, but it also intensifies (increases) the need for synchronization. Properly designed uniprocessor instructions such as test-and-set & compare-and-swap, implemented using the indivisible read-modify-write cycle, can be used as a foundation for inter-process synchronization in multiprocessor systems.

●●●●●Conditional Critical Region
The critical region construct can be effectively used to solve the critical section problem. It cannot, however, be used to solve some general synchronization problems; for this reason the conditional critical region was introduced. The shared variable is declared in the same way, the region construct is again used for controlling access, & the only new keyword is await. It is illustrated in the following sequence of code:

var v: shared T;
...
region v do
begin
    ...
    await condition;
    ...
end;

Implementation of this construct allows a process waiting on a condition within a critical region to be suspended in a special queue, pending satisfaction of the related condition. Unlike a semaphore-based critical section, a process waiting on a condition does not prevent others from using the resource, & when the condition is eventually satisfied the suspended process is awakened. Since it is cumbersome to keep track of dynamic changes of the numerous possible individual conditions, the common implementation of the conditional critical region assumes that each completed process may have modified the system state in a way that has caused some of the waited-on conditions to become satisfied. Whenever a process leaves the critical section, all conditions on which earlier processes were suspended are evaluated &, if warranted, one of those processes is awakened. When that process leaves, the next waiting process whose waiting condition is satisfied is activated, until no more suspended processes are left or none of them has its condition satisfied.
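For illustration, the await construct can be emulated with a mutex and a condition variable; cond_holds() is a hypothetical predicate over the shared variable v, and the broadcast on exit mirrors the "re-evaluate all waited-on conditions" rule above:

#include <pthread.h>

extern int cond_holds(void);   /* hypothetical: the awaited condition on v */

pthread_mutex_t region_lock   = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t region_changed = PTHREAD_COND_INITIALIZER;

void region_await_example(void)
{
    pthread_mutex_lock(&region_lock);          /* region v do ...              */
    while (!cond_holds())                      /* await C: release and suspend */
        pthread_cond_wait(&region_changed, &region_lock);
    /* ... body runs with the condition satisfied ... */
    pthread_cond_broadcast(&region_changed);   /* let waiters re-check on exit */
    pthread_mutex_unlock(&region_lock);
}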
●●●●●Explain the 5 design goals of Distributed shared memory
In order to design a good distributed system there are many design goals; 5 of them are explained below.
1) Concurrency: A server must handle client requests at the same time. Distributed systems are naturally concurrent: there are multiple workstations running programs independently & at the same time. Concurrency is important because any distributed service that is not concurrent would become a bottleneck that would serialize the actions of its clients & thus reduce the natural concurrency of the system.
2) Scalability: The capability of a system to adapt to increased load is its scalability. Systems have bounded resources & can become completely saturated under increased load. A scalable system reacts more gracefully to increased load than a non-scalable one: its resources reach a saturated state later. Even a perfect design can't accommodate an ever-growing load, but adding new resources might solve the problem, so a scalable system should have the potential to grow without problems. In short, a scalable design should withstand high service load, accommodate growth of the user community & enable simple integration of added resources.
3) Openness: Two types of openness are important: non-proprietary interfaces & extensibility. Public protocols are important because they make it possible for software from many manufacturers to talk to each other. A system is extensible if it permits the customization needed to meet unanticipated requirements. Extensibility is important because it aids scalability & allows a system to survive over time as the demands on it & the ways it is used change.
4) Fault Tolerance: Many clients are affected by the failure of a distributed service, unlike a non-distributed system in which a failure affects only a single node. A distributed service depends on many components (network, switches, etc.), all of which must work; furthermore a client will often depend on multiple distributed services in order to function properly. A client that depends on N components that each fail with probability p will fail with probability roughly N·p; the exact value is 1 − (1 − p)^N, worked through below.
5) Transparency: The final goal is transparency. We often use the term single system image to refer to this goal of making the distributed system look to programs like a tightly coupled system; this is really what distributed operating system software is all about. There are 8 types of transparency:
a) Access transparency enables local & remote resources to be accessed using identical operations.
b) Location transparency enables resources to be accessed without knowledge of their location.
c) Concurrency transparency enables several processes to operate concurrently using shared resources without interference between them.
d) Replication transparency enables multiple instances of a resource to be used to increase reliability & performance, without knowledge of the replicas by users.
e) Failure transparency enables the concealment of faults, allowing users & application programs to complete their tasks.
f) Mobility transparency allows the movement of resources & clients within a system without affecting the operation of users or programs.
g) Performance transparency allows the system to be reconfigured to improve performance.
h) Scaling transparency allows the system & applications to expand in scale without change to the system structure or the application algorithms.
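Checking the fault-tolerance arithmetic above with one worked example:

\[
P(\text{client fails}) = 1 - (1 - p)^{N} \approx Np \quad \text{for small } p .
\]

For N = 10 components each failing with probability p = 0.01, the exact value is 1 − 0.99^10 ≈ 0.0956, close to the N·p = 0.1 approximation.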
●●●●●Working Set
Peter Denning (1968) defines "the working set of information W(t, τ) of a process at time t to be the collection of information referenced by the process during the process time interval (t − τ, t)". Typically the units of information in question are memory pages. This is suggested to be an approximation of the set of pages that the process will access in the future (say, during the next τ time units), and more specifically an indication of which pages ought to be kept in main memory to allow most progress to be made in the execution of that process. The effect of the choice of which pages to keep in main memory (as distinct from being paged out to auxiliary storage) is important: if too many pages of a process are kept in main memory, then fewer other processes can be ready at any one time; if too few pages of a process are kept in main memory, then the page fault frequency is greatly increased and the number of active (non-suspended) processes currently executing in the system approaches zero. The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (often approximated by the most recently used pages) can be in RAM. The model is an all-or-nothing model: if the number of pages the process needs to use increases and there is no room in RAM, the process is swapped out of memory to free the memory for other processes to use. Often a heavily loaded computer has so many processes queued up that, if all the processes were allowed to run for one scheduling time slice, they would refer to more pages than there is RAM, causing the computer to "thrash".
By swapping some processes from memory, the result is that processes, even those temporarily removed from memory, finish much sooner than they would if the computer attempted to run them all at once. The processes also finish much sooner than they would if the computer ran only one process at a time to completion, since this allows other processes to run and make progress during times when one process is waiting on the hard drive or some other global resource. In other words, the working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible; thus it optimizes CPU utilization and throughput. The main hurdle in implementing the working set model is keeping track of the working set. The working set window is a moving window: at each memory reference a new reference appears at one end and the oldest reference drops off the other end. A page is in the working set if it is referenced within the working set window. To avoid the overhead of keeping a list of the last k referenced pages, the working set is often implemented by keeping track of the time t of the last reference, and considering the working set to be all pages referenced within a certain period of time, as sketched below. The working set isn't a page replacement algorithm in itself, but page replacement algorithms can be designed to remove only pages that aren't in the working set for a particular process. One example is a modified version of the clock algorithm called WS-Clock.
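A sketch of the last-reference-time implementation just described; the page count and the notion of "virtual time" are illustrative:

#define NPAGES 1024

static unsigned long last_ref[NPAGES];   /* virtual time of last reference */

void on_reference(int page, unsigned long now)
{
    last_ref[page] = now;                /* updated on every memory reference */
}

int in_working_set(int page, unsigned long now, unsigned long tau)
{
    return now - last_ref[page] <= tau;  /* referenced inside the window? */
}

/* A replacement policy may then evict only pages for which
   in_working_set() returns 0 for their owning process.      */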
●●●●●●●Bell & LaPadula Model
The Bell-LaPadula Model (abbreviated BLP) is a state machine model used for enforcing access control in government and military applications. The model is a formal state transition model of computer security policy that describes a set of access control rules which use security labels on objects and clearances for subjects. Security labels range from the most sensitive (e.g. "Top Secret") down to the least sensitive (e.g. "Unclassified" or "Public"). The Bell-LaPadula model is an example of a model where there is no clear distinction between protection and security.
Features of the Bell-LaPadula model: the model focuses on data confidentiality and controlled access to classified information, in contrast to the Biba integrity model, which describes rules for the protection of data integrity. In this formal model, the entities in an information system are divided into subjects and objects. The notion of a "secure state" is defined, and it is proven that each state transition preserves security by moving from secure state to secure state, thereby inductively proving that the system satisfies the security objectives of the model. The model is built on the concept of a state machine with a set of allowable states in a computer system; the transition from one state to another is defined by transition functions. A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments making up the security level). The clearance/classification scheme is expressed in terms of a lattice. The model defines two mandatory access control (MAC) rules and one discretionary access control (DAC) rule, with three security properties:
1. The Simple Security Property: a subject at a given security level may not read an object at a higher security level (no read-up).
2. The ★-property (read "star"-property): a subject at a given security level must not write to any object at a lower security level (no write-down). The ★-property is also known as the confinement property.
3. The Discretionary Security Property: use of an access matrix to specify the discretionary access control.
The transfer of information from a high-sensitivity document to a lower-sensitivity document may happen in the Bell-LaPadula model via the concept of trusted subjects. Trusted subjects are not restricted by the ★-property; untrusted subjects are. Trusted subjects must be shown to be trustworthy with regard to the security policy. This security model is directed toward access control and is characterized by the phrase "no read up, no write down". Compare the Biba model, the Clark-Wilson model and the Chinese wall model. With Bell-LaPadula, users can create content only at or above their own security level (i.e. secret researchers can create secret or top-secret files but may not create public files: no write-down). Conversely, users can view content only at or below their own security level (i.e. secret researchers can view public or secret files, but may not view top-secret files: no read-up). The Bell-LaPadula model explicitly defined its scope; it did not treat the following extensively:
 Covert channels. Passing information via pre-arranged actions was described only briefly.
 Networks of systems. Later modeling work did address this topic.
 Policies outside multilevel security. Work in the early 1990s showed that MLS is one version of Boolean policies, as are all other published policies.
Strong ★ property: the Strong ★ Property is an alternative to the ★-property, in which subjects may write only to objects with a matching security level. Thus, the write-up operation permitted by the usual ★-property is not present, only a write-to-same-level operation. The Strong ★ Property is usually discussed in the context of multilevel database management systems and is motivated by integrity concerns. It was anticipated in the Biba model, where it was shown that strong integrity in combination with the Bell-LaPadula model results in reading and writing at a single level.
Tranquility principle: the tranquility principle of the Bell-LaPadula model states that the classification of a subject or object does not change while it is being referenced. There are two forms of the tranquility principle: the "principle of strong tranquility" states that security levels do not change during the normal operation of the system; the "principle of weak tranquility" states that security levels may never change in such a way as to violate a defined security policy. Weak tranquility is desirable as it allows systems to observe the principle of least privilege: processes start with a low clearance level regardless of their owners' clearance, and progressively accumulate higher clearance levels as actions require them.

●●●●●●Briefly describe the multiprocessor operating system
A multiprocessor operating system manages all the available resources & scheduling functionality to form an abstraction that facilitates program execution & interaction with users. The processor is one of the most important & basic types of resources that need to be managed; for effective use of multiprocessors, processor scheduling is necessary.
Processor scheduling undertakes the following tasks:
1> Allocation of processors among applications, in such a manner as to be consistent with the system design objectives. This affects system throughput; throughput can be improved by co-scheduling several applications together, thus making fewer processors available to each.
2> Ensuring efficient use of the processors allocated to an application. This primarily affects the speedup of the system.
The second basic type of resource that needs to be managed is memory. In multiprocessor systems, memory management is highly dependent on the architecture & interconnection scheme.
a) In message-passing multiprocessor operating systems, safe & efficient access to shared memory may be simulated by means of a message passing mechanism.
b) In shared memory systems, the operating system should provide a flexible memory model that facilitates safe & efficient access to shared data structures & synchronization data.
A multiprocessor operating system should provide a hardware-independent, unified model of shared memory to facilitate porting of applications between different multiprocessor environments.

●●●●●●Fetch & Add instruction
The fetch & add instruction is a multiple-operation memory access instruction that atomically adds a constant to a memory location & returns the previous contents of the memory location. The instruction is defined as follows:

function fetch_and_add(m: integer; c: integer): integer;
var temp: integer;
begin
    temp := m;
    m := m + c;
    return(temp);
end;

The fetch & add instruction is powerful: it allows the implementation of the P & V operations on a general semaphore s in the following manner (V(s) is simply fetch_and_add(s, 1)):

p(s): while (fetch_and_add(s, -1) < 1) do
      begin
          fetch_and_add(s, 1);          { undo the decrement }
          while (s < 1) do nothing;     { wait until a unit is available }
      end;
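For comparison, a sketch of the same P/V scheme in C11, whose atomic_fetch_add returns the previous value exactly as the pseudo-function above does; the busy wait mirrors the pseudocode:

#include <stdatomic.h>

atomic_int s;   /* general semaphore, initialised to the number of free units */

void P(void)
{
    while (atomic_fetch_add(&s, -1) <= 0) {   /* previous value < 1: no unit */
        atomic_fetch_add(&s, 1);              /* undo the decrement ...      */
        while (atomic_load(&s) <= 0)
            ;                                 /* ... and spin until V() runs */
    }
}

void V(void)
{
    atomic_fetch_add(&s, 1);                  /* release one unit */
}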
●●●●●●Briefly describe the structure of the UNIX operating system
UNIX is a layered operating system. The innermost layer is the hardware, which provides the services for the OS. The following are the components of the UNIX OS:
1) The Kernel: the operating system, referred to in UNIX as the kernel, interacts directly with the hardware & provides services to the user programs. User programs don't need to know how to interact with the hardware; it is up to the kernel to provide the desired service, and user programs interact with the kernel through system calls. Such services include accessing a file (open, read, write, link or execute a file), starting or updating accounting records, changing ownership of a file or directory, changing to a new directory, creating, suspending or killing a process, enabling access to hardware devices, & setting limits on system resources.
2) The Shell: the shell is often called a command line interpreter, since it presents a single prompt to the user. The user types a command; the shell invokes that command & then presents the prompt again when the command has finished. This is done line by line, hence the term "command line". The shell program provides a method for adapting to each user's setup requirements & storing this information for reuse. The user interacts with /bin/sh, which interprets each command typed; internal commands are handled within the shell & external commands are executed as programs such as ls, grep, sort, ps etc.
3) System Utilities: the system utilities are intended to be controllable tools that each do a single task exceptionally well. Users can solve problems by integrating these tools instead of writing a large monolithic application.
4) Application programs: some application programs include the Emacs editor, GCC, G++, Xfig and LaTeX. UNIX works very differently from systems in which a kernel task examines the requests of a process: the process itself enters kernel space. This means that rather than the process waiting outside the kernel, it enters the kernel itself. When a process invokes a system call, the hardware is switched to the kernel settings & the process executes from the kernel image.

●●●●●●●Explain the Resource Allocation graph for multiple instances with an example & also explain the recovery from Deadlock
Deadlock can be described more precisely in terms of a directed graph called a system resource allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is divided into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all active processes in the system, & R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system. A directed edge from process Pi to resource Rj is denoted Pi → Rj; it signifies that process Pi has requested an instance of resource type Rj & is currently waiting for that resource. A directed edge from resource type Rj to process Pi is denoted Rj → Pi; it signifies that an instance of resource type Rj has been allocated to process Pi. A directed edge Rj → Pi is called an assignment edge & Pi → Rj is called a request edge. Pictorially we represent each process Pi as a circle & each resource type Rj as a square. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the square. A request edge points only to the square Rj, whereas an assignment edge must also designate one of the dots in the square. When process Pi requests an instance of resource type Rj, a request edge is inserted in the graph. When this request can be fulfilled, the request edge is instantaneously transformed into an assignment edge.
When the process no longer needs access to the resource, it releases the resource, & as a result the assignment edge is deleted. The graph in the diagram above depicts the following situation:
(a) The sets P, R, E, where
i) P = {P1, P2, P3}
ii) R = {R1, R2, R3, R4}
iii) E = {P1→R1, P2→R3, R1→P2, R2→P2, R2→P1, R3→P3}
(b) Resource instances:
i) One instance of resource type R1;
ii) Two instances of resource type R2;
iii) One instance of resource type R3;
iv) Three instances of resource type R4.
(c) Process states:
i) Process P1 is holding an instance of resource type R2 & is waiting for an instance of resource type R1.
ii) Process P2 is holding an instance of R1 & an instance of R2, & is waiting for an instance of resource type R3.
iii) Process P3 is holding an instance of R3.
(I) From the definition it follows that if the graph contains a cycle then a deadlock may exist, but if no cycle exists then no process in the system is deadlocked.
(II) If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred: if the cycle involves only a set of resource types each of which has a single instance, then a deadlock has occurred, and each process involved in the cycle is deadlocked. But if a resource type has several instances, then a cycle does not necessarily imply a deadlock; in this case a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock. We can use a protocol to ensure that a deadlock never occurs: the system can use either deadlock prevention or deadlock avoidance. Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions can't hold; these methods prevent deadlocks by constraining how requests for resources can be made.
(III) If a system employs neither deadlock prevention nor deadlock avoidance, then a deadlock situation may occur. In this environment the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred (a sketch of such a detection pass for single-instance resources follows) & an algorithm to recover from the deadlock.
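As mentioned in (III), detection for single-instance resources reduces to finding a cycle in the wait-for graph; here is a hedged depth-first sketch (for multiple instances a cycle is only a necessary condition, so a real detector must track instance counts instead):

#define NPROC 16

int waits_for[NPROC][NPROC];   /* waits_for[i][j]: Pi waits on a resource Pj holds */
int colour[NPROC];             /* 0 = unvisited, 1 = on current path, 2 = done     */

int has_cycle_from(int i)
{
    colour[i] = 1;
    for (int j = 0; j < NPROC; j++) {
        if (!waits_for[i][j]) continue;
        if (colour[j] == 1) return 1;                 /* back edge: cycle found */
        if (colour[j] == 0 && has_cycle_from(j)) return 1;
    }
    colour[i] = 2;
    return 0;
}

int deadlocked(void)
{
    for (int i = 0; i < NPROC; i++)
        if (colour[i] == 0 && has_cycle_from(i)) return 1;
    return 0;
}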
●●●●●●Explain the concept of Virtual Memory. List any two methods of implementation & explain any one with the help of a diagram
Virtual memory is a technique that allows the execution of processes that may not be completely in memory. One major advantage of this scheme is that programs can be larger than physical memory. Further, virtual memory abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory. Virtual memory also allows processes to easily share files & address spaces, & it provides an efficient mechanism for process creation. Virtual memory is not easy to implement, however, & may substantially decrease performance if it is used carelessly. Two aspects of its implementation are as follows:
(1) Principle of operation: virtual memory can be implemented as an extension of paged or segmented memory management, or as a combination of both. Accordingly, address translation is performed by means of a page map table, a segment descriptor table, or both. The important characteristic is that in virtual-memory systems some portions of the address space of the running process can be absent from main memory. To emphasize the distinction, the term real memory is often used to denote physical memory; the operating system dynamically allocates real memory to portions of the virtual address space. The address-translation mechanism must be able to associate virtual names with physical locations. The type of missing item depends on the basic underlying memory-management scheme & may be a segment or a page. The page map table (PMT) contains an entry for each virtual page of the related process.
(2) Management of virtual memory: assuming that paging is used as the underlying memory-management scheme, the implementation of virtual memory requires maintenance of one page map table per active process. A new component of the memory manager's data structures is the file map table (FMT), which contains the secondary-storage addresses of all pages; the memory manager uses it to bring missing pages into main memory. The PMT base may be kept in the control block of the related process, & a pair of page-map-table base & length registers may be provided in h/w to expedite the address-translation process & to reduce the size of the PMT for smaller processes.
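As a rough illustration of the translation step just described, the C sketch below looks up a virtual address in a page map table with a present bit & signals a page fault for a missing page. The sizes, field names & fault convention are assumptions made for the example, not features of any particular system.

/* Paged address translation with a present bit, in the spirit of the
 * PMT scheme above. All sizes and names are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 64u

typedef struct {
    uint32_t frame;   /* physical frame number, valid only if present */
    int present;      /* 0 => page fault: fetch via the file map table */
} pmt_entry;

static pmt_entry pmt[NUM_PAGES];

/* Returns the physical address, or -1 to signal a page fault. */
long translate(uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    if (page >= NUM_PAGES || !pmt[page].present)
        return -1;                       /* missing page: OS must load it */
    return (long)pmt[page].frame * PAGE_SIZE + offset;
}

int main(void) {
    pmt[2].frame = 7; pmt[2].present = 1;
    printf("%ld\n", translate(2 * PAGE_SIZE + 100));  /* 7*4096+100 = 28772 */
    printf("%ld\n", translate(5 * PAGE_SIZE));        /* -1: page fault */
    return 0;
}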
●●●●●●Explain the concept of segmentation with the help of a diagram. Make a relative comparison between paging & segmentation. Explain the concept of page fault with the help of an example
Segmentation is a memory-management scheme that supports the user's view of memory. A logical address space is a collection of segments; each segment has a name, & an address specifies an offset within a segment. The user therefore specifies each address by two quantities: (a) a segment name & (b) an offset. Although most of our specific examples are based on paging, it is also possible to implement virtual memory in the form of demand segmentation. Such implementations usually inherit the benefits of sharing & protection provided by segmentation. Moreover, their placement procedures can make explicit use of awareness of the types of information contained in particular segments; for example, a working set of segments should include at least one each of the code, data & stack segments, & segment references can alert the operating system to changes of locality. However, the variability of segment sizes complicates the management of both main & secondary memories: placement strategies, i.e. methods of finding a suitable area of free memory to load an incoming segment, are quite complex in segmented systems. Paging, by contrast, is very convenient for the management of main & secondary memories, but it is inferior with regard to protection & sharing, & its transparency necessitates the use of probabilistic replacement algorithms. Both segmented & paged implementations of virtual memory have their respective advantages & disadvantages, & neither is superior to the other over all characteristics; some computer systems combine the two approaches in order to enjoy the benefits of both.
The working-set model is based on the assumption of locality. The model is successful, & knowledge of the working set can be useful for pre-paging, but it seems a clumsy way to control thrashing. A strategy that uses the page-fault frequency takes a more direct approach. The specific problem is how to prevent thrashing, & a thrashing process has a high page-fault rate; thus we want to control the page-fault rate. When it is too high, we know that the process needs more frames; similarly, if the page-fault rate is too low, the process may have too many frames. We can establish upper & lower bounds on the desired page-fault rate. If the page-fault rate exceeds the upper limit, we allocate that process another frame; if the page-fault rate falls below the lower limit, we remove a frame from that process. Thus we can directly measure & control the page-fault rate to prevent thrashing. If the page-fault rate increases & no more free frames are available, we must select some process & suspend it; the freed frames are then distributed to processes with high page-fault rates.
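The page-fault-frequency policy can be sketched in a few lines of C. The thresholds, structure & function below are illustrative assumptions, not values or interfaces from any real system.

/* Page-fault-frequency (PFF) control: measure the fault rate per process
 * over a window and grow or shrink its frame allocation against bounds. */
#include <stdio.h>

#define UPPER_BOUND 0.10   /* faults per reference: too high -> add a frame */
#define LOWER_BOUND 0.01   /* too low -> reclaim a frame */

typedef struct { int frames; long faults, references; } proc;

void pff_adjust(proc *p) {
    if (p->references == 0) return;
    double rate = (double)p->faults / (double)p->references;
    if (rate > UPPER_BOUND)
        p->frames++;                   /* thrashing risk: allocate a frame */
    else if (rate < LOWER_BOUND && p->frames > 1)
        p->frames--;                   /* over-allocated: free a frame */
    p->faults = p->references = 0;     /* start a new measurement window */
}

int main(void) {
    proc p = { .frames = 4, .faults = 30, .references = 200 };
    pff_adjust(&p);
    printf("frames now: %d\n", p.frames);   /* rate 0.15 > 0.10, so 5 */
    return 0;
}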
●●●●●●What is meant by context switch? Explain the o/h incurred due to context switching on processes & threads
Changing context from an executing program to an interrupt handler requires a combination of h/w & s/w. Since the interrupted program knows neither when an interrupt will assume control of the processor nor which part of the machine context will be modified by the interrupt routine, the interrupt service routine itself is charged with saving & restoring the context of the preempted activity. In a context switch, the state of the first process must be saved somehow so that, when the scheduler gets back to the execution of the first process, it can restore this state & continue normally. The state of the process includes all the registers that the process may be using, especially the program counter, plus any other data that may be necessary; all of this state is kept in a data structure called the process control block (PCB). In order to switch processes, the PCB of the first process must be updated & saved.
Threads are normally cheaper than processes, & they can be scheduled for execution with less o/h. They are cheaper because they do not each hold a full set of resources: whereas the PCB for a heavyweight process is large & costly to context switch, the control blocks for threads are much smaller, since each thread has only a stack & some registers to manage. A thread has no open-file lists, no resource lists & no accounting structures to update; all of these resources are shared by all the threads within the process.
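The cost difference can be seen from the state each kind of switch must save. The C sketch below is purely illustrative: real kernels use far richer structures, & every field name here is hypothetical.

/* Contrast of what a thread switch versus a process switch must manage. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t pc, sp;          /* program counter & stack pointer */
    uint64_t gpr[16];         /* general-purpose registers */
} cpu_context;                /* saved for BOTH processes and threads */

typedef struct {
    cpu_context ctx;          /* a thread switch saves little more than this */
    int         tid;
} thread_control_block;

typedef struct {
    cpu_context ctx;
    int         pid;
    uint64_t    page_table_base;  /* address space, reloaded on a switch */
    int         open_files[64];   /* resources shared by all threads */
    long        cpu_time_used;    /* accounting information */
} process_control_block;

int main(void) {
    printf("thread block: %zu bytes, process block: %zu bytes\n",
           sizeof(thread_control_block), sizeof(process_control_block));
    return 0;
}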
●●●●●●What are the limitations of Banker's Algorithm used for deadlock avoidance?
There are some problems with the Banker's algorithm, as follows:
a> It is time consuming to execute on every resource operation.
b> If the claim information is not accurate, system resources may be underutilized.
c> Another difficulty can occur when the system is heavily loaded: so many resources are granted away that very few safe sequences remain, & as a consequence jobs will be executed sequentially. For this reason the Banker's algorithm is referred to as the "most liberal" granting process.
d> A process's claim must be less than the total number of units of the resource in the system; if not, the process is not accepted by the manager.
e> Since the state without a new process is safe, so is the state with the new process: just use the order we had originally & put the new process at the end.
f> A resource becoming unavailable can result in an unsafe state.
●●●●Advantage & disadvantage of Multiuser operating system
The advantage of having a multiuser operating system is that normally the h/w is very expensive, & such a system lets a number of users share this expensive resource; this means the cost is divided amongst the users. Since the resources are shared, they are more likely to be in use than sitting idle being unproductive. The disadvantage of multiuser computer systems is that as more users access them, the performance becomes slower & slower. Another limitation is the cost of h/w, as a multiuser operating system requires a lot of disk space & memory. In addition, the actual s/w for multiuser operating systems tends to cost more than single-user operating systems.
●●●●●●What is Remote Procedure Call or RPC? How does RPC work? Give its limitations also
Distributed systems usually use remote procedure call (RPC) as a fundamental building block for implementing remote operations; RPC is a powerful technique for constructing distributed client-server based applications. It is based on extending the notion of a conventional, or local, procedure call: the called procedure need not exist in the same address space as the calling procedure. The two processes may be on the same system, or they may be on different systems with a n/w connecting them. By using RPC, programmers of distributed applications avoid the details of the interface with the n/w. An RPC is analogous to a function call: like a function call, when an RPC is made the calling arguments are passed to the remote procedure & the caller waits for a response to be returned from the remote procedure. The flow of activity during an RPC call b/w two networked systems is as follows. The client makes a procedure call that sends a request to the server & waits; the calling thread is blocked from processing until either a reply is received or it times out. When the request arrives, the server calls a dispatch routine that performs the requested service & sends the reply to the client; after the RPC call is completed, the client program continues. RPC specifically supports network applications. RPC implementations are nominally incompatible with other RPC implementations, although some are compatible. Using a single implementation of RPC in a system will most likely result in a dependence on the RPC vendor for maintenance support & future enhancements; this could have a highly negative impact on a system's flexibility, maintainability & portability. Because there is no single standard for implementing RPC, different features may be offered by individual RPC implementations, & these features may affect the design & cost of an RPC-based application.
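The stub pattern behind RPC can be sketched as follows. In this illustrative C sketch a direct function call stands in for the n/w transport, & all the names (request, dispatch, rpc_add) are invented for the example; a real RPC system would marshal the request into a message & send it over a socket.

/* Minimal RPC pattern: client stub marshals arguments, "sends" a request,
 * blocks for the reply; the server's dispatch routine unpacks and calls. */
#include <stdio.h>

typedef struct { int proc_id; int arg1, arg2; } request;
typedef struct { int result; } reply;

/* ---- server side ---- */
static int add(int a, int b) { return a + b; }

reply dispatch(request req) {            /* the server's dispatch routine */
    reply rep = {0};
    if (req.proc_id == 1)
        rep.result = add(req.arg1, req.arg2);
    return rep;
}

/* ---- client side ---- */
int rpc_add(int a, int b) {              /* client stub: looks like a local call */
    request req = { .proc_id = 1, .arg1 = a, .arg2 = b };
    reply rep = dispatch(req);           /* in reality: send, block, receive */
    return rep.result;
}

int main(void) {
    printf("%d\n", rpc_add(2, 3));       /* the caller never sees the "network" */
    return 0;
}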
●●●●●●Linked Allocation & Indexed Allocation
Linked allocation: the problems of contiguous allocation can be traced directly to the requirement that space be allocated contiguously for files of different sizes. These requirements can be avoided by using linked allocation. In linked allocation, each file is a linked list of disk blocks; the directory contains a pointer to the first (and optionally the last) block of the file. For example, a file of 5 blocks which starts at block 4 might continue at block 7, then block 16, block 10, & finally block 27. Each block contains a pointer to the next block, & the last block contains a NIL pointer; the value -1 may be used for NIL to differentiate it from block 0. With linked allocation, each directory entry has a pointer to the first disk block of the file. This pointer is initialized to nil (the end-of-list pointer value) to signify an empty file. A write to a file removes the first free block & writes to that block; this new block is then linked to the end of the file. To read a file, the pointers are simply followed from block to block. There is no external fragmentation with linked allocation: any free block can be used to satisfy a request. Notice also that there is no need to declare the size of a file when that file is created; a file can continue to grow as long as there are free blocks. Linked allocation does have disadvantages, however. The major problem is that it is inefficient for direct access; it is effective only for sequential-access files. To find the ith block of a file, we must start at the beginning of that file & follow the pointers until the ith block is reached, & each access to a pointer requires a disk read. Another severe problem is reliability: a bug in the OS or a disk h/w failure might result in pointers being lost or damaged, the effect of which could be picking up a wrong pointer & linking it into the free list or into another file.
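The sequential-access penalty is easy to see in code. Below is a minimal C sketch, with the on-disk layout simulated by an in-memory array & all names invented for the example.

/* Reaching block i under linked allocation requires following i pointers,
 * each of which would be a disk read in a real system. */
#include <stdio.h>

#define NIL (-1)
#define DISK_BLOCKS 32

static int next[DISK_BLOCKS];   /* pointer stored in each block */

/* Follow the chain from the directory's first-block pointer. */
int find_ith_block(int first, int i) {
    int b = first;
    while (i-- > 0 && b != NIL)
        b = next[b];            /* one (simulated) disk read per step */
    return b;
}

int main(void) {
    /* File of 5 blocks: 4 -> 7 -> 16 -> 10 -> 27, as in the example above. */
    next[4] = 7; next[7] = 16; next[16] = 10; next[10] = 27; next[27] = NIL;
    printf("block at index 3 (zero-based) is disk block %d\n",
           find_ith_block(4, 3));        /* prints 10, after 3 pointer reads */
    return 0;
}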
Indexed allocation: linked allocation does not support random access of a file, since the pointers are hidden in the blocks & must be followed sequentially. Indexed allocation solves this problem by bringing the pointers together into an index block. Indexed allocation uses an index to directly track the file's block locations: a user declares the maximum file size, & the file system allocates a file header with an array of pointers big enough to point to all the file's blocks. Although indexed allocation provides fast disk-location lookups for random accesses, file blocks may be scattered all over the disk, so a file system needs to provide additional mechanisms to ensure that disk blocks are grouped together for good performance (e.g. a disk defragmenter). Also, as a file increases in size, the file system needs to reallocate the index array & copy the old entries; ideally, the index can grow incrementally. (Diagram: a file header whose pointer array references data blocks 0, 1 & 2.)
Multilevel indexed allocation: Linux uses multilevel indexed allocation, in which certain index entries point to index blocks rather than data blocks. The file header, or i_node data structure, holds 15 index pointers. The first 12 pointers point to data blocks. The 13th pointer points to a single indirect block, which contains 1,024 additional pointers to data blocks. The 14th pointer points to a double indirect block, which contains 1,024 pointers to single indirect blocks. The 15th pointer points to a triple indirect block, which contains 1,024 pointers to double indirect blocks. This skewed multilevel index tree is optimized for both small & large files: small files can be accessed through the first 12 pointers, while large files can grow with incremental allocations of index blocks. However, accessing a data block under the triple indirect block involves multiple disk accesses: one for the triple indirect block, another for the double indirect block, & yet another for the single indirect block, before the actual data block is reached. Also, the number of pointers provided by this data structure caps the largest file size. Finally, the boundaries between the last four pointer types are somewhat arbitrary: given a block number, it is not immediately obvious which of the 15 pointers to follow.
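Deciding which of the 15 pointers to follow for a given file block number is a small calculation, sketched below in C. The 12/1,024 fan-outs follow the description above; the function itself is illustrative & does not touch a real ext2 i_node.

/* Classify a file block number by indirection level and report how many
 * extra index-block reads are needed before the data block. */
#include <stdio.h>

#define DIRECT  12L
#define PER_BLK 1024L

int indirection_levels(long blk) {
    if (blk < DIRECT) return 0;                          /* direct pointer */
    blk -= DIRECT;
    if (blk < PER_BLK) return 1;                         /* single indirect */
    blk -= PER_BLK;
    if (blk < PER_BLK * PER_BLK) return 2;               /* double indirect */
    blk -= PER_BLK * PER_BLK;
    if (blk < PER_BLK * PER_BLK * PER_BLK) return 3;     /* triple indirect */
    return -1;                                           /* beyond max file size */
}

int main(void) {
    printf("%d\n", indirection_levels(5));        /* 0: no extra reads    */
    printf("%d\n", indirection_levels(500));      /* 1: one extra read    */
    printf("%d\n", indirection_levels(2000000));  /* 3: three extra reads */
    return 0;
}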
●●●●●●●●External Fragmentation & Internal Fragmentation
External fragmentation:- External fragmentation is the phenomenon in which free storage becomes divided into many small pieces over time. It is a weakness of certain storage-allocation algorithms, occurring when an application allocates & deallocates ("frees") regions of storage of varying sizes & the allocation algorithm responds by leaving the allocated & deallocated regions interspersed. The result is that although free storage is available, it is effectively unusable because it is divided into pieces that are too small to satisfy the demands of the application. The term "external" refers to the fact that the unusable storage is outside the allocated regions. The wastage of an entire partition of main memory is said to be external fragmentation. External fragmentation thus refers to the division of free storage into small, non-contiguous pieces over a period of time, due to an inefficient memory-allocation algorithm, resulting in a lack of sufficient contiguous storage for another program. Both the first-fit & best-fit strategies suffer from it, & depending on the total amount of memory & the sizes requested, external fragmentation may be a minor or a major problem.
Internal fragmentation:- Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used. With modern operating systems that use a paging scheme, this is the more common type of fragmentation: it occurs when memory is allocated in frames & the frame size is larger than the amount of memory requested. Internal fragmentation occurs when storage is allocated without the intention to use all of it; this space is wasted. While this seems foolish, it is often accepted in return for increased efficiency or simplicity. The term "internal" refers to the fact that the unusable storage is inside the allocated region but is not being used. The memory wasted within a partition of main memory is said to be internal fragmentation. For example, in many file systems, each file always starts at the beginning of a cluster, because this simplifies organization & makes it easier to grow files. Any space left over between the last byte of the file & the first byte of the next cluster is a form of internal fragmentation called file slack or slack space; slack space is a very important source of evidence in computer forensic investigation. Similarly, a program which allocates a single byte of data is often allocated many additional bytes for metadata & alignment; this extra space is also internal fragmentation.
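For a paging system the wasted tail is simple arithmetic, as this small C sketch shows. The 4 KB frame size is an assumption for the example.

/* Internal fragmentation under paging: memory is handed out in whole
 * frames, so the unused tail of the last frame is wasted. */
#include <stdio.h>

#define FRAME_SIZE 4096L

long internal_fragmentation(long request) {
    long rem = request % FRAME_SIZE;
    return rem == 0 ? 0 : FRAME_SIZE - rem;   /* unused tail of last frame */
}

int main(void) {
    /* A 10,000-byte request occupies 3 frames (12,288 bytes): 2,288 wasted. */
    printf("%ld bytes wasted\n", internal_fragmentation(10000));
    return 0;
}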
●●●●●●●What is a semaphore? Give the solution to the producer-consumer problem using semaphores & explain the solution
A semaphore is a hardware or software tag variable whose value indicates the status of a common resource. Its purpose is to lock the resource being used: a process which needs the resource checks the semaphore to determine the status of the resource & then decides how to proceed. In multitasking operating systems, activities are synchronized by using semaphore techniques.
In computer science, the producer-consumer problem (also known as the bounded-buffer problem) is a classical example of a multi-process synchronization problem. The problem describes two processes, the producer & the consumer, who share a common, fixed-size buffer. An inadequate solution could result in a deadlock where both processes are waiting to be awakened. The problem can also be generalized to have multiple producers & consumers.
Semaphores solve the problem of lost wakeup calls. In the solution below we use two semaphores, fillCount & emptyCount: fillCount is the number of items available to be read in the buffer, & emptyCount is the number of available spaces in the buffer where items could be written. fillCount is incremented & emptyCount decremented when a new item has been put into the buffer. If the producer tries to decrement emptyCount while its value is zero, the producer is put to sleep; the next time an item is consumed, emptyCount is incremented & the producer wakes up. The consumer works analogously.

semaphore fillCount = 0;                // items produced
semaphore emptyCount = BUFFER_SIZE;     // remaining space

procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCount);
        putItemIntoBuffer(item);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {
        down(fillCount);
        item = removeItemFromBuffer();
        up(emptyCount);
        consumeItem(item);
    }
}

The solution above works fine when there is only one producer & one consumer. Unfortunately, with multiple producers or consumers it contains a serious race condition that could result in two or more processes reading or writing into the same slot at the same time. To understand how this is possible, imagine how the procedure putItemIntoBuffer() can be implemented: it could contain two actions, one determining the next available slot & the other writing into it. If the procedure can be executed concurrently by multiple producers, then the following scenario is possible:
1. Two producers decrement emptyCount
2. One of the producers determines the next empty slot in the buffer
3. The second producer determines the next empty slot & gets the same result as the first producer
4. Both producers write into the same slot
To overcome this problem, we need a way to make sure that only one producer is executing putItemIntoBuffer() at a time; in other words, we need a way to execute a critical section with mutual exclusion. To accomplish this we use a binary semaphore called mutex. Since the value of a binary semaphore can only be one or zero, only one process can be executing between down(mutex) & up(mutex). The solution for multiple producers & consumers is shown below.

semaphore mutex = 1;
semaphore fillCount = 0;
semaphore emptyCount = BUFFER_SIZE;

procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCount);
        down(mutex);
        putItemIntoBuffer(item);
        up(mutex);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {
        down(fillCount);
        down(mutex);
        item = removeItemFromBuffer();
        up(mutex);
        up(emptyCount);
        consumeItem(item);
    }
}

The order in which the different semaphores are incremented or decremented is essential: changing the order might result in a deadlock. For example, if a producer took mutex before emptyCount, it could go to sleep on a full buffer while holding the mutex, blocking every consumer.

●●●●●●●Pipes & Filters in UNIX operating system
A pipe is a unidirectional channel that may be written at one end & read at the other. A pipe is used for communication between 2 processes: the producer process writes data into one end of the pipe & the consumer process retrieves them from the other end. The system provides limited buffering for each open pipe. Control of data flow is performed by the system, which halts a producer attempting to write into a full pipe & halts a consumer attempting to read from an empty pipe.
In UNIX & Unix-like operating systems, a filter is a program that gets most of its data from its standard input (the main input stream) & writes its main results to its standard output (the main output stream). UNIX filters are often used as elements of pipelines: the pipe operator ("|") on a command line signifies that the main output of the command on the left is passed as the main input to the command on the right.
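Below is a minimal working illustration of a pipe between a parent (the producer) & a child (the consumer), using only standard POSIX calls; the message text is of course arbitrary.

/* Parent writes into one end of a pipe, child reads from the other,
 * mirroring the producer/consumer flow described above. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                    /* child: the consumer */
        close(fd[1]);                     /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* blocks if empty */
        buf[n > 0 ? n : 0] = '\0';
        printf("child read: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                         /* parent: the producer */
    const char *msg = "hello through the pipe";
    write(fd[1], msg, strlen(msg));       /* would block if the pipe filled */
    close(fd[1]);
    wait(NULL);
    return 0;
}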
●●●●●●●Deadlock avoidance algorithm or Banker's algorithm
This is an algorithm that deals with operating resources such as memory or processor time as though they were money, & with the processes competing for them as though they were bank customers; the operating system takes on the role of the banker. The banker has a set of units to allocate to its customers. Each customer states in advance its total requirement for each resource. The banker accepts a customer if the customer's stated maximum doesn't exceed the capital the banker has. If a loan is granted, the customer agrees to return the units within a finite time, & the current loan of a customer can't exceed his maximum need. During a transaction a customer only borrows or returns one unit at a time; this prevents circular waiting. The algorithm allows piecemeal allocation, but before any partial allocation the remaining free resource is checked to make sure enough is free. The problem is execution time: if we have m resource types & n processes, the worst-case execution time is approximately m·n(n+1)/2. For m & n both equal to 10, each resource request takes about half a second, which is bad.
The current position is said to be safe if the banker can allow all his present customers to complete their transactions within a finite time; otherwise it is said to be unsafe (but that doesn't necessarily mean an inevitable deadlock, as there is a certain time dependency). A customer is characterized by his current loan & his claim, where the claim is the customer's need minus the current loan to that customer. Similarly for the banker, the total "cash" that he has is the starting capital minus the sum of all the loans. The banker prevents deadlock by satisfying one request at a time, but only when granting it leaves the position safe. Consider the more general problem with several "currencies", as shown by this pseudocode (note that a transaction can complete only if the customer's remaining claim can be met from the available cash; without this test the safety check would wrongly declare every state safe):

TYPE B = 1..number of customers;
     D = 1..number of currencies;
     C = array [D] of integer;
     S = record
           transactions : array [B] of record
                            claim, loan : C;
                            completed   : boolean
                          end;
           capital, cash : C
         end;

FUNCTION completion_possible (claim, cash : C) : boolean;
VAR currency : D;
BEGIN
  completion_possible := true;
  FOR every currency DO
    IF claim[currency] > cash[currency] THEN completion_possible := false
END;

PROCEDURE return_loan (VAR loan, cash : C);
VAR currency : D;
BEGIN
  FOR every currency DO
    cash[currency] := cash[currency] + loan[currency]
END;

PROCEDURE complete_transactions (VAR state : S);
VAR customer : B; progress : boolean;
BEGIN
  WITH state DO
    REPEAT
      progress := false;
      FOR every customer DO
        WITH transactions[customer] DO
          IF NOT completed AND completion_possible(claim, cash) THEN
            BEGIN
              return_loan(loan, cash);
              completed := true;
              progress  := true
            END
    UNTIL NOT progress
END;

FUNCTION all_transactions_completed (state : S) : boolean;
BEGIN
  WITH state DO
    all_transactions_completed := (capital = cash)
END;

FUNCTION safe (current_state : S) : boolean;
VAR state : S;
BEGIN
  state := current_state;
  complete_transactions(state);
  safe := all_transactions_completed(state)
END;

If all transactions can be completed, the current position is safe & it is all right to honour the request for a new loan. In practice, a process may crash for one of several reasons, liberating its held resources & making no further claim. If all the OS resources were controlled by such an algorithm, we would need just the variable current state & the operation safe; safe can be micro-coded as a single machine instruction, or included in the resident OS part as code.
●●●●●●Acyclic-Graph Directory
The acyclic graph is a natural generalization of the tree-structured directory scheme, in which a common subdirectory can be shared: a shared directory or file will exist in the file system in two (or more) places at once. A tree structure prohibits the sharing of files or directories; an acyclic graph (a graph with no cycles) allows directories to share subdirectories & files, so the same file or subdirectory may appear in two different directories.
It is important to note that a shared file (or directory) is not the same as two copies of the file. With two copies, each programmer can view the copy rather than the original, but if one programmer changes the file, the changes will not appear in the other's copy. With a shared file, only one actual file exists, so any changes made by one person are immediately visible to the other. A common implementation, exemplified by many UNIX systems, is to create a new directory entry called a link. When a reference to a file is made, we search the directory; if the directory entry is marked as a link, then the name of the real file is included in the link information, & we resolve the link by using that path name to locate the real file. Links are easily identified by their format in the directory entry & are effectively named indirect pointers. In a system where sharing is implemented by symbolic links, the deletion of a shared file is somewhat easier to handle: deleting the original file leaves any links dangling, & those can be detected & removed later.
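Link resolution can be sketched as a lookup that, on finding a link entry, restarts the search with the stored path name. The C sketch below is invented for illustration (a real UNIX also resolves full paths component by component); the hop limit guards against link chains that never terminate.

/* Resolving a symbolic-link directory entry to the real file's name. */
#include <stdio.h>
#include <string.h>

typedef struct { const char *name; int is_link; const char *target; } dirent_t;

static dirent_t dir[] = {
    { "report.txt", 0, NULL },
    { "latest",     1, "report.txt" },   /* link entry holds the real path */
};

const char *resolve(const char *name, int hops) {
    if (hops > 8) return NULL;                       /* too many link hops */
    for (size_t i = 0; i < sizeof dir / sizeof *dir; i++)
        if (strcmp(dir[i].name, name) == 0)
            return dir[i].is_link ? resolve(dir[i].target, hops + 1)
                                  : dir[i].name;
    return NULL;                                     /* dangling link */
}

int main(void) {
    printf("latest -> %s\n", resolve("latest", 0));  /* prints report.txt */
    return 0;
}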