    by: Rinki

    1 : Distributed Operating Systems
2 : Overview
- Distributed system
- Why distributed systems?
- Network Operating System
- Distributed Operating System
- DOS: Design Issues
3 : Distributed System
- A distributed system is a collection of autonomous computers that appears to the users of the system as a single computer.
4 : Why Distributed Systems?
- Resource sharing
- Scalability
- Reliability
- Need for higher processing speed
- Spatial distribution is inherent in many applications
5 : Network Operating System
- Provides an environment in which users are aware of the multiplicity of machines.
- Users access remote resources by logging into the remote machine, or by transferring data from the remote machine to their own.
- Users must know where the required files and directories are, and mount them.
- Each machine can act as a server and a client at the same time.
- E.g. NFS from Sun Microsystems
6 : Distributed Operating System
- Runs on a cluster of machines with no shared memory
- Users get the feel of a single processor: the "virtual uniprocessor"
- Transparency is the driving force
- Requires:
  - A single global IPC mechanism
  - A global protection mechanism
  - Identical process management and system calls at all nodes
  - A common file system at all nodes
7 : DOS: Transparency
- Location transparency: users are not aware of the positioning of resources in the system
- Migration transparency: resources can move without changing names
- Replication transparency: users are not aware of the presence of multiple copies of a resource
- Failure transparency: partial failures in the system are masked
- Performance transparency: resources are reconfigured to improve the performance of the system
8 : Transparency Contd...
- Concurrency transparency: resource sharing is automatic
- Parallelism transparency: activities can happen in parallel without the user's knowledge; the user sees only the speedup
- Scaling transparency: the system can expand in scale without disrupting the activities of the users
9 : Reliability
- Faults: fail-stop, Byzantine
- Fault avoidance
- Fault tolerance
  - Redundancy techniques (how many replicas for k failures?)
    - k + 1 replicas for fail-stop failures
    - 2k + 1 replicas for Byzantine failures
  - Distributed control
- Fault detection & recovery
  - Atomic transactions
  - Stateless servers
  - Acknowledgements and timeout-based retransmission of messages
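The replica counts on this slide can be sketched in a few lines. This is an illustrative sketch, not part of the deck; the function names are my own.

```python
from collections import Counter

def replicas_fail_stop(k: int) -> int:
    # Fail-stop replicas simply halt; one surviving copy suffices,
    # so tolerating k failures requires k + 1 replicas.
    return k + 1

def replicas_byzantine(k: int) -> int:
    # Byzantine replicas may return wrong answers, so a majority of
    # correct replies is needed: 2k + 1 replicas for k failures.
    return 2 * k + 1

def majority_vote(replies):
    # With 2k + 1 replicas and at most k faulty ones, the most common
    # reply is guaranteed to come from a correct replica.
    value, _ = Counter(replies).most_common(1)[0]
    return value
```

With k = 2, for example, fail-stop needs 3 replicas while Byzantine needs 5, because 5 replicas still leave a 3-vote correct majority even when 2 lie.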
10 : Flexibility
- Ease of modification
- Ease of enhancement
- Monolithic kernel (all services)
- Microkernel (only IPC, low-level device, process, and memory management)
- ALP to HLP
- Negligible overhead involved in exchanging messages
11 : Performance
- Batch if possible
- Cache whenever possible
- Minimize copying of data
- Minimize network traffic
- Take advantage of fine-grain parallelism for multiprocessing
12 : Scalability
- Avoid centralized entities
  - No fault tolerance
  - System bottleneck
  - Network traffic concentrates at the centralized entity
- Avoid centralized algorithms
- Perform most operations on client workstations
13 : Message Passing
- Original sharing, or shared-data, approach
- Copy sharing, or message-passing, approach (message passing, RPC, etc.)
- Support for error handling in case of communication failure
- Message structure
14 : Synchronization
- Blocking (the send primitive blocks until an acknowledgement is received)
  - Timeout
- Nonblocking (send copies the message to a buffer and returns)
  - Polling at the receive primitive
  - Interrupt
- Synchronous (send and receive primitives are both blocking)
- Asynchronous
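The blocking/nonblocking distinction above can be illustrated over an in-process channel. A minimal sketch, using Python's `queue.Queue` as a stand-in for the transport; the primitive names are illustrative, not from the deck.

```python
import queue

channel = queue.Queue(maxsize=1)  # single-message buffer

def blocking_send(msg, timeout=None):
    # Blocking send: the caller waits until the message is accepted;
    # a timeout turns indefinite blocking into an error (queue.Full).
    channel.put(msg, block=True, timeout=timeout)

def nonblocking_send(msg):
    # Nonblocking send: copy into the buffer and return immediately,
    # reporting failure instead of waiting when the buffer is full.
    try:
        channel.put_nowait(msg)
        return True
    except queue.Full:
        return False

def poll_receive():
    # Polling at the receive primitive: check for a message without blocking.
    try:
        return channel.get_nowait()
    except queue.Empty:
        return None
```

With a single-message buffer, a second nonblocking send fails until the receiver polls the first message out.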
15 : Buffering
- Null buffer (no buffering)
- Single-message buffer
- Unbounded-capacity buffer
- Finite-bound (or multiple-message) buffer
  - Unsuccessful communication (error message to the sender)
  - Flow-controlled communication (block the sender until the receiver accepts)
- Multidatagram messages
  - Maximum Transfer Unit (MTU)
- Encoding & decoding
  - Tagged representation
  - Untagged representation
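Multidatagram messaging, mentioned above, splits a message larger than the MTU into numbered fragments that the receiver reassembles. A minimal sketch; the helper names and the tiny MTU are illustrative.

```python
MTU = 4  # unrealistically small, for demonstration

def fragment(message: bytes, mtu: int = MTU):
    # Each fragment carries (sequence number, payload) so the receiver
    # can reorder fragments and detect loss.
    return [(i, message[off:off + mtu])
            for i, off in enumerate(range(0, len(message), mtu))]

def reassemble(fragments):
    # Reassembly sorts by sequence number; a real protocol would also
    # time out on missing fragments and request retransmission.
    return b"".join(payload for _, payload in sorted(fragments))
```

Even if fragments arrive out of order, sorting by sequence number restores the original message.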
16 : Process Addressing
- Explicit addressing: send(process_id, message)
- Implicit (functional) addressing: send_any(service_id, message)
- Machine-based addressing, e.g. machine_id@local_id (Berkeley UNIX)
  - Limited with process migration
- Link-based process addressing, e.g. machine_id@local_id@machine_id
  - Overhead of locating a process
  - Intermediate node failure
- System-wide unique identifier (location transparency)
  - High-level machine-independent part and low-level machine-dependent part
  - Centralized naming server for the high-level (functional) id
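The difference between explicit and implicit (functional) addressing can be sketched with an in-memory registry. The registries and the example process id are illustrative assumptions, not part of any real system.

```python
processes = {}  # explicit: process_id -> handler callable
services = {}   # implicit/functional: service_id -> list of process_ids

def send(process_id, message):
    # Explicit addressing: the sender names the receiving process.
    return processes[process_id](message)

def send_any(service_id, message):
    # Implicit (functional) addressing: the sender names a service,
    # and the system picks any process that provides it.
    pid = services[service_id][0]
    return send(pid, message)
```

The functional form is what makes process migration easier: clients bind to a service name, so the system can remap it when the serving process moves.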
17 : Failure Handling
- Failure modes
  - Loss of the request message
  - Loss of the response message
  - Unsuccessful execution of the request (system crash)
- Reliable Inter Process Communication (IPC)
  - Four-message reliable IPC (request, ack, reply, ack)
  - Three-message reliable IPC (request, reply, ack)
  - Two-message IPC (request, reply)
- Failure-handling semantics
  - At-least-once (timeout and retransmit)
  - Idempotent operations (no side effects no matter how many times performed)
  - Nonidempotent operations (exactly-once semantics): reply from a cache keyed by a unique request id
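The reply-cache technique on this slide turns at-least-once retries into exactly-once semantics for a nonidempotent operation. A minimal sketch, assuming a toy account balance as the side effect; all names are illustrative.

```python
reply_cache = {}          # request_id -> cached reply
balance = {"acct": 100}   # toy nonidempotent state

def server_handle(request_id, amount):
    # A duplicate request id means the client retried after a lost
    # reply: return the cached reply without repeating the side effect.
    if request_id in reply_cache:
        return reply_cache[request_id]
    balance["acct"] += amount   # nonidempotent side effect
    reply = balance["acct"]
    reply_cache[request_id] = reply
    return reply
```

Without the cache, a retried deposit would be applied twice; with it, the retry observes the same reply and the balance changes only once.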
18 : Group Communication
- One-to-many communication
  - Multicast/broadcast
  - Open group / closed group
  - Flexible reliability: 0-reliable, 1-reliable, m-out-of-n-reliable, all-reliable
  - Atomic multicast
- Many-to-one communication
- Many-to-many communication
  - Absolute ordering (global clock)
  - Consistent ordering (sequencer / ABCAST protocol)
  - Causal ordering
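Consistent ordering via a sequencer, the idea behind the ABCAST protocol named above, can be sketched briefly: one process stamps every multicast with a global sequence number, so all group members deliver messages in the same order. A simplified sketch with illustrative names, ignoring buffering of gaps and sequencer failover.

```python
class Sequencer:
    """Single process that assigns global sequence numbers."""
    def __init__(self):
        self.next_seq = 0

    def stamp(self, message):
        seq = self.next_seq
        self.next_seq += 1
        return (seq, message)

def deliver_in_order(stamped_messages):
    # Each member sorts arrivals by sequence number before delivery,
    # so every member sees the same total order.
    return [m for _, m in sorted(stamped_messages)]
```

Note this gives consistent (total) ordering, not absolute ordering: the sequence reflects arrival order at the sequencer, not real send times.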
19 : Design Issues
- Resource management
  - It is impossible to gather coherent information about resource utilisation or availability, so these must be estimated using heuristic methods.
- Processor allocation
  - Load balancing
  - Hierarchical organisation of processors: if a processor can't handle a request, it asks its parent for help.
  - Issue: the crash of a higher-level processor isolates the processors attached to it.
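The hierarchical processor-allocation scheme above can be sketched as parent-mediated escalation: a node that lacks capacity hands the request to its parent, which probes the rest of its subtree. The class, the capacity model, and the probing order are illustrative assumptions.

```python
class Node:
    """Processor in a hierarchy, with a simple numeric capacity."""
    def __init__(self, capacity, parent=None):
        self.capacity = capacity
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def handle(self, load):
        # Try locally first; otherwise ask the parent for help.
        if load <= self.capacity:
            self.capacity -= load
            return self
        if self.parent is None:
            return None  # root overloaded: request fails
        return self.parent.delegate(load, exclude=self)

    def delegate(self, load, exclude):
        # The parent probes its other children, tries itself, and
        # failing that escalates further up the hierarchy.
        for child in self.children:
            if child is not exclude and load <= child.capacity:
                child.capacity -= load
                return child
        if load <= self.capacity:
            self.capacity -= load
            return self
        if self.parent is None:
            return None
        return self.parent.delegate(load, exclude=self)
```

The slide's caveat shows up directly here: every escalation path runs through the parent, so a crashed interior node cuts its subtree off from the rest of the system.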
20 : Design Issues Contd...
- Process scheduling
  - Communication dependency has to be considered.
- Fault tolerance
  - The design should consider distribution of control and data.
- Services provided
  - Typical services include name, directory, file, time, etc.
21 : Assignments
- Amoeba: case study
- V-System: case study
- Mach: case study
- Chorus: case study
- Centralized & distributed algorithms for clock synchronization
- The RPC model
- Deadlock prevention & deadlock detection algorithms
- Election algorithms for a coordinator
- Load-balancing algorithms
- Desirable features of a good naming system