The Google File System
(GFS)
Introduction

   Design constraints
       Component failures are the norm
           1000s of components
           Bugs, human errors, failures of memory, disk,
            connectors, networking, and power supplies
           Monitoring, error detection, fault tolerance, automatic
            recovery
       Files are huge by traditional standards
           Multi-GB files are common
           Billions of objects
Introduction

   Design constraints
       Most modifications are appends
           Random writes are practically nonexistent
           Many files are written once, and read sequentially
       Two types of reads
           Large streaming reads
           Small random reads (in the forward direction)
       Sustained bandwidth more important than latency
       File system APIs are open to changes
Interface Design

   Not POSIX compliant
   Additional operations
       Snapshot
       Record append
Architectural Design

   A GFS cluster
       A single master
       Multiple chunkservers per master
           Accessed by multiple clients
       Running on commodity Linux machines
   A file
        Divided into fixed-size chunks
            Labeled with 64-bit globally unique IDs
            Stored at chunkservers
            Replicated 3-way across chunkservers
Architectural Design (2)

   [Figure: a GFS client, on behalf of an application, asks the GFS master
    for chunk locations, then requests the chunk data directly from GFS
    chunkservers, each of which stores chunks on its local Linux file system.]
Architectural Design (3)

   Master server
       Maintains all metadata
           Name space, access control, file-to-chunk mappings,
            garbage collection, chunk migration
   GFS clients
       Consult master for metadata
       Access data from chunkservers
       Does not go through VFS
        No data caching at clients or chunkservers, since the
         common case is streaming through huge files
Single-Master Design

   Simple
   The master answers only chunk-location queries
   A client typically asks for multiple chunk
    locations in a single request
   The master also predictively provides the chunk
     locations immediately following those
     requested
Chunk Size

   64 MB
   Fewer chunk location requests to the master
   Reduced overhead to access a chunk
   Fewer metadata entries
       Kept in memory
- Some potential problems with fragmentation
Metadata

   Three major types
       File and chunk namespaces
       File-to-chunk mappings
       Locations of a chunk’s replicas
Metadata

   All kept in memory
       Fast!
       Quick global scans
           Garbage collections
           Reorganizations
       64 bytes per 64 MB of data
       Prefix compression
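
   A quick back-of-the-envelope check of the figure above (the arithmetic is
    ours, not from the slides): at 64 bytes of metadata per 64 MB chunk, 1 PB
    of data corresponds to 1 PB / 64 MB ≈ 16 million chunks, i.e. about
    16,000,000 × 64 B ≈ 1 GB of metadata, which fits comfortably in the
    master's memory.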
Chunk Locations

   No persistent state for chunk locations
        Polls chunkservers at startup
        Uses heartbeat messages to monitor servers
        Simplicity
        On-demand approach vs. coordination
            On-demand wins when changes (failures) are frequent
Operation Logs

   Metadata updates are logged
       e.g., <old value, new value> pairs
       Log replicated on remote machines
   Take global snapshots (checkpoints) to
    truncate logs
       Memory mapped (no serialization/deserialization)
       Checkpoints can be created while updates arrive
   Recovery
       Latest checkpoint + subsequent log files
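
   A minimal sketch of the recovery rule on this slide, in Java with
    hypothetical types (Checkpoint, LogRecord, MasterState are illustrative,
    not GFS source): restore the latest checkpoint, then replay only the log
    records written after it.

import java.util.List;

// Hypothetical types, for illustration only.
interface MasterState {}
interface Checkpoint { long lastAppliedSeqNo(); MasterState state(); }
interface LogRecord  { long seqNo(); void applyTo(MasterState state); }

class Recovery {
  // Master state after a crash = latest checkpoint + subsequent log records.
  static MasterState recover(Checkpoint latest, List<LogRecord> replicatedLog) {
    MasterState state = latest.state();
    for (LogRecord record : replicatedLog) {
      if (record.seqNo() > latest.lastAppliedSeqNo()) {
        record.applyTo(state);               // replay only records newer than the checkpoint
      }
    }
    return state;
  }
}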
Consistency Model

   Relaxed consistency
       Concurrent changes are consistent but undefined
       An append is atomically committed at least once
        - Occasional duplications
   All changes to a chunk are applied in the
    same order to all replicas
   Use version number to detect missed
    updates
System Interactions

   The master grants a chunk lease to a replica
   The replica holding the lease determines the
    order of updates to all replicas
   Lease
       60 second timeouts
       Can be extended indefinitely
        Extension requests are piggybacked on heartbeat
         messages
       After a timeout expires, the master can grant new
        leases
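
   A small sketch of the lease bookkeeping described above (the ChunkLease
    class is hypothetical, not GFS code): the master stamps an expiry time
    when it grants a lease, pushes the expiry forward on each piggybacked
    extension request, and may grant a new lease once the old one has expired.

class ChunkLease {
  static final long LEASE_MS = 60_000;           // 60-second timeout, as on the slide

  private final String primaryReplica;           // chunkserver that orders all mutations
  private long expiresAtMillis;

  ChunkLease(String primaryReplica, long nowMillis) {
    this.primaryReplica = primaryReplica;
    this.expiresAtMillis = nowMillis + LEASE_MS;
  }

  // Extension requests arrive piggybacked on heartbeat messages.
  void extend(long nowMillis) {
    expiresAtMillis = nowMillis + LEASE_MS;
  }

  // Once the lease has expired, the master may grant a new lease for this chunk.
  boolean isExpired(long nowMillis) {
    return nowMillis > expiresAtMillis;
  }

  String primary() {
    return primaryReplica;
  }
}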
Data Flow

   Separation of control and data flows
       Avoid network bottleneck
   Updates are pushed linearly among replicas
   Pipelined transfers
   13 MB/second with 100 Mbps network
Snapshot

   Copy-on-write approach
       Revoke outstanding leases
       New updates are logged while taking the
        snapshot
       Commit the log to disk
        Apply the log to a copy of the metadata
       A chunk is not copied until the next update
Master Operation

   No directories
   No hard links and symbolic links
   Full path name to metadata mapping
       With prefix compression
Locking Operations

   A lock per path
       To access /d1/d2/leaf
       Need to lock /d1, /d1/d2, and /d1/d2/leaf
        Files in the same directory can be created or modified concurrently
            Each thread acquires
                A read lock on the directory
                A write lock on the file
       Totally ordered locking to prevent deadlocks
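
   A sketch of this locking scheme (assumed helper code, not the GFS
    implementation): each full pathname has its own read/write lock; a
    mutation takes read locks on every ancestor path and a write lock on the
    leaf, acquiring the locks in a consistent prefix order so that all
    threads lock paths in the same total order.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class NamespaceLocks {
  private final ConcurrentHashMap<String, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

  private ReentrantReadWriteLock lockFor(String path) {
    return locks.computeIfAbsent(path, p -> new ReentrantReadWriteLock());
  }

  // To mutate /d1/d2/leaf: read locks on /d1 and /d1/d2, write lock on /d1/d2/leaf.
  void lockForWrite(String fullPath) {
    for (String path : ancestorsThenLeaf(fullPath)) {   // same prefix order everywhere prevents deadlock
      if (path.equals(fullPath)) {
        lockFor(path).writeLock().lock();
      } else {
        lockFor(path).readLock().lock();
      }
    }
  }

  // "/d1/d2/leaf" -> [/d1, /d1/d2, /d1/d2/leaf]
  private static List<String> ancestorsThenLeaf(String fullPath) {
    List<String> result = new ArrayList<>();
    StringBuilder prefix = new StringBuilder();
    for (String part : fullPath.split("/")) {
      if (part.isEmpty()) continue;
      prefix.append('/').append(part);
      result.add(prefix.toString());
    }
    return result;
  }
}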
Replica Placement

   Goals:
       Maximize data reliability and availability
       Maximize network bandwidth
   Need to spread chunk replicas across
    machines and racks
   Higher re-replication priority for chunks with
    fewer remaining replicas
   Limited resources spent on replication
Garbage Collection

   Simpler than eager deletion due to
        Unfinished replica creation
       Lost deletion messages
   Deleted files are hidden for three days
   Then they are garbage collected
   Combined with other background operations
    (taking snapshots)
   Safety net against accidents
Fault Tolerance and Diagnosis

   Fast recovery
       Master and chunkserver are designed to restore
        their states and start in seconds regardless of
        termination conditions
   Chunk replication
   Master replication
       Shadow masters provide read-only access when
        the primary master is down
Fault Tolerance and Diagnosis

   Data integrity
       A chunk is divided into 64-KB blocks
       Each with its checksum
       Verified at read and write times
       Also background scans for rarely used data
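
   A sketch of the per-block checksumming described above (CRC32 is our
    choice for illustration; the slides do not name the algorithm): the chunk
    is split into 64 KB blocks, each with its own checksum, and a block is
    verified whenever it is read.

import java.util.zip.CRC32;

class BlockChecksums {
  static final int BLOCK_SIZE = 64 * 1024;       // 64 KB blocks, as on the slide

  // One checksum per 64 KB block of the chunk.
  static long[] compute(byte[] chunk) {
    int blocks = (chunk.length + BLOCK_SIZE - 1) / BLOCK_SIZE;
    long[] sums = new long[blocks];
    for (int i = 0; i < blocks; i++) {
      int off = i * BLOCK_SIZE;
      sums[i] = crc(chunk, off, Math.min(BLOCK_SIZE, chunk.length - off));
    }
    return sums;
  }

  // Verify one block on read; a mismatch means the data must come from another replica.
  static boolean verify(byte[] chunk, int blockIndex, long expected) {
    int off = blockIndex * BLOCK_SIZE;
    int len = Math.min(BLOCK_SIZE, chunk.length - off);
    return crc(chunk, off, len) == expected;
  }

  private static long crc(byte[] data, int off, int len) {
    CRC32 crc = new CRC32();
    crc.update(data, off, len);
    return crc.getValue();
  }
}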
Measurements

   Chunkserver workload
       Bimodal distribution of small and large files
       Ratio of write to append operations: 3:1 to 8:1
       Virtually no overwrites
   Master workload
        Most requests are for chunk locations and file opens
   Reads achieve 75% of the network limit
   Writes achieve 50% of the network limit
Major Innovations

   File system API tailored to stylized workload
   Single-master design to simplify coordination
   Metadata fit in memory
   Flat namespace
Hadoop File System
Basic Features: HDFS

   Highly fault-tolerant
   High throughput
   Suitable for applications with large data sets
   Streaming access to file system data
   Can be built out of commodity hardware




Fault tolerance

   Failure is the norm rather than the exception
   An HDFS instance may consist of thousands of
     server machines, each storing part of the file
     system’s data.
   Since there are a huge number of components and
     each component has a non-trivial probability of
     failure, there is always some component that is
     non-functional.
   Detection of faults and quick, automatic recovery
    from them is a core architectural goal of HDFS.
Data Characteristics
    Streaming data access
    Applications need streaming access to data
    Batch processing rather than interactive user access.
     Large data sets and files: gigabytes to terabytes in size
    High aggregate data bandwidth
    Scale to hundreds of nodes in a cluster
    Tens of millions of files in a single instance
    Write-once-read-many: a file once created, written and
     closed need not be changed – this assumption
     simplifies coherency
    A map-reduce application or web-crawler application
     fits perfectly with this model.
MapReduce

   [Figure: the MapReduce data flow. Input files ("Cat", "Bat", "Dog", other
    words; terabytes in size) are divided into splits; each split is processed
    by a map task and locally aggregated by a combine step, and the combined
    results are shuffled to reduce tasks, which write the output partitions
    part0, part1, and part2.]
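
   The figure above is the classic word-count pipeline. A compact sketch of
    it against the standard Hadoop MapReduce Java API is shown below; the
    input and output paths come from the command line, and the class and job
    names are placeholders.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(line.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);                  // emit <word, 1> for every token in the split
      }
    }
  }

  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable c : counts) {
        sum += c.get();                            // add up the partial counts for this word
      }
      context.write(word, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);     // the "combine" step in the figure
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output partitions part-r-00000, ...
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}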
ARCHITECTURE

Namenode and Datanodes
    Master/slave architecture
    HDFS cluster consists of a single Namenode, a master server that
     manages the file system namespace and regulates access to files
     by clients.
    There are a number of DataNodes, usually one per node in a
     cluster.
    The DataNodes manage storage attached to the nodes that they
     run on.
    HDFS exposes a file system namespace and allows user data to be
     stored in files.
    A file is split into one or more blocks, and these blocks are stored on
     DataNodes.
    DataNodes serve read and write requests and perform block creation,
     deletion, and replication upon instruction from the Namenode.


HDFS Architecture
   [Figure: HDFS architecture. A client issues metadata operations to the
    single Namenode, which keeps the metadata (file names, replication
    factors, block mappings; e.g. /home/foo/data with replication 6). The
    client reads blocks directly from Datanodes and writes blocks to
    Datanodes spread across racks (Rack 1, Rack 2); the Namenode issues
    block operations to the Datanodes, which replicate blocks among
    themselves.]
File system Namespace

   Hierarchical file system with directories and files
   Create, remove, move, rename etc.
   The Namenode maintains the file system namespace
   Any change to the file system metadata is
     recorded by the Namenode.
   An application can specify the number of replicas
    of the file needed: replication factor of the file. This
    information is stored in the Namenode.


Data Replication
    HDFS is designed to store very large files across
     machines in a large cluster.
    Each file is a sequence of blocks.
    All blocks in the file except the last are of the same
     size.
    Blocks are replicated for fault tolerance.
    Block size and replicas are configurable per file.
    The Namenode receives a Heartbeat and a
     BlockReport from each DataNode in the cluster.
    BlockReport contains all the blocks on a
     Datanode.
Replica Placement
    The placement of the replicas is critical to HDFS reliability and performance.
    Optimizing replica placement distinguishes HDFS from other distributed file
     systems.
    Rack-aware replica placement:
        Goal: improve reliability, availability and network bandwidth utilization
        Research topic
    Many racks; communication between racks goes through switches.
    Network bandwidth between machines on the same rack is greater than between
     machines on different racks.
    Namenode determines the rack id for each DataNode.
    Replicas are typically placed on unique racks
        Simple but non-optimal
        Writes are expensive
        Replication factor is 3
        Another research topic?
    Replicas are placed: one on a node in the local rack, one on a different node in the
     local rack, and one on a node in a different rack.
    With this policy, 1/3 of the replicas are on one node, 2/3 of the replicas are on one
     rack, and the remaining 1/3 are distributed evenly across the remaining racks.
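
   A toy sketch of the placement rule in the last bullet (the Node type and
    chooseTargets helper are hypothetical, not HDFS's BlockPlacementPolicy):
    first replica on the writer's node in the local rack, second on a
    different node in the same rack, third on a node in a different rack.

import java.util.ArrayList;
import java.util.List;

// Hypothetical node descriptor: a node knows which rack it is on.
record Node(String name, String rack) {}

class RackAwarePlacement {
  static List<Node> chooseTargets(Node writer, List<Node> cluster) {
    List<Node> targets = new ArrayList<>();
    targets.add(writer);                             // first replica: a node in the local rack
    for (Node n : cluster) {                         // second replica: different node, same rack
      if (n.rack().equals(writer.rack()) && !n.name().equals(writer.name())) {
        targets.add(n);
        break;
      }
    }
    for (Node n : cluster) {                         // third replica: a node on a different rack
      if (!n.rack().equals(writer.rack())) {
        targets.add(n);
        break;
      }
    }
    return targets;                                  // may hold fewer than 3 on a tiny cluster
  }
}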

Replica Selection

   Replica selection for READ operation: HDFS
    tries to minimize the bandwidth consumption
    and latency.
   If there is a replica on the Reader node then
    that is preferred.
   HDFS cluster may span multiple data centers:
    replica in the local data center is preferred over
    the remote one.

Safemode Startup
    On startup Namenode enters Safemode.
    Replication of data blocks does not occur in Safemode.
    Each DataNode checks in with a Heartbeat and a BlockReport.
    The Namenode verifies that each block has an acceptable number of
     replicas.
    After a configurable percentage of blocks are verified as safely
     replicated, the Namenode exits Safemode.
    It then makes the list of blocks that need to be replicated.
    Namenode then proceeds to replicate these blocks to other
     Datanodes.
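
   A minimal sketch of the exit condition (the threshold mirrors Apache
    Hadoop's dfs.namenode.safemode.threshold-pct setting; the method itself
    is illustrative):

class SafemodeCheck {
  // Leave safemode once the fraction of safely replicated blocks passes the
  // configured threshold (e.g. 0.999 by default in Apache Hadoop).
  static boolean canLeaveSafemode(long safelyReplicatedBlocks, long totalBlocks, double thresholdPct) {
    if (totalBlocks == 0) return true;               // empty namespace: nothing to wait for
    return (double) safelyReplicatedBlocks / totalBlocks >= thresholdPct;
  }
}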




Filesystem Metadata
     The HDFS namespace is stored by Namenode.
     Namenode uses a transaction log called the EditLog to
      record every change that occurs to the filesystem meta
      data.
          For example, creating a new file.
          Change replication factor of a file
          EditLog is stored in the Namenode’s local filesystem
      The entire filesystem namespace, including the mapping of blocks
       to files and file system properties, is stored in a file called
       FsImage, also kept in the Namenode’s local filesystem.


Namenode
    Keeps image of entire file system namespace and file
     Blockmap in memory.
    4GB of local RAM is sufficient to support the above
     data structures that represent the huge number of files
     and directories.
    When the Namenode starts up, it reads the FsImage and
     EditLog from its local file system, applies the EditLog
     transactions to the FsImage, and then stores a copy of the
     FsImage on the filesystem as a checkpoint.
    Periodic checkpointing is done so that the system can
     recover to the last checkpointed state in case of
     a crash.

Datanode
    A Datanode stores data in files in its local file system.
    Datanode has no knowledge about HDFS filesystem
    It stores each block of HDFS data in a separate file.
    Datanode does not create all files in the same
     directory.
    It uses heuristics to determine optimal number of files
     per directory and creates directories appropriately:
        Research issue?
    When a Datanode starts up, it generates a list of all the
     HDFS blocks it stores and sends this report to the Namenode:
     the Blockreport.
PROTOCOL

The Communication Protocol
    All HDFS communication protocols are layered on
     top of the TCP/IP protocol
    A client establishes a connection to a configurable
     TCP port on the Namenode machine. It talks
     ClientProtocol with the Namenode.
    The Datanodes talk to the Namenode using
     Datanode protocol.
    RPC abstraction wraps both ClientProtocol and
     Datanode protocol.
    Namenode is simply a server and never initiates a
     request; it only responds to RPC requests issued
     by DataNodes or clients.
ROBUSTNESS

Objectives

   Primary objective of HDFS is to store data
    reliably in the presence of failures.
   Three common failures are: Namenode failure,
    Datanode failure and network partition.




DataNode failure and heartbeat
     A network partition can cause a subset of Datanodes to
      lose connectivity with the Namenode.
     Namenode detects this condition by the absence of a
      Heartbeat message.
      The Namenode marks Datanodes without a recent Heartbeat as dead
       and does not send any I/O requests to them.
      Any data registered to a failed Datanode is no longer available
       to HDFS.
     Also the death of a Datanode may cause replication factor
      of some of the blocks to fall below their specified value.
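
   A small sketch of the detection rule (names are hypothetical; the real
    Namenode keeps this state per registered Datanode):

class DatanodeLiveness {
  // A Datanode is considered dead once it has missed heartbeats for longer
  // than the expiry interval; the Namenode then stops sending it I/O requests
  // and schedules re-replication for blocks that fall below their target.
  static boolean isDead(long lastHeartbeatMillis, long nowMillis, long expiryIntervalMillis) {
    return nowMillis - lastHeartbeatMillis > expiryIntervalMillis;
  }
}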


Re-replication

   The necessity for re-replication may arise due
    to:
       A Datanode may become unavailable,
       A replica may become corrupted,
       A hard disk on a Datanode may fail, or
       The replication factor on the block may be
        increased.




Cluster Rebalancing
     HDFS architecture is compatible with data rebalancing
      schemes.
     A scheme might move data from one Datanode to another
      if the free space on a Datanode falls below a certain
      threshold.
     In the event of a sudden high demand for a particular file, a
      scheme might dynamically create additional replicas and
      rebalance other data in the cluster.
     These types of data rebalancing are not yet implemented:
      research issue.



Data Integrity
     Consider a situation: a block of data fetched from
      Datanode arrives corrupted.
     This corruption may occur because of faults in a storage
      device, network faults, or buggy software.
      An HDFS client computes a checksum of every block of its
       file and stores it in a hidden file in the HDFS namespace.
      When a client retrieves the contents of a file, it verifies that
       the data matches the corresponding checksums.
      If they do not match, the client can retrieve the block from
       another replica.


Metadata Disk Failure
     FsImage and EditLog are central data structures of HDFS.
     A corruption of these files can cause a HDFS instance to
      be non-functional.
     For this reason, a Namenode can be configured to
      maintain multiple copies of the FsImage and EditLog.
     Multiple copies of the FsImage and EditLog files are
      updated synchronously.
     Meta-data is not data-intensive.
      The Namenode is a single point of failure: automatic
       failover is NOT supported! Another research topic.


DATA ORGANIZATION

Data Blocks

   HDFS supports write-once-read-many semantics with
     reads at streaming speeds.
   A typical block size is 64 MB (or even 128 MB).
   A file is chopped into 64 MB blocks and stored.




Staging

      A client request to create a file does not reach the Namenode
       immediately.
      The HDFS client caches the data in a local temporary file. When
       the data reaches an HDFS block size, the client contacts the
       Namenode.
      The Namenode inserts the filename into its hierarchy and
       allocates a data block for it.
      The Namenode responds to the client with the identity of
       the Datanode and the destination replicas
       (Datanodes) for the block.
      The client then flushes the block from its local temporary file
       to the designated Datanode.

Staging (contd.)

   The client sends a message that the file is
    closed.
   The Namenode then commits the file creation
     operation into its persistent store.
   If the Namenode dies before the file is closed, the
     file is lost.
   This client-side caching is required to avoid
     network congestion; the approach has precedent in
     AFS (the Andrew File System).
Replication Pipelining

   When the client receives the response from the
     Namenode, it flushes its block in small pieces
     (4 KB) to the first replica, which in turn copies them to
     the next replica, and so on.
   Thus data is pipelined from one Datanode to the
     next.
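
   A sketch of the client's side of this pipeline (plain Java streams stand
    in for the real Datanode transfer protocol):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

class BlockPusher {
  // The client pushes the block to the first replica in 4 KB pieces; each
  // Datanode forwards the pieces it receives to the next one in the pipeline.
  static void pushBlock(InputStream blockData, OutputStream firstReplica) throws IOException {
    byte[] piece = new byte[4 * 1024];
    int n;
    while ((n = blockData.read(piece)) != -1) {
      firstReplica.write(piece, 0, n);
    }
    firstReplica.flush();
  }
}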




API (ACCESSIBILITY)


Application Programming Interface

   HDFS provides a Java API for applications to use.
   Python access is also used in many
     applications.
   A C language wrapper for the Java API is also
     available.
   An HTTP browser can be used to browse the
     files of an HDFS instance.
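
   A minimal example against the Hadoop FileSystem Java API (the path
    /tmp/hello.txt and the class name are placeholders; the configuration is
    picked up from core-site.xml / hdfs-site.xml on the classpath):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHello {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();      // reads core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/tmp/hello.txt");        // placeholder path

    // Write: create() returns a stream backed by the staging/pipelining machinery.
    try (FSDataOutputStream out = fs.create(file, true /* overwrite */)) {
      out.write("hello, hdfs\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read: open() streams the block(s) back from the Datanodes.
    try (FSDataInputStream in = fs.open(file);
         BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
      System.out.println(reader.readLine());
    }

    fs.setReplication(file, (short) 3);            // per-file replication factor
  }
}

   The same FileSystem handle also exposes mkdirs(), delete(), and
    listStatus() for the namespace operations mentioned on the earlier slides.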



FS Shell, Admin and Browser
    Interface
  HDFS organizes its data in files and directories.
 It provides a command line interface called the FS shell
   that lets the user interact with data in the HDFS.
 The syntax of the commands is similar to bash and csh.

 Example: to create a directory /foodir

/bin/hadoop dfs -mkdir /foodir
 There is also a DFSAdmin interface available

 A browser interface is also available to view the namespace.




Space Reclamation
      When a file is deleted by a client, HDFS renames it to a
       file in the /trash directory for a configurable amount of
       time.
      A client can request an undelete within this allowed time.
      After the specified time the file is deleted and the space is
       reclaimed.
      When the replication factor is reduced, the Namenode
       selects excess replicas that can be deleted.
      The next Heartbeat transfers this information to the
       Datanode, which removes the blocks and frees the space for reuse.



Summary

   We discussed the features of the Hadoop File
    System, a peta-scale file system to handle big-
    data sets.
   What we discussed: architecture, protocol, API,
     etc.
   Missing element: Implementation
       The Hadoop file system (internals)
       An implementation of an instance of the HDFS (for
        use by applications such as web crawlers).
