Key insights into installing, configuring, and running Hadoop and Cloudera's Distribution for Hadoop in production. These are lessons learned from Cloudera's work helping organizations move to a production state with Hadoop.
1. Welcome to Production-izing Hadoop: Lessons Learned
Audio/Telephone: +1 916 233 3087
Access Code: 616-465-108
Audio PIN: Shown after joining the Webinar
2. Housekeeping
• Ask questions at any time using the questions panel
• Problems? Use the chat panel
• Book drawing - winner announced at the end
• Slides and recording will be available
Copyright 2010 Cloudera Inc. All rights reserved 2
3. Poll
What is your interest in Hadoop?
• Just learning about it
• I have a problem I think Hadoop can solve
• Using Hadoop in our labs
• Using Hadoop in production
4. Speaker: Eric Sammer
Eric is a Solution Architect and Training Instructor for Cloudera. He has
worked with dozens of customers in a variety of industries including
Cloudera's largest Hadoop deployments. His experience ranges from
clusters of a few nodes to clusters with hundreds of nodes with complex
multi-tenant user environments.
Prior to joining Cloudera, he held roles including System Architect, Director
of Technical Operations, and Tech Lead at various New York City startups
focusing on distributed data collection, processing, and reporting systems.
Eric has over 12 years of experience in development and technical operations and has
contributed to various open source projects such as Gentoo Linux.
twitter: @esammer, @cloudera
5. Starting Out
(You)
“Let’s build a Hadoop cluster!”
http://www.iccs.inf.ed.ac.uk/~miles/code.html
6. Starting Out
(You)
http://www.iccs.inf.ed.ac.uk/~miles/code.html
7. Where you want to be
(You)
Yahoo! Hadoop Cluster (2007)
8. What is Hadoop?
• A scalable fault-tolerant distributed system for data storage
and processing (open source under the Apache license)
• Core Hadoop has two main components
• Hadoop Distributed File System (HDFS): self-healing high-bandwidth
clustered storage
• MapReduce: fault-tolerant distributed processing
• Key value
• Flexible -> store data without a schema and apply one later as needed
• Affordable -> cost / TB at a fraction of traditional options
• Broadly adopted -> a large and active ecosystem
• Proven at scale -> dozens of petabyte + implementations in
production today
9. Cloudera’s Distribution for Hadoop, Version 3
The Industry’s Leading Hadoop Distribution
CDH3 components: Hue, Hue SDK, Oozie, Pig, Hive, Flume, Sqoop, HBase, ZooKeeper
• Open source – 100% Apache licensed
• Simplified – Component versions & dependencies managed for you
• Integrated – All components & functions interoperate through standard APIs
• Reliable – Patched with fixes from future releases to improve stability
• Supported – Employs project founders and committers for >70% of components
10. Overview
• Proper planning
• Data Ingestion
• ETL and Data Processing Infrastructure
• Authentication, Authorization, and Sharing
• Monitoring
11. The production data platform
• Data storage
• ETL / data processing / analysis infrastructure
• Data ingestion infrastructure
• Integration with tools
• Data security and access control
• Health and performance monitoring
12. Proper planning
• Know your use cases!
• Log transformation, aggregation
• Text mining, IR
• Analytics
• Machine learning
• Critical to proper configuration
• Hadoop
• Network
• OS
• Resource utilization, deep job insight will tell you more
13. HDFS Concerns
• NameNode availability
• HA is tricky
• Consider where Hadoop lives in the system
• Manual recovery can be simple, fast, effective
• Backup Strategy
• NameNode metadata – hourly, ~2 day retention
• User data
• Log shipping style strategies
• DistCp
• “Fan out” to multiple clusters on ingestion
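The metadata backup policy above (hourly copies, ~2 day retention) reduces to a small pruning routine. A minimal sketch in Python; the backup listing, filenames, and timestamps are all hypothetical, and producing the copies themselves (e.g. from the secondary NameNode's checkpoint directory) is out of scope here.

```python
# Hedged sketch: prune hourly NameNode metadata backups, keeping ~2 days.
# `backups` is a list of (path, mtime) pairs from some backup directory.
RETENTION_SECONDS = 2 * 24 * 60 * 60  # ~2 day retention, as above

def prune_backups(backups, now, retention=RETENTION_SECONDS):
    """Split backups into (keep, delete) by age against the retention window."""
    keep, delete = [], []
    for path, mtime in backups:
        (keep if now - mtime <= retention else delete).append(path)
    return keep, delete
```

Run from cron alongside the hourly copy step; the same shape works for the "log shipping" style user-data backups.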
14. Data Ingestion
• Many data sources
• Streaming data sources (log files, mostly)
• RDBMS
• EDW
• Files (usually exports from 3rd party)
• Common place we see DIY
• You probably shouldn’t
• Sqoop, Flume, Oozie (but I’m biased)
• No matter what: ingestion must be fault tolerant, performant, and monitored
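A hedged sketch of the "fault tolerant, monitored" requirement: wrap any ingestion step in bounded retries with a failure hook, so a broken feed pages someone instead of silently dropping data. `fetch` and `load` are stand-ins for whatever actually moves the data (a Flume flow, a Sqoop import, a file pull).

```python
import time

def ingest_with_retries(fetch, load, max_attempts=3, backoff_seconds=1,
                        on_failure=lambda exc: None):
    """Run a fetch/load ingestion step with retries and a terminal-failure hook.

    `on_failure` is where monitoring plugs in (alert, metric, dead-letter);
    the exception is still raised so the calling workflow sees the failure.
    """
    last_exc = None
    for attempt in range(max_attempts):
        try:
            return load(fetch())
        except Exception as exc:
            last_exc = exc
            time.sleep(backoff_seconds * (attempt + 1))  # simple linear backoff
    on_failure(last_exc)
    raise last_exc
```

The point is the shape, not the code: every DIY ingestion path needs retry, backoff, and an alerting hook, which is much of what Flume and Sqoop already provide.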
15. ETL and Data Processing
• Non-interactive jobs
• Establish a common directory structure for processes
• Need tools to handle complex chains of jobs
• Workflow tools support
• Job dependencies, error handling
• Tracking
• Invocation based on time or events
• Most common mistake: depending on jobs always
completing successfully or within a window of time.
• Monitor for SLA rather than pray
• Defensive coding practices apply just as they do everywhere else!
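The "common directory structure" and "monitor for SLA" points above can be sketched together: one helper that builds every dataset path the same way, and one that flags jobs past their window instead of assuming success. The layout and job names are illustrative, not a standard.

```python
from datetime import datetime

def partition_path(stage, dataset, when):
    """Build /data/<stage>/<dataset>/yyyy/mm/dd under a shared (hypothetical)
    layout, so every job reads and writes through one convention."""
    return "/data/%s/%s/%04d/%02d/%02d" % (
        stage, dataset, when.year, when.month, when.day)

def sla_violations(deadlines, completed, now):
    """Flag jobs past their deadline and not yet done, rather than assuming
    every run finishes on time. `deadlines` maps job name -> deadline epoch."""
    return sorted(job for job, deadline in deadlines.items()
                  if job not in completed and now > deadline)
```

In practice the SLA check runs from the same scheduler that launches the jobs, and a violation fires an alert rather than returning a list.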
16. Metadata Management
• Tool independent metadata about…
• Data sets we know about and their location (on HDFS)
• Schemata
• Authorization (currently HDFS permissions only)
• Partitioning
• Format and compression
• Guarantees (consistency, timeliness, permits duplicates)
• Currently still DIY in many ways, tool-dependent
• Most people rely on prayer and hard coding
• (H)OWL is interesting
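The DIY state of metadata management usually looks something like the sketch below: one hand-rolled registry recording where each dataset lives, how it is partitioned, and what format to expect, instead of hard-coded paths scattered across jobs. All entries are illustrative.

```python
# Minimal hand-rolled dataset registry: the tool-independent metadata the
# slide lists (location, partitioning, format/compression) in one place.
REGISTRY = {
    "weblogs": {
        "location": "/data/raw/weblogs",
        "partitioning": "yyyy/mm/dd",
        "format": "text",
        "compression": "gzip",
    },
}

def locate(dataset, partition=None):
    """Resolve a dataset (and optional partition) to an HDFS path."""
    entry = REGISTRY[dataset]
    if partition is None:
        return entry["location"]
    return entry["location"] + "/" + partition
```

Even this much beats hard coding: when a dataset moves or changes compression, one entry changes instead of every job.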
17. Authentication and authorization
• Authentication
• Don’t talk to strangers
• Should integrate with existing IT infrastructure
• Yahoo! security (Kerberos) patches now part of CDH3b3
• Authorization
• Not everyone can access everything
• Ex. Production data sets are read-only to quants / analysts. Analysts
have home or group directories for derived data sets.
• Mostly enforced via HDFS permissions; directory structure and
organization is critical
• Not as fine grained as column level access in EDW, RDBMS
• HUE as a gateway to the cluster
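The access model above (production read-only to analysts, writable home/group areas) can be sketched as a table of path prefixes and grants. Paths and group names are hypothetical; in a real cluster this is enforced by HDFS permissions on a carefully organized directory tree, not application code.

```python
# Sketch of the layout described above: production data read-only to
# analysts, per-group areas writable. Mode strings are "r", "w", or "rw".
RULES = [
    ("/data/production", {"analysts": "r", "etl": "rw"}),
    ("/user/analysts", {"analysts": "rw"}),
]

def allowed(group, path, mode):
    """First matching prefix wins; no match means no access."""
    for prefix, grants in RULES:
        if path.startswith(prefix):
            return mode in grants.get(group, "")
    return False
```

Writing the rules down like this, even informally, is a useful exercise before translating them into owners, groups, and mode bits on HDFS.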
18. Resource Sharing
• Prefer one large cluster to many small clusters (unless
maybe you’re Facebook)
• “Stop hogging the cluster!”
• Cluster resources
• Disk space (HDFS size quotas)
• Number of files (HDFS file count quotas)
• Simultaneous jobs
• Tasks – guaranteed capacity, full utilization, SLA enforcement
• Monitor and track resource utilization across all groups
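The "guaranteed capacity, full utilization" idea for task slots can be sketched as a toy allocator: each pool first gets up to its guarantee, then leftover slots go to pools with unmet demand so nothing sits idle. This is a simplification of what the fair and capacity schedulers actually do; pool names and numbers are made up.

```python
def allocate_slots(total, pools):
    """Allocate `total` task slots. `pools` maps name -> (guarantee, demand).
    Assumes guarantees sum to at most `total`."""
    alloc = {name: min(demand, guarantee)
             for name, (guarantee, demand) in pools.items()}
    leftover = total - sum(alloc.values())
    while leftover > 0:
        hungry = [name for name, (g, demand) in pools.items()
                  if alloc[name] < demand]
        if not hungry:
            break  # all demand met; remaining slots stay free
        for name in hungry:  # round-robin the slack for full utilization
            if leftover == 0:
                break
            alloc[name] += 1
            leftover -= 1
    return alloc
```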
19. Monitoring
• Critical for keeping things running
• Cluster health
• Duh.
• Traditional monitoring tools: Nagios, Hyperic, Zenoss
• Host checks, service checks
• When to alert? It’s tricky.
• Cluster performance
• Overall utilization in aggregate
• 30,000ft view of utilization and performance; macro level
20. Monitoring
• Hadoop aware cluster monitoring
• Traditional tools don’t cut it; useful checks need Hadoop-specific knowledge
• Analogous to RDBMS monitoring tools
• Job level “monitoring”
• More like analysis
• “What resources does this job use?”
• “How does this run compare to last run?”
• “How can I make this run faster, more resource efficient?”
• Two views we care about
• Job perspective
• Resource perspective (task slots, scheduler pool)
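The "how does this run compare to last run?" question above is, at its core, a diff over job counters (HDFS bytes read, CPU time, spilled records, and so on). A minimal sketch; counter names are illustrative, and real counters come from the JobTracker's job history.

```python
def compare_runs(previous, current):
    """Report the relative change for each counter present in both runs.
    Counters new in this run, or zero last run, are skipped to avoid
    division by zero; a real tool would surface those separately."""
    report = {}
    for counter, new_value in current.items():
        old_value = previous.get(counter)
        if old_value:
            report[counter] = (new_value - old_value) / float(old_value)
    return report
```

A 50% jump in bytes read or spills between otherwise identical runs is exactly the kind of regression this view is meant to catch.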
21. Wrapping it up
• Hadoop proper is awesome, but is only part of the
picture
• Much of Professional Services time is filling in the blanks
• There’s still a way to go
• Metadata management
• Operational tools and support
• Improvements to Hadoop core to improve stability, security,
manageability
• Adoption and feedback drive progress
• CDH provides the infrastructure for a complete system
22. Cloudera Makes Hadoop Safe For the Enterprise
Software • Services • Training
23. Cloudera Enterprise
Enterprise Support and Management Tools
• Increases reliability and consistency of the Hadoop platform
• Improves Hadoop’s conformance to important IT policies and procedures
• Lowers the cost of management and administration
24. References / Resources
• Cloudera documentation - http://docs.cloudera.com
• Cloudera Groups – http://groups.cloudera.org
• Cloudera JIRA – http://issues.cloudera.org
• Hadoop the Definitive Guide
• esammer@cloudera.com
• irc.freenode.net #cloudera, #hadoop
• @esammer
25. Poll
What other topics would you be most interested in
hearing about?
• More case studies of enterprises using Hadoop
• Technical "How to" sessions
• Industry specific applications of Hadoop
• Technical overviews of Hadoop and related components
26. Winner of the drawing is…
27. Q&A
Learn about upcoming events: www.cloudera.com/events
DBTA Webinar: Thursday, December 9th, 11am PT / 1pm ET
New Solutions for the Data Intensive Enterprise
Register at www.cloudera.com/events
Thank you for attending.