Big Data Steven Noels & Wim Van Leuven SAI, 7 April 2011
Houston, we have a problem. IDC predicts the Digital Universe will reach 35 zettabytes by 2020. 1 zettabyte = 1,000,000,000,000,000,000,000 bytes, or 1 billion terabytes
- Consider disk access times: how long does it take to read a full 1 TB disk compared to the 4 MB hard disk of 20 years ago? - Amazon even lets you ship physical hard disks to load your data
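A quick back-of-the-envelope calculation makes the point. The throughput figures below are illustrative assumptions, not measured numbers:

```python
# Back-of-the-envelope: how long does it take to read a full disk
# sequentially? Throughput figures are illustrative assumptions.

def scan_time_seconds(capacity_bytes, throughput_bytes_per_s):
    """Time to read the whole disk at sustained sequential throughput."""
    return capacity_bytes / throughput_bytes_per_s

TB = 10**12
MB = 10**6

# A 1 TB disk at ~100 MB/s sustained takes hours to scan:
modern = scan_time_seconds(1 * TB, 100 * MB)
print(f"1 TB @ 100 MB/s: {modern / 3600:.1f} hours")  # roughly 2.8 hours

# A 20-year-old 4 MB drive at ~1 MB/s was done in seconds:
old = scan_time_seconds(4 * MB, 1 * MB)
print(f"4 MB @ 1 MB/s: {old:.0f} seconds")
```

Capacity has grown far faster than throughput, so a single disk can no longer deliver its own contents in reasonable time.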
- The only solution is to divide the work across more than one node, bringing us to cluster technology - but ... clusters have their own programming challenges, e.g. workload management, distributed locking and distributed transactions - and clusters have one property above all ... Does anyone know which?
- Failure! Nodes will certainly fail. In large setups, something is breaking down continuously. - ... making it even more difficult to build software for the grid. - It needs to be fault-tolerant, but also self-orchestrating and self-healing - Assistance you will be needing: standing on the shoulders of giants
- A distributed file system for highly available data - MapReduce to bring the logic to the data on the nodes and bring back the results - BigTable & Dynamo to add real-time read/write access to big data - with FOSS implementations which allow us to build applications, not the plumbing ...
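The MapReduce idea can be sketched in a few lines. This toy in-process word count only simulates the phases; a real framework runs the map tasks on the nodes that hold the data and shuffles intermediate pairs across the network:

```python
from collections import defaultdict
from itertools import chain

# Toy MapReduce word count, simulating the three phases in-process.

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in an input line."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle: group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: sum the counts for one word."""
    return key, sum(values)

lines = ["big data big clusters", "data on clusters"]
intermediate = chain.from_iterable(map_phase(l) for l in lines)
result = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(result)  # {'big': 2, 'data': 2, 'clusters': 2, 'on': 1}
```

The crucial point is that map and reduce are side-effect-free functions, so the framework is free to schedule them next to the data and rerun them on node failure.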
Although the basic functions of these technologies are rather simple, their implementations hardly are. - They represent the state of the art in operating and distributed systems research: distributed hash tables (DHT), consistent hashing, distributed versioning, vector clocks, quorums, gossip protocols, anti-entropy based recovery, etc. - ... with an industrial/commercial angle: Amazon, Google, Facebook, ... Let's explain some of the basic technologies
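As one example from that list, consistent hashing can be sketched in a few lines. This is a minimal ring under simplifying assumptions (MD5 as hash, made-up node names); real stores add replication and failure detection on top:

```python
import hashlib
from bisect import bisect

# Minimal consistent-hashing ring. Virtual nodes (vnodes) smooth out the
# key distribution across physical nodes; node names are illustrative.

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Walk clockwise on the ring to the first vnode at/after the key."""
        idx = bisect(self.keys, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))
```

The payoff: adding or removing a node only remaps the keys adjacent to its vnodes on the ring, instead of reshuffling everything the way a naive `hash(key) % n` scheme would.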
The most important classifier for scalable stores is where they sit in the CAP theorem: CA, AP or CP
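In practice, where a store lands on the AP/CP axis is often tunable per operation through the quorum settings mentioned earlier. A sketch of the arithmetic, with N replicas, W write acknowledgements and R read acknowledgements:

```python
# Quorum rule: if R + W > N, every read quorum overlaps every write
# quorum in at least one replica, so reads see the latest acked write.

def is_strongly_consistent(n, r, w):
    """True when read and write quorums must overlap."""
    return r + w > n

print(is_strongly_consistent(3, 2, 2))  # True  -> CP-leaning setup
print(is_strongly_consistent(3, 1, 1))  # False -> AP-leaning, eventual
```

Smaller quorums keep the store answering during partitions at the price of stale reads; larger quorums buy consistency at the price of availability.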
Key-value stores (Amazon Dynamo) Column-family stores (Google BigTable) Document stores (MongoDB) Graph DBs (Neo4j) Please remember: scalability, availability and resilience come at a cost
RDBMSs scale to reasonable proportions and bring commodity technology, tools, knowledge and experience. Big Data stores are still rather uncharted territory, lacking tools, standardized APIs, etc. Weigh the cost of hardware against the cost of learning. Do your homework!
ref http://www.slideshare.net/quipo/nosql-databases-why-what-and-when A good overview of the different OSS and commercial implementations, with their classification and features, slides 96 ...
- Only basic support for secondary indexes; better to use full-text search tools like Solr or Katta, which also add faceted search possibilities. - Implement joins by denormalization, meaning consistency has to be maintained by the application, i.e. DIY. - Transactions are mostly non-existent, meaning you have to divide your application to support data statuses and/or implement counter-transactions for failures. - No true query language, but MapReduce jobs or more high-level languages like HiveQL and Pig Latin. However, these are not very interactive; they are rather meant for ETL and reporting. Think data warehouse.
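The denormalization and counter-transaction points above can be sketched together. A plain dict stands in for any key-value API, and all names (`place_order`, the key layout) are illustrative:

```python
# Join-by-denormalization: the "join" between users and their orders is
# precomputed at write time, so the application, not the store, keeps
# the copies consistent. A dict stands in for the KV store here.

store = {}

def place_order(user_id, order_id, item):
    store[f"order:{order_id}"] = {"user": user_id, "item": item}
    # Denormalized copy: append to the user's order list for cheap reads.
    store.setdefault(f"user:{user_id}:orders", []).append(order_id)

def cancel_order(user_id, order_id):
    """Counter-transaction: with no rollback, undo both writes by hand."""
    store.pop(f"order:{order_id}", None)
    store.get(f"user:{user_id}:orders", []).remove(order_id)

place_order("u1", "o1", "disk")
print(store["user:u1:orders"])  # ['o1']
cancel_order("u1", "o1")
```

Note that between the two writes in `place_order` a reader can observe an inconsistent state; handling that window (data statuses, retries) is exactly the DIY burden the slide describes.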