Vote NO for MySQL - Election 2012: NoSQL. Researchers predict a dark future for MySQL. Significant market loss to come. Are things that bad, is MySQL falling behind? A look at NoSQL, an attempt to identify different kinds of NoSQL stores, their goals and how they compare to MySQL 5.6. Focus: Key Value Stores and Document Stores. MySQL versus NoSQL means looking behind the scenes, taking a step back and looking at the building blocks.
Vote NO for MySQL
1. Ulf Wendel,
MySQL/Oracle
VOTE NO for MySQL
Election 2012: NoSQL
2. The speaker says...
451 Research predicts* MySQL usage of 80% in 2012, and:
• Loss of 25% within five years
• Down to 55% in 2017
• 30%** have looked into or already use NoSQL
Hmm, I think I am looking for a new job. Are you looking for
Ulf Wendel™? I am for sale!
* http://de.slideshare.net/mattaslett/mysql-vs-nosql-and-newsql-survey-results-13073043
** MySQL users of the 451 Research sample
3. Swipe, tap, click, and zoom
1234 - I am super FAST {"documents":"rock"}
4567 - LIGHTNING fast {"mysql":"not"}
7890 - [1][2][3][4][5] {"documents":"rock"}
abcd - [1[a,b,c]],[2[d,e,f]] {"mysql":"not"}
_de_- 1,281,2828,2,173,8 {"documents":"rock"}
Key Value Store Document Database
010101001011010
101010110100101
101010010101010
101010110101011
101010110111101
Graph Database Big Data/Column-oriented
4. The speaker says...
Will It Blend?* Will I need a new job? That is the question.
Knives of four different kinds of Not Only SQL are attacking
MySQL.
* http://www.youtube.com/user/Blendtec
5. Swipe, tap, click, and zoom
Memcache, Redis, MongoDB, CouchDB,
LevelDB, BerkeleyDB... Riak, ...
Key Value Store Document Database
Neo4j, FlockDB... BigTable, Hadoop,
Cassandra…
Graph Database Big Data
6. The speaker says...
Time for voting:
• Which systems are you using?
• Since when?
Please note, LevelDB itself is a Key Value Store but Riak
embeds it. Riak supports many storage engines and thus it
can be bent in many directions.
7. A new era of databases
Scalable
Elastic
Explore the benefits
Highly Available
Easy To Use
8. The speaker says...
Not Only SQL databases aim to be scalable. From one node
to one thousand nodes in no time, and back, depending on both
query load and data size! Sharding built-in! The chewing gum
of the cloud era!
Master down? No problem. Some don't do lame primary
copy. Paxos and others ensure the database cluster
survives the failure of nodes – including primaries, if any.
Hot at conferences: JavaScript, HTTP, JSON, you name it –
the ingredients of today's web applications. No room for MySQL?
9. RDBMS are from the 80's, at best
**** MYSQL 5.6+ BEYOND 3.23 ****
MYSQL CLUSTER AUTO_SHARDING 99.999% HA
MYSQL INNODB MEMCACHE PLUGIN
MYSQL CLUSTER NODE_JS
READY
Are you ready to retire ?!
?SYNTAX ERROR
10. The speaker says...
80's at best? A claim from NoSQL folks... MySQL Cluster
scales: 4 billion ACID tpm using 30 out of 48 possible nodes,
auto sharding, condition push down, five-nines availability,
synchronous replication, in-memory and on disk. Coming
with 7.3: foreign key support.
The #1 Key-Value-Store for MySQL users is Memcache.
MySQL 5.6 includes a Memcache plugin. Clients can access
InnoDB tables using SQL or Memcache protocol. This is the
next generation Memcache for MySQL: persistence and
replication included.
BTW, Couchbase Server 2.0 = CouchDB + Membase.
11. Welcome back, '70s?!
IBM Information Management System R/360 TASK:197x
II MM MM SSSSS
II M M M M SS
II M M M SS
II M M M SSSS
II M M SS
II M M SS
II M M SSSSS
(1) COUCHDB (2) MONGODB
READY
#
---------------------------------------------------
abcdefghijklmnopqrstuvwxyz#+-*./%$():,;?&0123456789
12. The speaker says...
How about this claim: document databases such as
CouchDB and MongoDB have their roots in the 1970s. The
hierarchical data model is older than the relational data
model.
Both databases manage documents in forests of trees. The
primary operation is a simple scan on a tree. Additionally,
the user can embed links to reference other trees.
Relations are not an integral part of the data model.
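As an illustration, a sketch of the hierarchical model in a few lines of Python (the data and names are made up): each document is a tree, and relations between trees are hand-maintained links.

```python
# Each document is a self-contained tree; nested parts are read in one
# scan. Cross-tree relations are manual links - the model has no JOIN.
blog_post = {
    "_id": "post:1",
    "title": "Vote NO for MySQL",
    "comments": [                    # nested tree: no join needed
        {"author": "Ulf", "text": "+1"},
    ],
    "author_ref": "user:42",         # embedded link to another tree
}
users = {"user:42": {"name": "Ulf"}}

# "Joining" means following the link yourself.
author = users[blog_post["author_ref"]]
print(author["name"])  # Ulf
```

The convenience of the nested comments comes with the burden of resolving `author_ref` by hand in application code.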
13. SCNR - Curriculum vitae
Until 2005: Senior Developer
Lotus Notes Team. Lotus Notes
is a document-oriented,
distributed database
introduced in 1984.
Since 2005: CouchDB Hero
15. The speaker says...
One way or the other, whether I have to search for a new job
or not, I need to take them seriously. People use NoSQL!
There must be something smart...
For example, CouchDB:
• Speaks HTTP – iPhone browser can connect directly
• Returns JSON – iPhone browser can display it
• Server-side JavaScript – app server built-in
• Replication with conflict detection – sync mobile device
16. Swipe and zoom
1234 - I am super FAST {"documents":"rock"}
4567 - LIGHTNING fast {"mysql":"not"}
Zoom!
7890 - [1][2][3][4][5]
abcd - [1[a,b,c]],[2[d,e,f]]
Zoom!
{"documents":"rock"}
{"mysql":"not"}
_de_- 1,281,2828,2,173,8 {"documents":"rock"}
Key Value Store Document Database
010101001011010
101010110100101
Swipe! Swipe!
101010010101010
101010110101011
101010110111101
Graph Database Big Data/Table
17. The speaker says...
We identified four flavours of Not Only SQL databases.
Which ones are relevant to me when searching for a new
employer?
Key-Value-Stores are popular. Document Databases are
being considered for web applications. Click or tap to zoom!
Graph databases and Big Data are beyond the scope of the
presentation: specialist tools for special purposes. Swipe!
18. Zoom! Key Value Store
High Performance
1234 - I am super FAST
4567 - LIGHTNING fast Limited Search and Types
7890 - [1][2][3][4][5]
abcd - [1[a,b,c]],[2[d,e,f]]
_de_- 1,281,2828,2,173,8
Key Value Store Scalable
Limited Persistence
19. The speaker says...
A Key Value Store shines with its simple data model, which is
that of an associative array/hash. The data model is
poor at ad-hoc queries: lose the key and you lock your data
in the treasure chest. But, it is fast. A need for speed has led
to many in-memory solutions in this class. A perfect model for
use as a cache. If used as a cache, persistence is often
secondary. Generally speaking the data model is perfect for
partitioning/sharding. There are no operations covering
multiple values, thus values can be distributed on multiple
nodes to scale the system.
Most operations are basic (CRUD). Redis stands out with
complex data types and corresponding commands.
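The trade-off above can be sketched in a few lines of Python (the keys and values are made up): the key is the only access path, so an ad-hoc query degrades to a full scan.

```python
# Minimal key-value store sketch: the data model is an associative array.
store = {}

# CRUD is all there is: create/update, read, delete - always by key.
store["1234"] = "I am super FAST"
store["4567"] = "LIGHTNING fast"
print(store["1234"])   # O(1) lookup: the key is the only access path

# An ad-hoc query degrades to a full scan over all values -
# "lose the key and you lock your data in the treasure chest".
hits = sorted(k for k, v in store.items() if "fast" in v.lower())
print(hits)            # every entry had to be inspected
```

Because no operation spans multiple values, nothing stops the entries from living on different nodes, which is why the model partitions so well.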
20. Zoom! Redis and Memcache
High Performance
• Sure: simple data model, in-memory, lightweight client
protocols, non-blocking clients, pipelining, streaming...
Limited Search and Types
• Redis: strings and abstract types/data structures but no
checks. Maybe, Lua-scripting for search? Blocking task
Scalable
• Redis: single-threaded, lazy primary copy replication, sharding
cluster planned
Limited Persistence
• Memcache: none. Redis: snapshot + WAL (AOF) – recovery time
21. The speaker says...
Both Redis and Memcache are proven and popular
solutions. However, due to their young age some features
still have limitations.
For example, lack of multi-core CPU support. When
comparing an operation on MySQL 5.0 and MySQL 5.6
using only one concurrent client, don't be disappointed if it
has not become much faster. Focus is on scaling in the area
of 30...40 CPU threads.
For example, persistence. Worst case recovery: the WAL
holds all changes accumulated since the last snapshot. Replay
takes minutes, albeit with no transactions to roll back...?
Maybe we can bend MySQL?
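The snapshot-plus-WAL recovery scheme (as in Redis RDB + AOF) can be sketched as follows; the state and log entries are made up for illustration.

```python
# Sketch of snapshot + write-ahead log (WAL) persistence. Worst case
# recovery must replay every change recorded since the last snapshot.
snapshot = {"x": 1}                       # state at the last dump
wal = [("set", "y", 2), ("set", "x", 9)]  # changes logged afterwards

def recover(snapshot, wal):
    state = dict(snapshot)        # 1. load the last snapshot
    for op, key, value in wal:    # 2. replay the log, entry by entry
        if op == "set":
            state[key] = value
    return state

print(recover(snapshot, wal))  # {'x': 9, 'y': 2}
```

The longer the interval between snapshots, the longer the log, and the longer step 2 takes on restart, which is exactly the recovery-time concern above.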
22. MySQL 5.6 InnoDB Memcache
RDBMS and Key Value Store combined
• Benefits of a mature RDBMS
• High performance key lookup and SQL for ad-hoc querying
• Data held once, thus no need to synchronize
SQL Memcache Protocol
MySQL 5.6
id | firstname | lastname
--------------------------
1 | Ulf | Wendel
2 | Nils | Lagner
InnoDB
23. The speaker says...
Let's make MySQL a happy dieter, or cut off some fat at
least! Then, support a proven, lightweight protocol with many
existing language bindings for super fast access.
Let's give MySQL users direct access to the inner workings
of MySQL. Inside MySQL we find a stable and CPU-efficient
B-tree based storage called InnoDB. If needed, InnoDB
automatically builds a hash index behind the scenes – and
always has. In-memory performance is great because it has to be.
Estimated 20% use 64 – 256 GB RAM with MySQL.
Persistence is a given on top of it. However, SQL parsing
and interpretation is slow.
24. Zoom! InnoDB Memcache
SQL Memcache Protocol
MySQL 5.6
Core Memcached Plugin
Storage Handler API InnoDB API
Standard
MyISAM id | firstname | lastname Memcache
In-Memory
--------------------------
Memory 1 | Ulf | Wendel
Storage
2 | Frank | Sons
... InnoDB
25. The speaker says...
MySQL has always separated the SQL layer from the
storage layer through the so-called internal Handler API;
however, the feature was not exposed to the user for
many years. Popular storage engines include InnoDB,
Memory, MyISAM and NDB (MySQL Cluster). Since MySQL
5.1 the storage engines can be implemented as a server
plugin. The Memcache plugin is another kind of server
plugin. It integrates memcached with MySQL.
The plugin can store data both in InnoDB and the default
memcached way. As a result users can now access InnoDB
tables using either SQL or Memcache protocol.
26. A Key Value Store on steroids?
Read and write scale-out
• Within cluster: synchronous update anywhere (multi-master)
• Focus in-memory with full on-disk persistence
• Autosharding, Dual NoSQL and SQL interfaces
• 2002: 1 million reads per minute
• 2012: 1.2 billion write transactions per minute on 30 nodes
• 2012: 1 BN reads/m on 8 nodes, 4.3 BN reads/m on 30 nodes
99.999% Availability
• Shared-nothing, all data stored 1..4x times, survives failures
• WAN replication: asynchronous with conflict detection
27. The speaker says...
MySQL Cluster has reached 1.17 billion write transactions
per minute on a 30-node setup in mid-2012. That is some
19.5 million writes per second. You can have up to 48 nodes
in a cluster. Clusters can be replicated over wide area
networks, for example, to run them in different data centers
in different locations. Just in case you worry about network
latency...
You can access MySQL Cluster through a variety of
interfaces. Among them are MySQL Server SQL nodes
(ODBC, JDBC, .NET, ...), ClusterJ (JNI), LDAP, HTTP/REST
(Apache mod-ndb) and Memcached. All of them internally
use the NDB C++ API.
28. Zoom! MySQL Cluster Memcache
SQL Memcache Protocol
MySQL Server / Cluster 7.2 Memcached
Storage Handler API ndb_eng
InnoDB NDB API
MyISAM id | firstname | lastname
--------------------------
Memory 1 | Ulf | Wendel
2 | Nils | Lagner
... MySQL Cluster (NDB) data node
29. The speaker says...
Memcached support is one of the latest additions to the list
of MySQL Cluster interfaces. Since MySQL Cluster 7.2 it is
possible to use MySQL Cluster as a storage engine for a
Memcache server. This is quite similar to using MySQL
Cluster/NDB as a storage engine for MySQL. In both cases
the "frontends" wrap the main MySQL Cluster API, which is
called the NDB API. Both frontends inherit all MySQL Cluster
goodies.
You can choose whether to run the Memcached, the MySQL
Cluster data nodes and the application on one machine (low
latency) or on different ones (fail safety). Note that with the
InnoDB Memcache plugin you have Memcached and
MySQL running in the same process.
30. Zoom! MySQL as a KVS
Try the NoSQL APIs! High Performance
1234 - I am super FAST
SQL for ad-hoc querying
4567 - LIGHTNING fast Limited Search and Types
7890 - [1][2][3][4][5]
abcd - [1[a,b,c]],[2[d,e,f]]
_de_- 1,281,2828,2,173,8
Threaded/Multi-Core,
Key Value Store Scalable
Replication
In-memory,
Limited Persistence
on-disk with fast recovery
31. The speaker says...
The InnoDB Memcache Plugin is certainly a step forward.
MySQL is putting pressure on itself to modularize the server
allowing users to slim MySQL, to strip off features not
needed to get a certain job done.
Users get more choices. If you want to combine a fast and
lean client protocol with simple and fast access operations
but cannot accept compromises on persistence or
scalability, here you go.
Cluster has been a speed monster ever since...
32. Transparent fast key access
$mysqli = new mysqli("localhost", "usr", "pass", "test");
$memcache = new memcached();
$memcache->addServer("localhost", 11211);
mysqlnd_memcache_set($mysqli, $memcache);
$res1 = $mysqli->query("SELECT firstname FROM test WHERE id = 1");
$res2 = $mysqli->query("SELECT * FROM test");
mysqli PDO_MySQL mysql
MySQL native driver for PHP (mysqlnd)
Plugin: PECL/mysqlnd_memcache
SQL access Memcache access
33. The speaker says...
PECL/mysqlnd_memcache is another free and open source
plugin for the PHP mysqlnd library. Mysqlnd is the compile
time default C library used for all PHP MySQL APIs (mysqli,
PDO_MySQL and mysql).
Like other plugins it adds new features to all the APIs. Based
on a configurable regular expression the plugin turns a SQL
access into a Memcache access. Due to the lightweight
protocol and direct access the Memcache access to MySQL
is faster. No matter which protocol is used by the library, the
user gets a standard result set in return. Simple to use.
However, note that no metadata is available if a key access
has been performed.
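The routing idea behind the plugin can be sketched in Python; note that the pattern and table name below are illustrative assumptions, not the plugin's actual default expression.

```python
import re

# A configurable regular expression decides whether a statement is a
# simple key lookup that can bypass SQL parsing and travel over the
# lightweight Memcache protocol instead of the MySQL protocol.
PATTERN = re.compile(
    r"^SELECT\s+(?P<cols>.+?)\s+FROM\s+test\s+WHERE\s+id\s*=\s*(?P<key>\d+)$",
    re.IGNORECASE,
)

def route(sql):
    m = PATTERN.match(sql.strip())
    if m:
        return ("memcache", m.group("key"))   # direct key access
    return ("sql", None)                      # full SQL path

print(route("SELECT firstname FROM test WHERE id = 1"))  # ('memcache', '1')
print(route("SELECT * FROM test"))                       # ('sql', None)
```

Everything that does not match the expression, such as the unfiltered `SELECT *`, still goes through the regular SQL path.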
34. Ulf's take...
Not a bad attempt at all... Go try! Go ask for more!
A significant number of MySQL users are using Memcached
• Deploy only one data store instead of two
• Dual interface: can we skip a caching layer in our apps?
A good first step, but looking for more
• Persistence for Memcache - more of a topic for Redis?
• No issues with warm-up or stampeding/slamming
• KVS is about performance, where is the proof @ 5.6...?
• Data model is about distribution/sharding, MySQL Cluster
only?
35. The speaker says...
Inside MySQL is some fine and stable technology... Once
internals are exposed in a user friendly manner MySQL will
be kicking.
36. Zoom! Document Databases
JavaScript/[J|B]SON,
Map&Reduce
{"documents":"rock"} Highly Available
{"mysql":"not"}
{"documents":"rock"}
{"mysql":"not"}
{"documents":"rock"}
Scalable
Document Database
Easy To Use
37. The speaker says...
Document Databases use a data model that seems as
appealing as that of a Key Value Store. Think of a Key Value
Store that holds nothing but arbitrary documents.
Documents are schema-free, thus you can code without
thinking first... Nested documents are great for storing
objects of a programming language.
Take CouchDB/MongoDB. All the ingredients of a modern
web database are there! MongoDB: Sharding, automatic HA
failover. CouchDB: Lazy primary copy, conflict detection,
consistent hashing for sharding/clustering, ACID
transactions.
38. Zoom! MongoDB and CouchDB
JavaScript/[J|B]SON, Map&Reduce
• No portable and powerful query language, vendor lock-in,
Map&Reduce: you write the search routine. MongoDB: no built-in
validation of data, both: update anomalies, weak on relations
Highly Available
• Lazy primary copy with built-in fail-over (MongoDB) resp.
conflict detection (CouchDB)
Scalable
• Sharding, CouchDB: compaction, single file on disk.
MongoDB: M/R multi-threaded v8 JavaScript, really ?!,
write locks, index in memory only
39. The speaker says...
Search deserves a dedicated slide, more below. On paper
the high availability approach of both MongoDB and
CouchDB looks good. Like with Key Value Stores their data
model is great for sharding. This is a write-scale out
approach they share with MySQL Cluster. Scalability –
young systems, again. Nobody wants to waste disk space,
manually compact files, be limited to a single file on disk, or
rely on operating system cache managers to suit database
needs. Over the years MySQL got disk-efficient to make
reads fast and learned partitioning to fine tune data
distribution on disk arrays. Mongo M/R jobs on multi-core are
single-threaded: https://jira.mongodb.org/browse/SERVER-4258 ?
40. Zoom! Rich query language?
Projection (filter columns): SELECT [DISTINCT] attribute | arithmetic expression | aggregation function
DISTINCT
arithmetic expression
aggregation function
COUNT(*)
SUM(column) (user to provide function)
MAX(column) (user to provide function)
MIN(column) (user to provide function)
AVG(column) (user to provide function)
Selection (filter rows): WHERE … (see above)
constant: attribute = | != | < | <= | > | >= constant
attribute: attribute = | != | < | <= | > | >= attribute (see above)
logical operators: ( attribute … constant | attribute) and|or|not (…)
uncertainty: attribute LIKE constant (must be emulated with regular expressions)
NULL: attribute IS NULL
ALL, ANY, IN, EXISTS (no subquery)
Join (join tables)
41. The speaker says...
The standardized query language (SQL) is a rich query
language, the MongoDB query language is not. Please see
http://blog.ulf-wendel.de/2012/searching-data-in-notonlymysql-databases-a-rich-query-language/
for details.
Imagine you ever want to swap the database system. Using
MongoDB syntax means 100% vendor lock-in and significant
effort porting the application. As a developer, once you have
learned SQL you are qualified for any job using any RDBMS.
Is the same true for NoSQL...? Map&Reduce? Great batch
processing for distributed systems! But, you write it. You
define the data access path. SQL: you say what, the DB takes
care of finding the best possible physical access path.
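The difference above can be made concrete with a tiny Map&Reduce sketch in Python (the rows are made up): what SQL expresses declaratively as `SELECT name, SUM(amount) ... GROUP BY name`, you have to code by hand, including the access path.

```python
from collections import defaultdict

rows = [{"name": "Ulf", "amount": 2},
        {"name": "Nils", "amount": 3},
        {"name": "Ulf", "amount": 5}]

def map_phase(rows):
    for row in rows:                      # you choose how data is scanned
        yield row["name"], row["amount"]  # emit (key, value) pairs

def reduce_phase(pairs):
    totals = defaultdict(int)
    for key, value in pairs:              # you write the aggregation
        totals[key] += value
    return dict(totals)

print(reduce_phase(map_phase(rows)))  # {'Ulf': 7, 'Nils': 3}
```

There is no optimizer in sight: if an index would help, it is on you to use it; the database will not pick a better plan for you.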
42. Zoom! Unlimited, schema free!
Support, why is my forum email not updated ?!?!
> db.customer.save({name: "Ulf", email: "ulf.wendel@phpdoc.de"});
> db.forum_user.save({name: "Ulf", email: "ulf@ulf-wendel.de"});
Damn, John takes too much disk space...
> db.movie.save({title: "Good", actors: ["John son of...", "Joe"]})
> db.movie.save({title: "Good, really", actors: ["John..."]})
Folks, is the old code with the typo still around?
> db.forum_user.save({name: "Ulf", e-mail: "ulf@ulfwendel.de"});
Sales barking: we need to add validation to all clients!
> db.prospects.save({name: "Ulf", age: "NOT_FOR@Y.OU"});
43. The speaker says...
The relational data model may be an academic one. This
may be annoying at times. But, do not forget about its basics
such as relations (qualified: 1:n, n:m) and the normal forms
(data duplication, update anomalies, …). Also recall the
goodies SQL has to offer: blueprints (schemas), types, validation of
data, validation of relations. As a PHP MySQL guy I vividly
remember STRICT_MODE and see PHP users asking for
types to prevent errors...
Data duplication has a serious side effect: disk space
requirements. You need more disks ($) and you get slower
data access because more data has to be read.
What if MySQL schema changes would be cheap...?
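The update anomaly from slide 42 takes only a few lines of Python to reproduce (the addresses are made up): the email lives in two denormalized documents, so one update is not enough.

```python
# The same email is duplicated in two documents - no relation, no
# foreign key, nothing forces the copies to agree.
customer   = {"name": "Ulf", "email": "old@example.com"}
forum_user = {"name": "Ulf", "email": "old@example.com"}  # duplicated copy

customer["email"] = "new@example.com"   # support updates one place...

# ...and the forum copy is now stale: the classic update anomaly.
print(forum_user["email"])  # old@example.com
```

In a normalized schema the email would be stored once and referenced, so the single `UPDATE` could not leave a stale copy behind.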
44. MySQL 5.6: Online InnoDB DDL
Classic ALTER TABLE
• Create temporary table, copy rows one-by-one
• Update indexes during copy, drop old, rename new
• Clients are blocked, disk space is wasted
First step in 5.1/5.5: Fast Index Creation
• CREATE and DROP INDEX without copy
Second step in 5.6: Online DDL
• Majority does not block SELECT, INSERT, UPDATE, DELETE
• Some changes in-place (without copy), faster copy otherwise
45. The speaker says...
Online DDL speaks for itself. The improved ALTER TABLE
gives you higher concurrency, lower disk and CPU usage
and less purging of the buffer pool – all of which caused
performance dips in the past. Please note that the overall
runtime of ALTER TABLE can increase: recording concurrent
DML changes and applying them at the end of an online
DDL may reduce raw performance. However, concurrency is
better.
As a side-effect loading huge dumps has become faster.
You can now create the table, load the data and add
secondary indexes later. Clients can start to access the table
while the secondary index is still being created.
46. Online DDL operations
Please, check the manual for details! Quick summary:
Concurrent DML and In-Place
• CREATE|DROP|ADD index
• Column changes: default, auto_increment, rename (name)
• Foreign keys: add, drop
Concurrent DML but not In-Place
• Column changes: add, drop, NULL/NOT NULL
• Change ROW_FORMAT, KEY_BLOCK_SIZE
47. The speaker says...
A new LOCK clause of the ALTER TABLE statement allows
you to control how much concurrency a DDL operation permits.
LOCK=NONE requests that concurrent reads and writes remain
possible during the ALTER TABLE; the statement fails if the
operation cannot support this. LOCK=SHARED permits
concurrent reads but blocks writes.
In sum: MySQL is not schema-free but has become more
friendly towards schema changes if need be.
Next topics: High Availability and Replication
48. MySQL 5.6: GTID, easy failover
Global Transaction Identifier
• Easy to find most up-to-date slave for failover
• Emulation: PECL/mysqlnd_ms 1.2+, MySQL Proxy
MySQL Master
Log 7, Pos 34, GTID M:1: UPDATE x=1
Log 7, Pos 35, GTID M:2: UPDATE x=9
MySQL Slave 1 MySQL Slave 2
… , GTID M:1: UPDATE x=1 … , GTID M:1: UPDATE x=1
… , GTID M:2: UPDATE x=9
49. The speaker says...
A global transaction identifier is a cluster-wide unique
transaction identifier. MySQL 5.6 can generate them
automatically and for older versions of MySQL you can use
a client-side emulation provided by, for example, MySQL
Proxy and PECL/mysqlnd_ms.
A GTID helps with failover in case of a master/primary
outage. The most up-to-date slave/secondary should
become the new master. Which one is the most current slave
can be checked by comparing GTIDs. Unfortunately, this is no
automatic failover. For automatic failover use
DRBD/Pacemaker or other 3rd party tools. Don't miss the
command-line mysqlfailover utility of MySQL Workbench!
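Candidate selection by GTID can be sketched in Python; the GTID strings follow the "M:n" notation from the slide, and the server names are made up.

```python
# With cluster-wide unique, monotonically numbered transaction IDs
# ("M:1", "M:2", ...), the most up-to-date slave is simply the one
# that has applied the highest transaction number.
def last_gtid_no(executed):
    """Highest transaction number a server has applied for source 'M'."""
    return max(int(gtid.split(":")[1]) for gtid in executed)

slave1 = ["M:1"]          # applied only the first transaction
slave2 = ["M:1", "M:2"]   # fully caught up

candidates = {"slave1": slave1, "slave2": slave2}
new_master = max(candidates, key=lambda name: last_gtid_no(candidates[name]))
print(new_master)  # slave2
```

Real GTID sets can name multiple sources and contain ranges with gaps, so the production comparison is set containment rather than a single maximum, but the principle is the same.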
50. Group commit and parallel slave
--slave-parallel-workers
• Before: multiple writers on the master, one applier on the slave
• Now: parallel workers for different databases, attn: reordering
MySQL Master
T1: UPDATE db1.table.x=1
T2: UPDATE db2.table.x=9
SQL thread (coordinator) MySQL Slave 1
Worker thread Worker thread T2: UPDATE db2.table.x=9
T1: UPDATE db1.table.x=1
51. The speaker says...
MySQL 5.6 makes replication faster. Binary log writes can
be grouped together to significantly improve the
performance of the replication master. Fewer writes, fewer
waits for the disks – in tests we saw improvements between
2x – 4x.
Slaves may catch up with the master faster in 5.6 as well.
Transactions from different databases can be applied in
parallel. The slave SQL thread acts as a coordinator for up
to 1024 workers. Note that transactions which do not
overlap may be recorded in a different order in the slave's
binary log than in the master's log.
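The coordinator/worker split can be sketched in Python (the binlog entries are made up): transactions are routed to workers by database, so order is preserved per database, while the global commit order across databases may differ from the master's.

```python
from collections import defaultdict

# Simplified binlog: (database, transaction) pairs in master commit order.
binlog = [("db1", "T1: UPDATE db1.table SET x=1"),
          ("db2", "T2: UPDATE db2.table SET x=9"),
          ("db1", "T3: UPDATE db1.table SET x=2")]

queues = defaultdict(list)
for db, trx in binlog:          # coordinator: route by database
    queues[db].append(trx)      # each worker applies its queue in order

print(queues["db1"])  # per-database order intact: T1 before T3
print(queues["db2"])
```

Within `db1` the order T1 before T3 is guaranteed, but whether T2 commits before or after T1 depends on worker timing, which is exactly the reordering caveat above.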
52. BTW, PECL/mysqlnd_ms ...
Replication and load balancing plugin for mysqlnd
• Supports all PHP APIs
• Supports all kinds of MySQL clusters
• Read-write splitting: automatic, SQL hints, user-controlled
• Load Balancing: Round-robin, random [once (sticky)], user, ...
• Transaction aware
• Lazy connections
• Quality-of-service: eventual/session/strong consistency
• Quality-of-service: max age
• Cache integration
• ...
53. The speaker says...
By the way, we have released yet another free and open
source PHP mysqlnd plugin to make using clusters easier.
Of course, I would love to see it used by everybody,
because I believe in my own work. However, I do not expect
this to happen as cluster support is something most of you
have solved, one or the other way, years ago. If so, you still
may want to read the documentation. You may get
inspiration for improving your current solution.
For example, how about the idea of quality-of-service or
transparent cache support, if eventual consistency is good
enough?
54. Zoom! JavaScript/[J|B]SON
Proof of Concept
MySQL speaks HTTP and replies JSON.
JavaScript (v8) runs inside MySQL.
Map&Reduce jobs use the internal
low-level high performance interfaces.
… it could be done
http://de.slideshare.net/nixnutz/http-json-javascript-mapreduce-builtin-to-mysql
55. The speaker says...
Like one can add a Memcached protocol interface to
MySQL, one can add an HTTP interface. Returning
JSON/BSON and running Map&Reduce jobs inside MySQL
is not out of reach...
56. Zoom! MySQL vs Docs DB
JavaScript/[J|B]SON,
Hmm...
Map&Reduce
Replication: Ok, add 3rd party
Highly Available
Cluster: beat it!
Replication: Write limit
Scalable
Cluster: beat it!
Database and tooling: good
Easy To Use
Need for ORM: hmm...
57. Ulf's take...
Young contenders, beat MySQL on a single machine...
Accept new data models
• Don't get emotional on the ER model: people love nested data!
Push on (elastic) scalability and high availability
• Where is built-in automatic failover not just command line?
• Where is built-in write scaleout with MySQL Replication?
Open up for innovation
• Developing server plugins must become easier
• Adding HTTP and other interfaces must become easier
58. The speaker says...
The document data model must look as if it has been made
for certain applications – so easy to dump arbitrary data that
rarely changes! This comes at a price (normalization,
storage requirements, validation, ...).
Limiting the discussion to a single machine (transactions,
persistence, CPU scalability, memory efficiency, disk layout,
query language, ...) MySQL is competitive. It takes many
years of development fine tuning basic algorithms.
But, NoSQL is also about massively distributed systems.
59. What can MySQL users do?
Drive MySQL 5.6 to its limit before you worry...
Ask yourself what you give up in case of a switch
• NoSQL is not SQL vs. something – it is far more!
Use MySQL and the ideas of the new generation
• Fast and lightweight client protocols
• Create independent data units for scaling over many machines
• Prebuild aggregates
• Batch processing in addition to ad-hoc querying
60. The speaker says...
No big surprises here: ideas from the '80s, or was it the
'70s? Who cares... Some things in NoSQL are new, others
are old favourites.
Try to break things down to the basic concepts. That is hard
in a world of temptations, with each vendor focussing on
promoting its strengths. However, it really helps to take a
step back and search for the basic concepts.
Maybe, you have a senior in your company that can help
you...