3. MySQL replication is single threaded
[diagram: a jumbled stream of transactions flows from the MySQL/DBMS
master through the BINARY LOG and is replayed on the slave one
transaction at a time]
Tuesday, February 7, 12 3
4. Multiple sources?
[diagram: single source (one MySQL/DBMS master replicating to one slave)
vs multi source, fan-in (several MySQL/DBMS masters replicating into a
single slave)]
Tuesday, February 7, 12 4
5. Multiple masters?
[diagram: from this (a single MySQL/DBMS master pair) to this (several
MySQL/DBMS masters replicating to one another)]
Tuesday, February 7, 12 5
6. Avoiding conflicts?
[diagram: two MySQL/DBMS masters replicating to each other while both
concurrently apply INSERT RECORD A and MODIFY RECORD B]
Tuesday, February 7, 12 6
7. Seamless failover?
[diagram: MySQL/DBMS masters, each with several slaves; when a master
fails, one of its slaves must take over as the new master]
Tuesday, February 7, 12 7
8. Replicating to something else?
[diagram: a mysql master replicating to mysql, postgresql, oracle, and
mongodb]
Tuesday, February 7, 12 8
9. All these examples tell us:
Nice dream, but MySQL
can’t do it
Tuesday, February 7, 12 9
12. What can it do?
• Easy failover
• Multiple masters
• Multiple sources to a single slave
• Conflict prevention
• Parallel replication
• Replicate to Oracle and PostgreSQL databases
Tuesday, February 7, 12 12
14. Main components
• Transaction History Logs (THL)
• roughly corresponding to MySQL relay logs
• have a lot of metadata
• Service database
• contains metadata for latest transactions
• Metadata is committed together with data
Makes slaves crash-proof
Tuesday, February 7, 12 14
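A minimal sketch of why committing metadata together with data makes the slave crash-proof: the applier wraps the replicated change and its position update in one transaction, so after a crash the service database always agrees with the applied data. The table and schema names below (app.orders, tungsten_dragon.trep_commit_seqno) are illustrative assumptions, not taken from the deck.

```shell
# Illustrative only: print the shape of the single transaction a Tungsten
# applier could issue -- data change and position metadata commit atomically.
# Table names (app.orders, tungsten_dragon.trep_commit_seqno) are assumed.
cat <<'SQL'
BEGIN;
  -- the replicated data change
  INSERT INTO app.orders (id, amount) VALUES (42, 99.90);
  -- the replication position, updated in the SAME transaction
  UPDATE tungsten_dragon.trep_commit_seqno SET seqno = 1234;
COMMIT;
SQL
```

If the slave dies between the two statements, the whole transaction rolls back, so on restart the recorded position still matches the data that was actually applied.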
16. Parallel replication facts
✓Sharded by database
✓Good choice for slave lag problems
❖Bad choice for single database projects
Tuesday, February 7, 12 16
17. Parallel Replication test
• MySQL native slave: STOPPED (binary logs accumulating)
• Tungsten slave direct (service alpha, replicator alpha): OFFLINE
• Concurrent sysbench on 30 databases, running for 1 hour
• TOTAL DATA: 130 GB
• RAM per server: 20GB
• Slaves will have 1 hour lag
Tuesday, February 7, 12 17
18. Measuring results
• MySQL native slave: START (applying binary logs)
• Tungsten slave direct (service alpha, replicator alpha): ONLINE
• Recording catch-up time
Tuesday, February 7, 12 18
19. MySQL native
replication
slave catch up in 04:29:30
Tuesday, February 7, 12 19
20. Tungsten parallel
replication
slave catch up in 00:55:40
Tuesday, February 7, 12 20
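A quick sanity check on the two results above (04:29:30 for native replication vs 00:55:40 for Tungsten parallel replication), sketched in shell:

```shell
# Convert both catch-up times to seconds and compute the speedup.
native=$(( 4*3600 + 29*60 + 30 ))   # 04:29:30 -> 16170 s
parallel=$(( 55*60 + 40 ))          # 00:55:40 ->  3340 s
awk -v n="$native" -v p="$parallel" \
    'BEGIN { printf "speedup: %.1fx\n", n/p }'   # prints "speedup: 4.8x"
```

So in this particular test, parallel apply over 30 databases cut catch-up time by roughly a factor of five.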
29. Parallel replication direct slave facts
✓No need to install Tungsten on the master
✓Tungsten runs only on the slave
✓Replication can revert to native slave with two commands
  (trepctl offline; start slave)
✓Native replication can continue on other slaves
❖Failover (either native or Tungsten) becomes a manual task
Tuesday, February 7, 12 24
48. Conflict prevention facts
• Sharded by database
• Defined dynamically
• Applied either at the master or at the slave
• Methods:
  • make replication fail
  • drop silently
  • drop with warning
Tuesday, February 7, 12 36
70. Installation
• Check the requirements
• Get the binaries
• Expand the tarball
• Run ./tools/tungsten-installer
Tuesday, February 7, 12 45
71. REQUIREMENTS
• Java JRE or JDK (Sun/Oracle or OpenJDK)
• Ruby 1.8 (only during installation)
• SSH access as the same user on all nodes
• MySQL user with all privileges
Tuesday, February 7, 12 46
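The checklist above can be sketched as a rough pre-flight script to run on each node before installing. The binary names checked here (java, ruby, ssh, mysql) are assumptions about how one would verify the requirements, not commands from the deck:

```shell
# Check that each required binary is on the PATH; report, don't abort.
for cmd in java ruby ssh mysql; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "OK:      $cmd"
  else
    echo "MISSING: $cmd"
  fi
done
# Note: Ruby is only needed while tungsten-installer runs, so a missing
# ruby matters on the installing host, not on every node afterwards.
```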
72. Installation types
• master / slave
• slave - direct
Tuesday, February 7, 12 47
76. Installation (1)
# run from node 4, but any node would do
for N in 1 2 3 4
do
  ssh r$N mkdir tinstall   # create the install directory on every node
done
cd tinstall
tar -xzf /path/to/tungsten-replicator-2.0.4.tar.gz
cd tungsten-replicator-2.0.4
Tuesday, February 7, 12 51
78. Installation (2)
export TUNGSTEN_BASE=$HOME/tinstall
# --master-slave        : installation mode
# --master-host         : who's the master
# --datasource-user     : mysql username
# --datasource-password : mysql password
# --service-name        : name of the service
# --home-directory      : where we install
# --cluster-hosts       : hosts in cluster
# --start               : start replicator after installing
./tools/tungsten-installer \
  --master-slave \
  --master-host=r1 \
  --datasource-user=tungsten \
  --datasource-password=secret \
  --service-name=dragon \
  --home-directory=$TUNGSTEN_BASE \
  --cluster-hosts=r1,r2,r3,r4 \
  --start
Tuesday, February 7, 12 53
79. What does the installation do
1: Validate all servers (host1, host2, host3, host4)
[diagram: a checklist per host; one check fails on each host]
Report all errors
Tuesday, February 7, 12 54
80. What does the installation do
1 (again): Validate all servers (host1, host2, host3, host4)
[diagram: every check passes on every host]
Tuesday, February 7, 12 55
81. What does the installation do
2: Install Tungsten in all servers (host1, host2, host3, host4)
$HOME/tinstall/
  config/
  releases/
  relay/
  logs/
  tungsten/
Tuesday, February 7, 12 56
82. example
ssh r2 chmod 444 $HOME/tinstall   # make the directory unwritable
./tools/tungsten-installer \
  --master-slave --master-host=r1 \
  --datasource-user=tungsten \
  --datasource-password=secret \
  --service-name=dragon \
  --home-directory=$HOME/tinstall \
  --thl-directory=$HOME/tinstall/logs \
  --relay-directory=$HOME/tinstall/relay \
  --cluster-hosts=r1,r2,r3,r4 --start

ERROR >> qa.r2.continuent.com >> /home/tungsten/tinstall is not writeable
Tuesday, February 7, 12 57
83. example
ssh r2 chmod 755 $HOME/tinstall   # restore permissions
./tools/tungsten-installer \
  --master-slave --master-host=r1 \
  --datasource-user=tungsten \
  --datasource-password=secret \
  --service-name=dragon \
  --home-directory=$HOME/tinstall \
  --thl-directory=$HOME/tinstall/logs \
  --relay-directory=$HOME/tinstall/relay \
  --cluster-hosts=r1,r2,r3,r4 --start
# no errors
Tuesday, February 7, 12 58
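After a clean run, the directory layout shown on the "install Tungsten in all servers" slide can be sanity-checked per node. The path assumes --home-directory=$HOME/tinstall as in the examples; this check script itself is a sketch, not part of the installer:

```shell
# Verify the directories tungsten-installer is expected to create.
for d in config releases relay logs tungsten; do
  if [ -d "$HOME/tinstall/$d" ]; then
    echo "found:   $d"
  else
    echo "missing: $d"
  fi
done
```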