MaxScale
Architecture Evolution
Johan Wikman
Lead Developer
Overview
● What is MaxScale
● Architecture
● Performance
● Summary
What is MaxScale
What is MaxScale
● Cluster Abstraction
○ Hides the complexity.
○ Load Balancer
○ High Availability
○ Easier Maintenance
● And more
○ Firewall
○ Data masking
○ Logging
○ Cache
○ ...
[Diagram: Client → MaxScale → Master, Slave, Slave]
Read Write Splitting
● Analyze statements
○ Send where appropriate
[Diagram: Client → MaxScale → Master, Slave, Slave]
Read Write Splitting
● Analyze statements
○ Send where appropriate
● Write statements to master
[Diagram: Client sends "> INSERT INTO ..."; MaxScale routes it to the Master]
Read Write Splitting
● Analyze statements
○ Send where appropriate
● Write statements to master
● Read statements to some slave
[Diagram: Client sends "> SELECT * ..."; MaxScale routes it to a Slave]
Read Write Splitting
● Analyze statements
○ Send where appropriate
● Write statements to master
● Read statements to some slave
● Session statements to all servers
[Diagram: Client sends "> SET autocommit"; MaxScale routes it to all servers]
Architecture
Static Architecture
● Protocol: MariaDBClient, ...
● Authenticator: MySQLAuth, ...
● Filter: DBFwfilter, ...
● Router: ReadWriteSplit, ...
● Query Classifier: qc_sqlite, ...
● Monitor: MariaDBMon, ...
Core
● Threading
● Logging
● Plugin loading
● Lifetime management
● REST-API
● Admin Functionality
● etc.
APIs
[Diagram: MaxScale data flow — Client ↔ Protocol → Filter → Filter → Router → Protocol ↔ Servers. The Monitor monitors the servers and updates the server state; the Router uses the server state and the Query Classifier.]
Code
MaxScale: 147 kloc
Core: 51 kloc
Authenticators: 5 kloc
Filters: 27 kloc
Routers: 43 kloc
Monitors: 12 kloc
Protocols: 9 kloc
Modules: 96 kloc
For comparison:
● MariaDB server: 2500 kloc
Threading Architecture
● MaxScale is essentially a router.
○ It receives SQL packets from numerous clients and dispatches them to one or more servers.
○ It waits for responses from one or more servers and sends a response to the client.
○ Number of clients may be large.
● Basic alternatives:
○ One thread per client.
○ Asynchronous I/O and fixed number of threads.
● The reason is lost in the mists of history, but MaxScale is implemented using the
latter approach.
Asynchronous I/O in Principle.
● Basically:
● When there is no activity, the thread is idle.
● When something happens, the thread wakes up and handles the events.
○ May involve initiating asynchronous I/O whose result is later reported as an event.
● Once the event has been handled, the thread returns to waiting for events.
setup();
while (true)
{
io_events events = wait_for_io();
handle_events(events);
}
● Create some file descriptors
● Make them non-blocking
● Add them to some waiting mechanism.
● Wait for something to happen to those
file descriptors
● Handle whatever happened
So How do You Wait on Events?
● select
○ The original mechanism, been around since the beginning of time. Fixed size limit on the
number of descriptors. O(N)
● poll
○ No limit on number of descriptors. O(N).
● epoll
○ More complex to set up. No limit on number of descriptors. All changes via system calls, i.e.
thread safe. O(1).
Epoll is not a better poll, it’s different.
MaxScale epoll Setup
● At startup, socket creation is triggered by the presence of listeners.
[TheListener]
type=listener
service=TheService
...
port=4009
epoll_fd = epoll_create(...);
...
so = socket(...);
...
listen(so);
...
struct epoll_event ev;
ev.events = events;
ev.data.ptr = data;
epoll_ctl(epoll_fd, EPOLL_CTL_ADD, so, &ev);
Client Connection
[Diagram: Client connecting to MaxScale]
while (!shutdown)
{
struct epoll_event events[MAX_EVENTS];
int nfds = epoll_wait(epoll_fd, events, ...);
for (int i = 0; i < nfds; ++i)
{
epoll_event* event = &events[i];
handle_event(event);
}
}
Client Connection, cont’d
void handle_event(struct epoll_event* event)
{
if (event->events & EPOLLIN)
{
if (descriptor was a listening socket)
{
handle_accept(event);
}
else
{
handle_read(event);
}
}
if (event->events & ...)
{
...
}
}
Handle Accept
void handle_accept(struct epoll_event* event)
{
for (all servers in service)
{
int so;
connect each server;
struct epoll_event ev;
ev.events = events;
ev.data.ptr = data;
epoll_ctl(epoll_fd, EPOLL_CTL_ADD, so, &ev);
}
}
[TheService]
type=service
router=readwritesplit
servers=server1,server2
...
Handle Read
void handle_read(struct epoll_event* event)
{
char buffer[MAX_SIZE];
read(sd, buffer, sizeof(buffer));
figure out what to do with the data
// - wait for more
// - authenticate
// - send to master, send to slaves, send to all
// ...
...
}
● When the servers reply,
the response will be
handled in a similar
manner.
Binding Things Together
[Diagram: the client connection is represented by a DCB, owned by a Session. Across the plugin boundary, the Session holds an MXS_ROUTER_SESSION (e.g. RWSplitSession) with one DCB per server connection (1..*). A DCB is the representation of a connection/descriptor.]
● A Session object ties together the client connection and all server
connections associated with that client connection.
MaxScale 1.0 - 2.0
epoll_fd = epoll_create(...);
Thread 1
while (!shutdown)
{
epoll_wait(epoll_fd, ...);
...
}
Thread 2
while (!shutdown)
{
epoll_wait(epoll_fd, ...);
...
}
Thread 3
while (!shutdown)
{
epoll_wait(epoll_fd, ...);
...
}
Problematic with one epoll Instance
● There are multiple socket descriptors for each client session.
○ One for the client connection.
○ One for every backend server.
● It is possible that an event on each of those is concurrently handled by as
many threads.
○ The client has issued a request that has been sent to all servers.
○ A response arrives from each server at the same time the client closes its connection.
● Session data ends up being manipulated by many threads concurrently.
Implications
● Lots of locks and locking were needed.
○ Primarily spinlocks, intended to be held for brief periods of time.
● Events for a socket could be reported to one thread while another thread was still
handling earlier events for that same socket.
○ Event extraction and event handling had to be decoupled => locking.
● Very hard to be sure no deadlocks could occur.
● Very hard to be sure no races were possible.
● Very hard to program, as it was not always obvious what could and what
could not occur concurrently.
● The locks started to hurt under high load and lots of clients.
MaxScale 2.1
Thread 1
epoll_fd = epoll_create(...);
while (!shutdown)
{
epoll_wait(epoll_fd, ...);
...
}
Thread 2
epoll_fd = epoll_create(...);
while (!shutdown)
{
epoll_wait(epoll_fd, ...);
...
}
Thread 3
epoll_fd = epoll_create(...);
while (!shutdown)
{
epoll_wait(epoll_fd, ...);
...
}
MaxScale 2.1
● Each thread has an epoll instance of its own.
● When a client connects:
○ The thread that handles the client will also handle all communication with all backends on
behalf of that client.
○ All descriptors belonging to a particular client session are only added to the epoll instance of
the thread in question.
● Listening sockets are still an exception; added to the poll set of all threads.
○ After accepting, the client socket is then moved in a round-robin fashion to some thread.
● Huge impact on the performance.
MaxScale 2.2
● Remove the last traces of inter-thread communication.
● Basic problem: How to distribute new connections among existing threads?
● New connections should be distributed across different threads in a roughly
even manner.
● All ports must be treated in the same way.
○ So a particular port cannot e.g. be permanently assigned to a specific thread.
epoll
● Two ways events can be triggered:
○ Edge-triggered, reported when something has happened.
○ Level-triggered, reported when something is available.
[Diagram: edge-triggered events are reported on the inactive → active transition; level-triggered events are reported for as long as the state is active]
Implication of edge/level triggered epoll
1. The file descriptor that represents the read side of a pipe
(rfd) is registered on the epoll instance.
2. A pipe writer writes 2 kB of data on the write side of the pipe.
3. A call to epoll_wait(2) is done that will return rfd as a
ready file descriptor.
4. The pipe reader reads 1 kB of data from rfd.
5. A call to epoll_wait(2) is done.
● If rfd was added using EPOLLET (edge-triggered) then the call at step 5 will hang.
● EPOLLET requires
○ Non-blocking descriptors.
○ Events can be waited for (epoll_wait) only after read or write return EAGAIN.
Example straight from
$ man epoll
Two Kinds of Descriptors
● Listening sockets that all threads should handle.
● Sockets related to a client session that only a particular thread should handle.
● What’s the problem with the listening sockets being in the epoll instance of
each thread (as in MaxScale 2.1)?
○ Also the listening socket must be non-blocking and added using EPOLLET.
○ A thread that returns from epoll_wait must call accept on the listening socket until it returns
EWOULDBLOCK.
○ So, either we must accept that a thread suddenly may have to deal with a large number of
clients (if there is a sudden surge) or a thread must be able to offload an accepted client
socket to another thread.
What we Want
● Each thread does not need to accept more than one client at a time.
○ That is, EPOLLET cannot be used.
● We don’t have to manipulate the epoll instance of a thread from outside the
thread.
○ Listening sockets are a global resource while sockets related to a client session are thread
local resources.
○ Not having to do that also makes it easier to increase and decrease the number of threads
at runtime.
But epoll instances can also be waited for.
● If an epoll file descriptor has events waiting, then it will indicate that as
being readable.
● So,
○ if a file descriptor is added to an epoll instance, and
○ the descriptor of that epoll instance is added to a second epoll instance, then
○ when something happens to the file descriptor, a thread blocked in an epoll_wait call on the
second epoll instance will return.
● If the thread now calls epoll_wait on the first epoll instance, it will return
with the actual file descriptor on which some change has occurred.
MaxScale 2.2
Thread N
l_fd = epoll_create(...);
struct epoll_event ev;
ev.events = EPOLLIN; // NOT EPOLLET
epoll_ctl(l_fd, EPOLL_CTL_ADD, g_fd, &ev);
while (!shutdown)
{
epoll_wait(l_fd, ...);
...
}
g_fd = epoll_create(...);
void add_listening_socket(int sd)
{
struct epoll_event ev;
ev.events = EPOLLIN; // NOT EPOLLET
epoll_ctl(g_fd, EPOLL_CTL_ADD, sd, &ev);
}
void add_client_socket(int l_fd, int sd)
{
struct epoll_event ev;
ev.events = .. | EPOLLET;
epoll_ctl(l_fd, EPOLL_CTL_ADD, sd, &ev);
}
Client Connecting
typedef void (*handler_t)(epoll_event*);
while (!shutdown)
{
struct epoll_event events[MAX_EVENTS];
int nfds = epoll_wait(epoll_fd, events, ...);
for (int i = 0; i < nfds; ++i)
{
epoll_event* event = &events[i];
handler_t handler = get_handler(event);
handler(event);
}
}
void handle_epoll_event(epoll_event* event)
{
struct epoll_event events[1];
int fd = get_fd(event); // fd == g_fd
epoll_wait(fd, events, 1, 0); // 0 timeout.
epoll_event* ev = &events[0];
handler_t handler = get_handler(ev);
handler(ev);
}
void handle_accept_event(epoll_event* event)
{
int sd = get_fd(event);
int cd;
while ((cd = accept(sd)) != -1)
{
...
add_client_socket(cd, ...);
}
}
get_handler(event) and get_fd(event)?
typedef union epoll_data {
void *ptr;
int fd;
uint32_t u32;
uint64_t u64;
} epoll_data_t;
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);
struct epoll_event {
uint32_t events;
epoll_data_t data;
};
● When adding a descriptor to an epoll instance you can associate a value.
● When something occurs you get that value back.
○ If you do not store the fd, you do not know what fd the event relates to.
○ If you store the fd, you cannot store anything else.
Storing More Context With an Event
typedef uint32_t (*mxs_poll_handler_t)(struct mxs_poll_data* data, int wid, uint32_t events);
typedef struct mxs_poll_data {
mxs_poll_handler_t handler; /*< Handler for this particular kind of mxs_poll_data. */
} MXS_POLL_DATA;
typedef struct dcb {
MXS_POLL_DATA poll;
int fd;
...
} DCB;
static uint32_t dcb_poll_handler(MXS_POLL_DATA *data, ...) {
DCB *dcb = (DCB*)data;
...
}
DCB* create_dcb(...)
{
DCB* dcb = alloc_dcb(...);
dcb->poll.handler = dcb_poll_handler;
return dcb;
}
class Worker : private MXS_POLL_DATA {
public:
Worker() {
MXS_POLL_DATA::handler = &Worker::epoll_handler;
...
};
static uint32_t epoll_handler(MXS_POLL_DATA* data, ...) {
return ((Worker*)data)->handler(...);
}
int fd;
};
Adding and Extracting Events
void poll_add_fd(int fd, uint32_t events, MXS_POLL_DATA* pData)
{
struct epoll_event ev;
ev.events = events;
ev.data.ptr = pData;
epoll_ctl(m_epoll_fd, EPOLL_CTL_ADD, fd, &ev);
}
DCB* dcb = ...;
poll_add_fd(dcb->fd, ..., &dcb->poll);
Worker* pWorker = ...;
poll_add_fd(pWorker->fd, ..., pWorker);
while (!should_shutdown)
{
struct epoll_event events[MAX_EVENTS];
int n = epoll_wait(epoll_fd, events, MAX_EVENTS, -1);
for (int i = 0; i < n; ++i)
{
MXS_POLL_DATA* data = (MXS_POLL_DATA*)events[i].data.ptr;
data->handler(data, ..., events[i].events);
}
}
Each worker thread sits in this loop.
Performance
MaxScale 2.0.5 (up to)
Hardware: Two physical servers, 16 cores / 32 hyperthreads,
128 GB RAM and an SSD drive, connected over GbE LAN. One
runs MaxScale and sysbench; the other runs 4 MariaDB servers set up
as a Master and 3 Slaves.
Workload: OLTP read-only, 100 simple selects per iteration, no
transaction boundaries.
● direct: Sysbench uses all
servers directly in
round-robin fashion.
● rcr: MaxScale readconnroute
router.
● rws: MaxScale readwritesplit
router.
MaxScale 2.1.0
● The architectural change that
allowed the removal of a large
number of locks provided a
dramatic improvement for
readconnroute.
● No change for readwritesplit.
● With a small number of clients the
newly introduced cache improved
performance; with a large number it
had no impact.
Query Classification
● When ReadWriteSplitting, MaxScale must parse the statement.
○ Does it need to be sent to the master, to some slave or to all servers?
● The classification is done using a significantly modified parser from sqlite.
● In each thread, the parsing is done using a thread specific in-memory
database.
[Diagram: Thread 1 and Thread 2, each with its own sqlite instance]
● No shared data, should be no
contention.
● Sqlite was not built using the right
flags, so there was serialization going
on.
Data Collection
● While parsing a statement, a fair amount of information was collected.
○ What tables and columns are accessed. What functions are called. Etc.
● Allocating memory for that information did not come without a cost.
○ Basically only the firewall filter uses that information.
● Now no information is collected by default; a filter that is interested in
that information must request it explicitly.
qc_parse_result_t parse_result = qc_parse(stmt, QC_COLLECT_ALL);
Custom Parser
● Many routers and filters need to know whether a transaction is ongoing.
● Up until MaxScale 2.1.1 that implied that the statements had to be parsed
using the query classifier.
● For MaxScale 2.1.2 we introduced a custom parser that only detects
statements affecting the autocommit mode & transaction state.
○ Much faster than full parsing.
● In MaxScale 2.3 we will rely upon the server reporting the autocommit mode &
transaction state.
○ Implies that changes performed via prepared statements or functions will also be detected.
MaxScale 2.1.3 versus 2.0.5
Cache
● The cache was introduced in 2.1.0 but the
performance was less than satisfactory.
● Problem was caused by parsing.
○ The cache parsed all statements to detect non-cacheable statements.
○ E.g. SELECT CURRENT_DATE();
● Added possibility to declare that all SELECT statements are cacheable.
[TheCache]
type=filter
module=cache
...
selects=assume_cacheable
● Huge impact on the performance.
MaxScale ReadWriteSplit
● In the best case the performance of MaxScale 2.1.3 is three times that of
MaxScale 2.0.5, and eight times if caching is used.
The Importance of Early Customer Feedback
● MaxScale caches user information so that it can authenticate users.
● In MaxScale 2.2.0 the user database was shared between threads.
● Worked fine when connection attempts were relatively rare and sessions
were relatively long.
[Diagram: a single Users database shared by Thread 1 and Thread 2]
● If a user is not found, any thread may
refresh the user data from the server and
update the database.
● All access must use locks.
User Report
● With MaxScale 2.2.0 Beta a user reported that he got only 6000 qps.
function event(thread_id)
db_connect()
rs = db_query("select 1;")
db_disconnect()
● Reason turned out to be thread contention in relation to the user database.
[Diagram: Thread 1 and Thread 2, each with its own Users database]
● We split it, so that each thread has its own
user database.
“I just tested the same case, got 361437
queries/second, I think it works for us”
What about MaxScale 2.2.2
● No real difference, which is good, because 2.2 does more than 2.1.
○ E.g. must catch “SET SESSION SQL_MODE=ORACLE”
Summary
Summary
● MaxScale 0.1 -> 2.0
○ One epoll instance that all worker threads wait on.
○ Any thread can handle anything.
○ Lots of locking needed, and lots of potential for hard-to-resolve races.
○ Performance problems.
● MaxScale 2.1
○ One epoll instance per worker thread.
○ Any thread can accept, but must distribute the client socket to a particular thread.
○ All activity related to a particular session handled by one thread.
○ Significantly reduced need for locking and race risk effectively eliminated.
○ Good performance.
● MaxScale 2.2
○ One epoll instance for “shared” descriptors (listening sockets).
○ One epoll instance per worker thread.
○ All activity related to a particular session handled by one thread.
○ Even less locking needed.
○ Good performance.
Where do We Go From Here?
● The architectural evolution of MaxScale can be summarized as:
○ Decrease the explicit coupling between the worker threads.
■ If that leads to duplicate work or increased memory usage, fine.
● We are likely to continue moving in that direction, so that conceptually we
will end up running N “mini”-MaxScales in parallel,
completely oblivious of each other.
● That would also make it easy to allow starting and stopping threads
while MaxScale is running.
The architecture of SkySQL
The architecture of SkySQLThe architecture of SkySQL
The architecture of SkySQLMariaDB plc
 

More from MariaDB plc (20)

MariaDB Paris Workshop 2023 - MaxScale 23.02.x
MariaDB Paris Workshop 2023 - MaxScale 23.02.xMariaDB Paris Workshop 2023 - MaxScale 23.02.x
MariaDB Paris Workshop 2023 - MaxScale 23.02.x
 
MariaDB Paris Workshop 2023 - Newpharma
MariaDB Paris Workshop 2023 - NewpharmaMariaDB Paris Workshop 2023 - Newpharma
MariaDB Paris Workshop 2023 - Newpharma
 
MariaDB Paris Workshop 2023 - Cloud
MariaDB Paris Workshop 2023 - CloudMariaDB Paris Workshop 2023 - Cloud
MariaDB Paris Workshop 2023 - Cloud
 
MariaDB Paris Workshop 2023 - MariaDB Enterprise
MariaDB Paris Workshop 2023 - MariaDB EnterpriseMariaDB Paris Workshop 2023 - MariaDB Enterprise
MariaDB Paris Workshop 2023 - MariaDB Enterprise
 
MariaDB Paris Workshop 2023 - Performance Optimization
MariaDB Paris Workshop 2023 - Performance OptimizationMariaDB Paris Workshop 2023 - Performance Optimization
MariaDB Paris Workshop 2023 - Performance Optimization
 
MariaDB Paris Workshop 2023 - MaxScale
MariaDB Paris Workshop 2023 - MaxScale MariaDB Paris Workshop 2023 - MaxScale
MariaDB Paris Workshop 2023 - MaxScale
 
MariaDB Paris Workshop 2023 - novadys presentation
MariaDB Paris Workshop 2023 - novadys presentationMariaDB Paris Workshop 2023 - novadys presentation
MariaDB Paris Workshop 2023 - novadys presentation
 
MariaDB Paris Workshop 2023 - DARVA presentation
MariaDB Paris Workshop 2023 - DARVA presentationMariaDB Paris Workshop 2023 - DARVA presentation
MariaDB Paris Workshop 2023 - DARVA presentation
 
MariaDB Tech und Business Update Hamburg 2023 - MariaDB Enterprise Server
MariaDB Tech und Business Update Hamburg 2023 - MariaDB Enterprise Server MariaDB Tech und Business Update Hamburg 2023 - MariaDB Enterprise Server
MariaDB Tech und Business Update Hamburg 2023 - MariaDB Enterprise Server
 
MariaDB SkySQL Autonome Skalierung, Observability, Cloud-Backup
MariaDB SkySQL Autonome Skalierung, Observability, Cloud-BackupMariaDB SkySQL Autonome Skalierung, Observability, Cloud-Backup
MariaDB SkySQL Autonome Skalierung, Observability, Cloud-Backup
 
Einführung : MariaDB Tech und Business Update Hamburg 2023
Einführung : MariaDB Tech und Business Update Hamburg 2023Einführung : MariaDB Tech und Business Update Hamburg 2023
Einführung : MariaDB Tech und Business Update Hamburg 2023
 
Hochverfügbarkeitslösungen mit MariaDB
Hochverfügbarkeitslösungen mit MariaDBHochverfügbarkeitslösungen mit MariaDB
Hochverfügbarkeitslösungen mit MariaDB
 
Die Neuheiten in MariaDB Enterprise Server
Die Neuheiten in MariaDB Enterprise ServerDie Neuheiten in MariaDB Enterprise Server
Die Neuheiten in MariaDB Enterprise Server
 
Global Data Replication with Galera for Ansell Guardian®
Global Data Replication with Galera for Ansell Guardian®Global Data Replication with Galera for Ansell Guardian®
Global Data Replication with Galera for Ansell Guardian®
 
Introducing workload analysis
Introducing workload analysisIntroducing workload analysis
Introducing workload analysis
 
Under the hood: SkySQL monitoring
Under the hood: SkySQL monitoringUnder the hood: SkySQL monitoring
Under the hood: SkySQL monitoring
 
Introducing the R2DBC async Java connector
Introducing the R2DBC async Java connectorIntroducing the R2DBC async Java connector
Introducing the R2DBC async Java connector
 
MariaDB Enterprise Tools introduction
MariaDB Enterprise Tools introductionMariaDB Enterprise Tools introduction
MariaDB Enterprise Tools introduction
 
Faster, better, stronger: The new InnoDB
Faster, better, stronger: The new InnoDBFaster, better, stronger: The new InnoDB
Faster, better, stronger: The new InnoDB
 
The architecture of SkySQL
The architecture of SkySQLThe architecture of SkySQL
The architecture of SkySQL
 

Recently uploaded

专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改
专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改
专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改yuu sss
 
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档208367051
 
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfBoston Institute of Analytics
 
GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]📊 Markus Baersch
 
RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.natarajan8993
 
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...ssuserf63bd7
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPramod Kumar Srivastava
 
Semantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxSemantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxMike Bennett
 
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024thyngster
 
While-For-loop in python used in college
While-For-loop in python used in collegeWhile-For-loop in python used in college
While-For-loop in python used in collegessuser7a7cd61
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...Florian Roscheck
 
RadioAdProWritingCinderellabyButleri.pdf
RadioAdProWritingCinderellabyButleri.pdfRadioAdProWritingCinderellabyButleri.pdf
RadioAdProWritingCinderellabyButleri.pdfgstagge
 
Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...Seán Kennedy
 
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort servicejennyeacort
 
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...limedy534
 
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝DelhiRS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhijennyeacort
 
Learn How Data Science Changes Our World
Learn How Data Science Changes Our WorldLearn How Data Science Changes Our World
Learn How Data Science Changes Our WorldEduminds Learning
 
Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Colleen Farrelly
 

Recently uploaded (20)

专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改
专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改
专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改
 
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
 
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
 
GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]
 
RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.
 
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
 
Semantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxSemantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptx
 
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
 
While-For-loop in python used in college
While-For-loop in python used in collegeWhile-For-loop in python used in college
While-For-loop in python used in college
 
Call Girls in Saket 99530🔝 56974 Escort Service
Call Girls in Saket 99530🔝 56974 Escort ServiceCall Girls in Saket 99530🔝 56974 Escort Service
Call Girls in Saket 99530🔝 56974 Escort Service
 
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 
RadioAdProWritingCinderellabyButleri.pdf
RadioAdProWritingCinderellabyButleri.pdfRadioAdProWritingCinderellabyButleri.pdf
RadioAdProWritingCinderellabyButleri.pdf
 
Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...
 
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
 
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
 
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝DelhiRS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
 
Learn How Data Science Changes Our World
Learn How Data Science Changes Our WorldLearn How Data Science Changes Our World
Learn How Data Science Changes Our World
 
Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024
 

M|18 Architectural Overview: MariaDB MaxScale

  • 2. Overview ● What is MaxScale ● Architecture ● Performance ● Summary
  • 4. What is MaxScale ● Cluster Abstraction ○ Hides the complexity. ○ Load Balancer ○ High Availability ○ Easier Maintenance ● And more ○ Firewall ○ Data masking ○ Logging ○ Cache ○ ... Client MaxScale Master Slave Slave
  • 5. Read Write Splitting ● Analyze statements ○ Send where appropriate Client MaxScale Master Slave Slave
  • 6. Read Write Splitting ● Analyze statements ○ Send where appropriate ● Write statements to master Client MaxScale Master Slave Slave > INSERT INTO ...
  • 7. Read Write Splitting ● Analyze statements ○ Send where appropriate ● Write statements to master ● Read statements to some slave Client MaxScale Master Slave Slave > SELECT * ...
  • 8. Read Write Splitting ● Analyze statements ○ Send where appropriate ● Write statements to master ● Read statements to some slave ● Session statements to all servers Client MaxScale Master Slave Slave > SET autocommit
  • 10. ......... Static Architecture Protocol Authenticator Filter Router Query Classifier Monitor MariaDBClient MySQLAuth ... DBFwfilter ... ReadWriteSplit qc_sqlite ... MariaDBMon Core ● Threading ● Logging ● Plugin loading ● Lifetime management ● REST-API ● Admin Functionality ● etc. APIs MaxScale
  • 12. Code MaxScale: 147 kloc Core: 51 kloc Authenticators: 5 kloc Filters: 27 kloc Routers: 43 kloc Monitors: 12 kloc Protocols: 9 kloc Modules: 96 kloc For comparison: ● MariaDB server: 2500 kloc
  • 13. Threading Architecture ● MaxScale is essentially a router. ○ It receives SQL packets from numerous clients and dispatches them to one or more servers. ○ It waits for responses from one or more servers and sends a response to the client. ○ The number of clients may be large. ● Basic alternatives: ○ One thread per client. ○ Asynchronous I/O and a fixed number of threads. ● The reason is lost in the mists of history, but MaxScale is implemented using the latter approach.
  • 14. Asynchronous I/O in Principle. ● Basically: ● When there is no activity, the thread is idle. ● When something happens, the thread wakes up and handles the events. ○ May involve initiating asynchronous I/O whose result is later reported as an event. ● Once the event has been handled, the thread returns to waiting for events. setup(); while (true) { io_events events = wait_for_io(); handle_events(events); } ● Create some file descriptors ● Make them non-blocking ● Add them to some waiting mechanism. ● Wait for something to happen to those file descriptors ● Handle whatever happened
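The loop on the slide can be made concrete with a minimal runnable sketch (hypothetical `epoll_demo` helper; a pipe stands in for a real socket, and a zero timeout stands in for the idle wait):

```c
#include <unistd.h>
#include <sys/epoll.h>

/* Minimal level-triggered epoll loop body: register a descriptor,
 * wait for events, handle them. Returns 1 when the event fired. */
int epoll_demo(void)
{
    int fds[2];
    if (pipe(fds) != 0)            /* fds[0] = read end, fds[1] = write end */
        return -1;

    int epoll_fd = epoll_create1(0);
    if (epoll_fd < 0)
        return -1;

    struct epoll_event ev = { 0 };
    ev.events = EPOLLIN;           /* wake up when the read end has data */
    ev.data.fd = fds[0];
    epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fds[0], &ev);

    /* Nothing written yet: a zero-timeout wait reports no events. */
    struct epoll_event events[1];
    int before = epoll_wait(epoll_fd, events, 1, 0);

    write(fds[1], "x", 1);         /* "something happens"... */

    int after = epoll_wait(epoll_fd, events, 1, 0);  /* ...and the wait returns */

    close(fds[0]); close(fds[1]); close(epoll_fd);
    return after - before;         /* 1 - 0 == 1 when the event was reported */
}
```

In a real event loop the `epoll_wait` timeout would be -1 (block until activity) and the returned events would be dispatched to handlers, as the later slides show.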
  • 15. So How do You Wait on Events? ● select ○ The original mechanism, been around since the beginning of time. Fixed size limit on the number of descriptors. O(N) ● poll ○ No limit on number of descriptors. O(N). ● epoll ○ More complex to set up. No limit on number of descriptors. All changes via system calls, i.e. thread safe. O(1). Epoll is not a better poll, it’s different.
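A small illustration of the `poll` interface the slide contrasts with epoll: the caller passes the whole descriptor array on every call and the kernel scans it each time, which is where the O(N) cost comes from (a pipe stands in for real connections; hypothetical `poll_demo` helper):

```c
#include <poll.h>
#include <unistd.h>

/* With poll() the full pollfd array is handed to the kernel on every
 * call; with epoll, descriptors are registered once via epoll_ctl. */
int poll_demo(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;

    struct pollfd pfds[2] = {
        { .fd = fds[0], .events = POLLIN  },   /* read end  */
        { .fd = fds[1], .events = POLLOUT },   /* write end */
    };

    /* An empty pipe is writable but not readable. */
    int n = poll(pfds, 2, 0);                  /* zero timeout: just probe */

    close(fds[0]); close(fds[1]);
    /* exactly one descriptor (the write end) should be ready */
    return (n == 1 && (pfds[1].revents & POLLOUT)) ? 0 : -1;
}
```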
  • 16. MaxScale epoll Setup ● At startup, socket creation is triggered by the presence of listeners. [TheListener] type=listener service=TheService ... port=4009 epoll_fd = epoll_create(...); so = socket(...); ... listen(so); ... struct epoll_event ev; ev.events = events; ev.data.ptr = data; epoll_ctl(epoll_fd, EPOLL_CTL_ADD, so, &ev);
  • 17. Client Connection Client MaxScale while (!shutdown) { struct epoll_event events[MAX_EVENTS]; int ndfs = epoll_wait(epoll_fd, events, ...); for (int i = 0; i < ndfs; ++i) { epoll_event* event = &events[i]; handle_event(event); } }
  • 18. Client Connection, cont’d void handle_event(struct epoll_event* event) { if (event->events & EPOLLIN) { if (descriptor was a listening socket) { handle_accept(event); } else { handle_read(event); } } if (event->events & ...) { ... } }
  • 19. Handle Accept void handle_accept(struct epoll_event* event) { for (all servers in service) { int so; connect to each server; struct epoll_event ev; ev.events = events; ev.data.ptr = data; epoll_ctl(epoll_fd, EPOLL_CTL_ADD, so, &ev); } } [TheService] type=service router=readwritesplit servers=server1,server2 ...
  • 20. Handle Read void handle_read(struct epoll_event* event) { char buffer[MAX_SIZE]; read(sd, buffer, sizeof(buffer)); figure out what to do with the data // - wait for more // - authenticate // - send to master, send to slaves, send to all // ... ... } ● When the servers reply, the response will be handled in a similar manner.
  • 21. Binding Things Together ● A Session object ties together the client connection and all server connections associated with that client connection. ● A DCB is the representation of a connection/descriptor: the Session owns the client-side DCB and 1..* backend DCBs, while the router session (MXS_ROUTER_SESSION, e.g. RWSplitSession) sits on the other side of the plugin boundary.
  • 22. MaxScale 1.0 - 2.0 epoll_fd = epoll_create(...); Thread 1 while (!shutdown) { epoll_wait(epoll_fd, ...); ... } Thread 2 while (!shutdown) { epoll_wait(epoll_fd, ...); ... } Thread 3 while (!shutdown) { epoll_wait(epoll_fd, ...); ... }
  • 23. Problematic with one epoll Instance ● There are multiple socket descriptors for each client session. ○ One for the client connection. ○ One for every backend server. ● It is possible that an event on each of those is concurrently handled by as many threads. ○ Client have issued a request that has been sent to all servers. ○ Response arrives from each server at the same time the client closes its connection. ● Session data ends up being manipulated by many threads concurrently.
  • 24. Implications ● Lots of locks and locking were needed. ○ Primarily spinlocks, intended to be held for brief periods of time. ● Events for a socket could be reported to one thread while another thread was still handling earlier events for that same socket. ○ Event extraction and event handling had to be decoupled => locking. ● Very hard to be sure no deadlocks could occur. ● Very hard to be sure no races were possible. ● Very hard to program, as it was not always obvious what could and what could not occur concurrently. ● The locks started to hurt under high load with lots of clients.
  • 25. MaxScale 2.1 Thread 1 while (!shutdown) { epoll_wait(epoll_fd, ...); ... } epoll_fd = epoll_create(...); Thread 2 while (!shutdown) { epoll_wait(epoll_fd, ...); ... } epoll_fd = epoll_create(...); Thread 3 while (!shutdown) { epoll_wait(epoll_fd, ...); ... } epoll_fd = epoll_create(...);
  • 26. MaxScale 2.1 ● Each thread has an epoll instance of its own. ● When a client connects: ○ The thread that handles the client will also handle all communication with all backends on behalf of that client. ○ All descriptors belonging to a particular client session are added only to the epoll instance of the thread in question. ● Listening sockets are still an exception; they are added to the poll set of all threads. ○ After accepting, the client socket is moved in a round-robin fashion to some thread. ● This had a huge impact on performance.
  • 27. MaxScale 2.2 ● Remove the last traces of inter-thread communication. ● Basic problem: How to distribute new connections among existing threads? ● New connections should be distributed across different threads in a roughly even manner. ● All ports must be treated in the same way. ○ So a particular port cannot e.g. be permanently assigned to a specific thread.
  • 28. epoll ● Two ways events can be triggered: ○ Edge-triggered: reported when something has happened. ○ Level-triggered: reported when something is available. (Diagram: a signal moving between Inactive and Active; edge-triggered reports the transition, level-triggered reports for as long as the state is Active.)
  • 29. Implication of edge/level triggered epoll 1. The file descriptor that represents the read side of a pipe (rfd) is registered on the epoll instance. 2. A pipe writer writes 2 kB of data on the write side of the pipe. 3. A call to epoll_wait(2) is done that will return rfd as a ready file descriptor. 4. The pipe reader reads 1 kB of data from rfd. 5. A call to epoll_wait(2) is done. ● If rfd was added using EPOLLET (edge-triggered) then the call at 5 will hang. ● EPOLLET requires ○ Non-blocking descriptors. ○ Events can be waited for (epoll_wait) only after read or write return EAGAIN. Example straight from $ man epoll
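The man-page scenario above can be reproduced directly; a zero timeout stands in for the call that would otherwise hang (hypothetical `edge_trigger_demo` helper):

```c
#include <unistd.h>
#include <fcntl.h>
#include <sys/epoll.h>

/* The epoll(7) pitfall: with EPOLLET the second wait reports nothing
 * even though 1 kB is still buffered in the pipe. Returns the event
 * count of the second wait (0 demonstrates the pitfall). */
int edge_trigger_demo(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;
    fcntl(fds[0], F_SETFL, O_NONBLOCK);   /* EPOLLET requires non-blocking */

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN | EPOLLET,
                              .data = { .fd = fds[0] } };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &ev);

    char buf[2048] = { 0 };
    write(fds[1], buf, sizeof(buf));      /* writer puts 2 kB in the pipe */

    struct epoll_event events[1];
    int first = epoll_wait(epfd, events, 1, 0);   /* reports rfd ready */

    read(fds[0], buf, 1024);              /* reader consumes only 1 kB */

    /* A blocking wait would hang here: no new edge has occurred. */
    int second = epoll_wait(epfd, events, 1, 0);

    close(fds[0]); close(fds[1]); close(epfd);
    return first == 1 ? second : -1;
}
```

With level-triggered registration (no `EPOLLET`) the second wait would report the descriptor again, because data is still available.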
  • 30. Two kind of Descriptors ● Listening sockets that all threads should handle. ● Sockets related to a client session that only a particular thread should handle. ● What’s the problem with the listening sockets being in the epoll instance of each thread (as in MaxScale 2.1)? ○ Also the listening socket must be non-blocking and added using EPOLLET. ○ A thread that returns from epoll_wait must call accept on the listening socket until it returns EWOULDBLOCK. ○ So, either we must accept that a thread suddenly may have to deal with a large number of clients (if there is a sudden surge) or a thread must be able to offload an accepted client socket to another thread.
  • 31. What we Want ● Each thread does not need to accept more than one client at a time. ○ That is, EPOLLET cannot be used. ● We don’t have to manipulate the epoll instance of a thread, from outside the thread. ○ Listening sockets are a global resource while sockets related to a client session are thread local resources. ○ Not having to do that also means that making it possible to increase and decrease the number of threads at runtime becomes easier.
  • 32. But epoll instances can also be waited for. ● If an epoll file descriptor has events waiting, then it will indicate that as being readable. ● So, ○ if a file descriptor is added to an epoll instance, and ○ the descriptor of that epoll instance is added to another second epoll instance, then ○ when something happens to the file descriptor, a thread blocked in an epoll_wait call on the second epoll instance will return. ● If the thread now calls epoll_wait on the first epoll instance, it will return with actual file descriptor on which some change has occurred.
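The mechanism described above can be demonstrated in a few lines (hypothetical `nested_epoll_demo` helper; a pipe stands in for a client socket):

```c
#include <unistd.h>
#include <sys/epoll.h>

/* An epoll fd can itself be waited on: when the inner instance has
 * pending events, the outer wait reports the inner fd as readable. */
int nested_epoll_demo(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;

    int inner = epoll_create1(0);   /* holds the actual descriptors   */
    int outer = epoll_create1(0);   /* a worker thread waits on this  */

    struct epoll_event ev = { .events = EPOLLIN, .data = { .fd = fds[0] } };
    epoll_ctl(inner, EPOLL_CTL_ADD, fds[0], &ev);

    ev.data.fd = inner;             /* register the inner epoll fd on the outer */
    epoll_ctl(outer, EPOLL_CTL_ADD, inner, &ev);

    write(fds[1], "x", 1);          /* activity on the real descriptor... */

    struct epoll_event events[1];
    int n = epoll_wait(outer, events, 1, 0);   /* ...wakes the outer wait */
    int inner_ready = (n == 1 && events[0].data.fd == inner);

    /* Waiting on the inner instance then yields the actual descriptor. */
    n = epoll_wait(inner, events, 1, 0);
    int fd_ready = (n == 1 && events[0].data.fd == fds[0]);

    close(fds[0]); close(fds[1]); close(inner); close(outer);
    return (inner_ready && fd_ready) ? 0 : -1;
}
```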
  • 33. MaxScale 2.2 Thread N l_fd = epoll_create(...); struct epoll_event ev; ev.events = EPOLLIN; // NOT EPOLLET epoll_ctl(l_fd, EPOLL_CTL_ADD, g_fd, &ev); while (!shutdown) { epoll_wait(l_fd, ...); ... } g_fd = epoll_create(...); void add_listening_socket(int sd) { struct epoll_event ev; ev.events = EPOLLIN; // NOT EPOLLET epoll_ctl(g_fd, EPOLL_CTL_ADD, sd, &ev); } void add_client_socket(int l_fd, int sd) { struct epoll_event ev; ev.events = .. | EPOLLET; epoll_ctl(l_fd, EPOLL_CTL_ADD, sd, &ev); }
  • 34. Client Connecting typedef void (*handler_t)(epoll_event*); while (!shutdown) { struct epoll_event events[MAX_EVENTS]; int ndfs = epoll_wait(epoll_fd, events, ...); for (int i = 0; i < ndfs; ++i) { epoll_event* event = &events[i]; handler_t handler = get_handler(event); handler(event); } } void handle_epoll_event(epoll_event* event) { struct epoll_event events[1]; int fd = get_fd(event); // fd == g_fd epoll_wait(fd, events, 1, 0); // 0 timeout. event = &events[0]; handler_t handler = get_handler(event); handler(event); } void handle_accept_event(epoll_event* event) { int sd = get_fd(event); int cd; while ((cd = accept(sd, ...)) >= 0) { ... add_client_socket(cd, ...); } }
  • 35. get_handler(event) and get_fd(event)? typedef union epoll_data { void *ptr; int fd; uint32_t u32; uint64_t u64; } epoll_data_t; int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event); int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout); struct epoll_event { uint32_t events; epoll_data_t data; }; ● When adding a descriptor to an epoll instance you can associate a value with it. ● When something occurs you get that value back. ○ If you do not store the fd, you do not know which fd the event relates to. ○ If you store the fd, you cannot store anything else.
  • 36. Storing More Context With an Event typedef uint32_t (*mxs_poll_handler_t)(struct mxs_poll_data* data, int wid, uint32_t events); typedef struct mxs_poll_data { mxs_poll_handler_t handler; /*< Handler for this particular kind of mxs_poll_data. */ } MXS_POLL_DATA; typedef struct dcb { MXS_POLL_DATA poll; int fd; ... } DCB; static uint32_t dcb_poll_handler(MXS_POLL_DATA *data, ...) { DCB *dcb = (DCB*)data; ... }; DCB* create_dcb(...) { DCB* dcb = alloc_dcb(...); dcb.poll.handler = dcb_poll_handler; return dcb; } class Worker : private MXS_POLL_DATA { public: Worker() { MXS_POLL_DATA::handler = &Worker::epoll_handler; ... }; static uint32_t epoll_handler(MXS_POLL_DATA* data, ...) { return ((Worker*)data)->handler(...); } int fd; };
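The pattern above can be reduced to a runnable form (simplified names; MaxScale's real structures carry much more state): store the address of an embedded header structure in `data.ptr`, make the handler pointer its first member, and dispatch through it on wake-up.

```c
#include <stdint.h>
#include <unistd.h>
#include <sys/epoll.h>

struct poll_data;
typedef int (*poll_handler_t)(struct poll_data* data, uint32_t events);

typedef struct poll_data {
    poll_handler_t handler;   /* must be the first member */
} POLL_DATA;

typedef struct dcb {          /* a connection: POLL_DATA plus its own state */
    POLL_DATA poll;
    int fd;
    int events_handled;
} DCB;

static int dcb_handler(POLL_DATA* data, uint32_t events)
{
    DCB* dcb = (DCB*)data;    /* safe: POLL_DATA is the first member */
    (void)events;
    return ++dcb->events_handled;
}

int dispatch_demo(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;

    DCB dcb = { .poll = { dcb_handler }, .fd = fds[0], .events_handled = 0 };

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data = { .ptr = &dcb.poll } };
    epoll_ctl(epfd, EPOLL_CTL_ADD, dcb.fd, &ev);

    write(fds[1], "x", 1);

    struct epoll_event events[1];
    int n = epoll_wait(epfd, events, 1, 0);
    POLL_DATA* data = (POLL_DATA*)events[0].data.ptr;
    int handled = (n == 1) ? data->handler(data, events[0].events) : 0;

    close(fds[0]); close(fds[1]); close(epfd);
    return handled;           /* 1: the DCB's handler ran once */
}
```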
  • 37. Adding and Extracting Events void poll_add_fd(int fd, uint32_t events, MXS_POLL_DATA* pData) { struct epoll_event ev; ev.events = events; ev.data.ptr = pData; epoll_ctl(m_epoll_fd, EPOLL_CTL_ADD, fd, &ev); } DCB* dcb = ...; poll_add_fd(dcb->fd, ..., &dcb->poll); Worker* pWorker = ...; poll_add_fd(pWorker->fd, ..., pWorker); while (!should_shutdown) { struct epoll_event events[MAX_EVENTS]; int n = epoll_wait(epoll_fd, events, MAX_EVENTS, -1); for (int i = 0; i < n; ++i) { MXS_POLL_DATA* data = (MXS_POLL_DATA*)events[i].data.ptr; data->handler(data, ..., events[i].events); } } Each worker thread sits in this loop.
  • 39. MaxScale 2.0.5 (up to) Hardware: Two physical servers, 16 cores / 32 hyperthreads, 128 GB RAM and an SSD drive, connected over a GbE LAN. One runs MaxScale and sysbench, the other runs 4 MariaDB servers set up as a Master and 3 Slaves. Workload: OLTP read-only, 100 simple selects per iteration, no transaction boundaries. ● direct: Sysbench uses all servers directly in round-robin fashion. ● rcr: MaxScale readconnroute router. ● rws: MaxScale readwritesplit router.
  • 40. MaxScale 2.1.0 ● The architectural change that allowed the removal of a large number of locks provided a dramatic improvement for readconnroute. ● No change for readwritesplit. ● With a small number of clients the newly introduced cache improved performance; with a large number it had no impact.
  • 41. Query Classification ● When read-write splitting, MaxScale must parse each statement. ○ Does it need to be sent to the master, to some slave or to all servers? ● The classification is done using a significantly modified parser from sqlite. ● In each thread, the parsing is done using a thread-specific in-memory database. Thread 1 sqlite Thread 2 sqlite ● No shared data, so there should be no contention. ● But sqlite was not built using the right flags, so there was serialization going on.
  • 42. Data Collection ● While parsing a statement, a fair amount of information was collected. ○ What tables and columns are accessed. What functions are called. Etc. ● Allocating memory for that information did not come without a cost. ○ Basically only the firewall filter uses that information. ● Now no information is collected by default, but a filter that is interested in that information must express it explicitly. qc_parse_result_t parse_result = qc_parse(stmt, QC_COLLECT_ALL);
  • 43. Custom Parser ● Many routers and filters need to know whether a transaction is ongoing. ● Up until MaxScale 2.1.1 that implied that the statements had to be parsed using the query classifier. ● For MaxScale 2.1.2 we introduced a custom parser that only detects statements affecting the autocommit mode & transaction state. ○ Much faster than full parsing. ● In MaxScale 2.3 we will rely upon the server telling the autocommit mode & transaction state. ○ Implies that changes performed via prepared statements or functions will also be detected.
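A hypothetical sketch of the idea behind such a parser: instead of a full SQL parse, inspect only the leading keyword to spot statements that can affect the transaction or autocommit state. This is illustrative only; MaxScale's actual custom parser is more thorough (it also inspects `SET autocommit=...` values, comments, and so on).

```c
#include <ctype.h>
#include <string.h>
#include <strings.h>

/* Returns 1 if the statement might change transaction/autocommit state
 * (and therefore needs closer inspection), 0 if a full parse can be
 * skipped for transaction-tracking purposes. Hypothetical sketch. */
int affects_transaction_state(const char* stmt)
{
    while (isspace((unsigned char)*stmt))
        ++stmt;                             /* skip leading whitespace */

    static const char* keywords[] = {
        "BEGIN", "COMMIT", "ROLLBACK", "START", "SET"
    };
    for (size_t i = 0; i < sizeof(keywords) / sizeof(keywords[0]); ++i) {
        size_t n = strlen(keywords[i]);
        if (strncasecmp(stmt, keywords[i], n) == 0 &&
            (stmt[n] == '\0' || stmt[n] == ';' || isspace((unsigned char)stmt[n])))
            return 1;                       /* candidate: inspect further */
    }
    return 0;                               /* e.g. plain SELECT/INSERT */
}
```

Matching only a handful of keywords is far cheaper than the full sqlite-derived parse, which is the point the slide makes.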
  • 45. Cache ● The cache was introduced in 2.1.0 but the performance was less than satisfactory. ● Problem was caused by parsing. ○ The cache parsed all statements to detect non-cacheable statements. ○ E.g. SELECT CURRENT_DATE(); ● Added possibility to declare that all SELECT statements are cacheable. [TheCache] type=filter module=cache ... selects=assume_cacheable ● Huge impact on the performance.
  • 46. MaxScale ReadWriteSplit ● In the best case the performance of MaxScale 2.1.3 is three times that of MaxScale 2.0.5, and eight times if caching is used.
  • 47. The Importance of Early Customer Feedback ● MaxScale caches user information so that it can authenticate users. ● In MaxScale 2.2.0 the user database was shared between threads. ● This worked fine when connection attempts were relatively rare and sessions were relatively long. Thread 1 Users Thread 2 ● If a user is not found, any thread may refresh the user data from the server and update the database. ● All access must use locks.
  • 48. User Report ● With MaxScale 2.2.0 Beta a user reported that he got only 6000 qps. function event(thread_id) db_connect() rs = db_query("select 1;") db_disconnect() ● Reason turned out to be thread contention in relation to the user database. Thread 1 Users Thread 2 Users ● We split it, so that each thread has its own user database. “I just tested the same case, got 361437 queries/second, I think it works for us”
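The fix, per-thread copies instead of one shared, locked structure, can be sketched as follows (illustrative only; plain counters stand in for the per-thread user databases):

```c
#include <pthread.h>

/* Each worker owns its data, so the hot path needs no mutex at all;
 * results are only aggregated after the threads have been joined. */
enum { THREADS = 4, LOOKUPS = 100000 };

struct worker {
    pthread_t thread;
    long lookups;             /* thread-private: no lock required */
};

static void* worker_main(void* arg)
{
    struct worker* w = arg;
    for (int i = 0; i < LOOKUPS; ++i)
        ++w->lookups;         /* touch only this thread's own data */
    return NULL;
}

long per_thread_demo(void)
{
    struct worker workers[THREADS] = { 0 };

    for (int i = 0; i < THREADS; ++i)
        pthread_create(&workers[i].thread, NULL, worker_main, &workers[i]);

    long total = 0;
    for (int i = 0; i < THREADS; ++i) {
        pthread_join(workers[i].thread, NULL);
        total += workers[i].lookups;   /* aggregation only at the end */
    }
    return total;                      /* THREADS * LOOKUPS, race-free */
}
```

The trade-off stated on the final slide applies here: each thread may duplicate work (e.g. fetching the same user data from the server), and memory usage grows, in exchange for a lock-free hot path.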
  • 49. What about MaxScale 2.2.2 ● No real difference, which is good, because 2.2 does more than 2.1. ○ E.g. must catch “SET SESSION SQL_MODE=ORACLE”
  • 51. Summary ● MaxScale 0.1 -> 2.0 ○ One epoll instance that all worker threads wait on. ○ Any thread can handle anything. ○ Lots of locking needed, and lots of potential for hard-to-resolve races. ○ Performance problems. ● MaxScale 2.1 ○ One epoll instance per worker thread. ○ Any thread can accept, but must distribute the client socket to a particular thread. ○ All activity related to a particular session handled by one thread. ○ Significantly reduced need for locking and race risk effectively eliminated. ○ Good performance. ● MaxScale 2.2 ○ One epoll instance for “shared” descriptors (listening sockets). ○ One epoll instance per worker thread. ○ All activity related to a particular session handled by one thread. ○ Even less locking needed. ○ Good performance.
  • 52. Where do We Go From Here? ● The architectural evolution of MaxScale can be summarized as: ○ Decrease the explicit coupling between the worker threads. ■ If that leads to duplicate work or increased memory usage, fine. ● We are likely to continue moving in that direction still, so that we conceptually will end up running N “mini”-MaxScales in parallel, completely oblivious of each other. ● That would also make it easy to allow the starting and stopping of threads, while MaxScale is running.