The document discusses the challenges of scaling a PostgreSQL database for a SaaS backend with growing data. It describes how the company initially separated OLTP and OLAP data into separate databases but later unified them into a single database approach. It discusses partitioning the data using separate databases for each customer account and the benefits and limitations of this approach. It also covers additional performance issues encountered and solutions implemented, including advisory locks, bulk loading optimizations, and maintaining spare databases to speed up new account creation. The document emphasizes the importance of schemas for code versioning and staging releases.
1. SCALING A SAAS BACKEND WITH POSTGRESQL – A CASE STUDY
PostgreSQL Conference Europe
Madrid 2014-10-24
Oliver Seemann - Bidmanagement GmbH
oliver.seemann@adspert.net
Hi, I’m Oliver, I’m a software developer, currently heading the development team at Bidmanagement GmbH in Berlin.
I’m going to talk about how we’re using PostgreSQL as the main datastore in our system.
None of the solutions or approaches is particularly novel.
But by using some pg features in a non-standard way certain problems can be solved quite elegantly
And seeing that this works very well for us, it may be helpful to some of you when you face similar problems now or in the future.
Mostly in the area of search engine marketing
Which today is mostly AdWords, though we also support other networks, for example Yandex.
Our flagship product is a fully automatic bid management solution. Every day we’re changing the bids on tens of millions of keywords and learn from the effects to steer campaign performance towards goals configured by the user.
The philosophy is to take the mind numbing number crunching tasks away from the user, because a computer can do it better and much more efficiently, especially when you have thousands or millions of objects to analyze.
- Replicate the campaign structure
Provide reporting interface
I don’t want to bore you with the technical details about how search engine marketing works, so let’s just say we store a lot of ints and floats, and especially time series data of those.
To get an idea of the ballpark we’re working with let’s have a look at the upper boundary
Ballpark estimates
Upper bound
Time series data
Hierarchical data
Clicks, impressions and also lots of statistical projections with confidence intervals
Of course most of those values are actually zero and those can be omitted when storing the data.
So it may actually only be 5 or 10% of that.
However we have thousands of accounts, most of which only have a few hundred MB to a few GB.
But the occasional outlier with 100GB must work just as well.
The different kinds of data we store can be largely separated into two groups.
One internal (batch processing),
One external (web app access)
Mostly the time series data
So we had to either duplicate lots of data and synchronize changes.
Or integrate both into one and make sure different parts of the system don’t get in each other’s way.
We opted for the latter because it makes for a simpler system.
We just have to make sure the different parts don’t get in each other’s way.
So far it has turned out well and we haven’t looked back.
Let’s have a peek into the past in order to understand how the system evolved.
Our CTO is a mathematician
Skunk works project
PostgreSQL supports partitioning via inheritance
Use CHECK constraints to tell Query Planner where to look
Inserts must target the correct child table (or be redirected by a trigger on the parent)
Lot of effort goes to application logic
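The inheritance-based partitioning the slides refer to looks roughly like this (table and column names are illustrative, not from the talk):

```sql
-- Hypothetical parent table holding per-account time series data.
CREATE TABLE stats (
    account_id  int   NOT NULL,
    day         date  NOT NULL,
    clicks      int   NOT NULL DEFAULT 0
);

-- One child table per partition, with a CHECK constraint so the
-- planner (with constraint_exclusion enabled) can skip children
-- that cannot contain matching rows.
CREATE TABLE stats_account_42 (
    CHECK (account_id = 42)
) INHERITS (stats);

-- The application must insert into the right child table itself
-- (or install a redirect trigger on the parent) -- this is the
-- application-logic effort mentioned above.
INSERT INTO stats_account_42 (account_id, day, clicks)
VALUES (42, '2014-10-01', 17);

-- Queries against the parent then scan only matching children:
SET constraint_exclusion = on;
SELECT sum(clicks) FROM stats WHERE account_id = 42;
```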
Tried it on one table, but weren’t convinced
The database or schema as a logical unit is a central part of PG with good tool support
Easy to add, easy to drop
Can be backed up
Restored
Moved between machines
Very tangible from an ops view
MainDB still replicated
To enable quick failover
Here we can’t afford extended downtime
Can make availability / cost trade offs here
Big cheap HDDs
Bottleneck is Gigabit Ethernet
Capacity doubled, cost reduced 40%
The more servers, the faster the restore
Gbit Ethernet on backup server is limiting factor
Not really feasible:
We rewrite lots of data every day (crude approach, but simpler code)
Complex Administration (no dedicated DBA)
From sequential reads to random reads
The cause of the problem is only on one side:
Webapp-queries with humans waiting are quite fast
Problematic queries done by the analysis jobs
Frequent full table scans
Queries with huge results
Need way to synchronize queries, control concurrency
Could use a connection pooler
Or an external synchronization mechanism
e.g. Zookeeper
Very simple mechanism
Unfair, but that’s no problem
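Advisory locks provide exactly such a simple mechanism: the application picks an arbitrary key and PostgreSQL serializes the callers. A minimal sketch (the key value is made up):

```sql
-- Session 1: take an exclusive advisory lock on an
-- application-chosen key; this blocks until the lock is free.
SELECT pg_advisory_lock(421001);

-- ... run the heavy analysis job ...

SELECT pg_advisory_unlock(421001);

-- A non-blocking variant returns true/false immediately,
-- useful to skip work instead of queueing for it:
SELECT pg_try_advisory_lock(421001);
```

Waiters are not granted the lock in any guaranteed fair order, which matches the "unfair, but that's no problem" remark above.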
However, it’s starting to spread with a tendency to be mis-used.
An ALTER INDEX foo DISABLE would come in handy.
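Lacking such a statement, the usual workaround for bulk loads is to drop indexes beforehand and rebuild them afterwards (a sketch; table, index, and file names are illustrative):

```sql
BEGIN;

-- Maintaining indexes row-by-row during a bulk load is expensive;
-- dropping and rebuilding is typically much faster.
DROP INDEX IF EXISTS stats_account_42_day_idx;

COPY stats_account_42 (account_id, day, clicks)
FROM '/tmp/stats.csv' WITH (FORMAT csv);

CREATE INDEX stats_account_42_day_idx
    ON stats_account_42 (day);

COMMIT;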
We added a self-service signup
2-minute process to add AdWords account to the system
OAuth User Info Optimization Bootstrap
Biggest problem:
CREATE DATABASE can take several minutes
Depends on current amount of write activity
More granular checkpoint (per db) would be cool?
Restrict checkpoints to databases?
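One way around the slow CREATE DATABASE (it forces a checkpoint) is to keep a pool of pre-created spare databases and merely rename one at signup time; renaming is a quick catalog operation. A sketch with illustrative names:

```sql
-- Prepared ahead of time, when write load is low:
CREATE DATABASE spare_001 TEMPLATE account_template;

-- At signup, just rename -- a fast catalog update with no
-- checkpoint (requires that nobody is connected to spare_001):
ALTER DATABASE spare_001 RENAME TO account_4711;
```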
So all of the drawbacks that came up could be worked around, more or less elegantly.
In total, we’re very happy with the way the approach has turned out. Especially the scalability and isolation aspects of it have us very pleased.
So much in fact, that we also used it for a second product and it feels very natural.
Databases as a unit of abstraction on a client or account level are very much tangible.
Which makes it comfortable both from a development and an operations point of view.
They can be connected to, renamed, cloned, copied, moved, backed up and restored.
When we remove a customer from the system we just dump the account databases and put them on S3 Glacier for some amount of time, instead of keeping the 100GB in the system.
To manage capacity. Currently this is still a manual process because it’s not required very often.
Making it automatic would require amongst other things a means to briefly disable the app from connecting to it.
Does “ALTER DATABASE set CONNECTION LIMIT 0” work?
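CONNECTION LIMIT 0 does reject new non-superuser connections, but sessions that are already connected stay; combining it with pg_terminate_backend covers both (sketch, database name illustrative):

```sql
-- Reject new connections (superusers are exempt):
ALTER DATABASE account_4711 CONNECTION LIMIT 0;

-- Kick out sessions that are already connected:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'account_4711'
  AND pid <> pg_backend_pid();
```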
Moving between hosts means we can also move it between PG versions.
We upgraded from 9.0 to 9.3 without much effort by installing both on all machines and then dumping the databases one after another from 9.0 and restoring into 9.3, over a period of 2-3 months.
Memory is not a problem as shared_buffers is relatively low (a few gigabytes) and most memory is used by page and buffer cache and all files continue to exist only once.
Used 9.3 in development for a few months
Btw, I only remember one case where we needed to adapt code for 9.3, something with the order of a query result. Otherwise the upgrade was a breeze.
But even though this works very well using databases as partitions, would schemas have worked the same way?
The biggest problem we would have had is that we wouldn’t be able to use schemas for other purposes anymore.
This has become necessary.
It has grown quite a bit because we started with lots of Perl code and a “dumb” data store
Up to 100GB memory in step 2
Only works when we can limit the concurrent batch jobs per machine (advisory locks).
But it’s not all sunshine and rainbows with that approach, of course.
Because SQL is much harder to write and to read than procedural code.
The notion that “code is a liability” has some truth to it. So the more we move into the database, the harder it becomes to manage.
Python is just much more tangible and malleable than SQL.
We have to compromise between easy to debug&test and performance.
But, given a bit of time and quiet one can accomplish much with little code in SQL.
Testing of individual snippets can be done by calling it from the application code, as part of an integrated test suite that has test data and expects certain results.
Covering most of the code in tests is not the problem, but covering most data scenarios is much more work (div by zero sneaks in from time to time).
Those cases are postponed to …
The SQL code decides how to spend millions of euros in advertising money every month.
We can’t afford deploying any code changes (app or db) to all account databases at the same time.
So we use schemas to manage multiple versions of the optimization code.
The schema is filled with all the objects from a set of source .sql files.
The application software version and the db schema version it uses are identical; the app sets the search_path.
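The version-per-schema mechanism can be sketched like this (schema, function, and version names are illustrative):

```sql
-- Each release loads its .sql source files into a schema
-- named after the release:
CREATE SCHEMA v2_7;

CREATE FUNCTION v2_7.compute_bid(keyword_id int)
RETURNS numeric
LANGUAGE sql
AS $$ SELECT 0.42 $$;  -- placeholder body

-- An application built as version 2.7 selects its matching
-- database code purely via the search path:
SET search_path TO v2_7, public;
SELECT compute_bid(123);  -- resolves to v2_7.compute_bid
```

Multiple releases can then coexist in the same database, and rolling an account forward or back is just a matter of which search_path the connecting application sets.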
We don’t use minor versions for fixes in the db code.
What we do is, assign each version a stage.
Unstable, testing, stable (borrowed from Debian).
And we also can assign individual client accounts a stage.
Typically test accounts or one with a pathological case that is fixed by the new release.
Those are closely monitored (performance, errors, log files, debugging data).
Brand new unstable: few, selected (test-)accounts
Testing stage for incremental roll-out