International Journal of Database Management Systems ( IJDMS ) Vol.8, No.6, December 2016
DOI: 10.5121/ijdms.2016.8603
HIGH AVAILABILITY AND LOAD BALANCING FOR
POSTGRESQL DATABASES: DESIGNING AND
IMPLEMENTING
Pablo Bárbaro Martinez Pedroso
Universidad De Las Ciencias Informáticas, Cuba
ABSTRACT
The aim of this paper is to design and implement an approach that provides high availability and load
balancing for PostgreSQL databases. A shared-nothing architecture and a replicated centralized middleware
architecture were selected in order to achieve data replication and load balancing among database
servers, and pgpool-II was used to implement these architectures. In addition, taking advantage of pgpool-II as
a framework, several Bash scripts were developed for the restoration of corrupted databases. In order to
avoid a single point of failure, Linux-HA (Heartbeat and Pacemaker) was used, whose responsibility is
to monitor all processes involved in the whole solution. As a result, applications can operate with a more
reliable PostgreSQL database service; the approach is most suitable for applications with a heavier load of
read statements (typically SELECT) than write statements (typically UPDATE). The approach presented is
intended only for Linux-based operating systems.
KEYWORDS
Distributed Database, Parallel Database.
1. INTRODUCTION
With the fast development of connectivity, web applications have grown enormously, partly
because of the growing number of clients and of the requests they issue. These
applications work with an underlying database, and interruptions of that database may cause
unexpected behavior. For small applications this may be permissible, but for critical
applications constant database interruptions are unacceptable. For instance, airport applications
that monitor and control air traffic should be equipped with a database capable of serving
the data every time it is requested; if that database goes down, the airport may suffer a serious
catastrophe. Many phenomena can bring down the services provided by databases:
disconnections or failures of the electronic devices responsible for connectivity, such as
wires, switches and routers; hardware and operating system failures involving main memory,
processors or storage devices, short circuits and unexpected outages; software malfunction of the
database management system, cyber-attacks, and undetected or unsolved programming bugs; even
natural phenomena such as cyclones, tornadoes, earthquakes, lightning and fires. Any of these
situations can stop database servers from continuing to offer their services. For this reason it is
necessary to design and implement a solution that provides high availability and load balancing
for databases.
High availability for databases can be accomplished through replication of data, that is, storing and
maintaining one or more copies (replicas) of the databases on multiple sites (servers,
nodes) (Raghu Ramakrishnan 2002). This is useful to support failover, a technique that enables
automatic redirection of transactions from a failed site to another site that stores a copy of the
data (M. Tamer Özsu 2011).
A replication solution is based mainly on two decisions: where updates (write statements) are
performed, and how the updates are propagated to the copies (M. Tamer Özsu 2011). In (Jim Gray
1996) two alternatives are defined for where updates can be performed. The first, called group
(master-master, update anywhere), states that any site holding a copy of a data item can update it.
The second, called master (master-slave), assigns each object (data item) to a master site: only the
master can update the primary copy of the object, while all other replicas are read-only and wait
for the propagation of updates carried out at the master site.
Gray et al. 1996 also define two ways of propagating updates: eager (synchronous) and lazy
(asynchronous) replication. Eager replication applies every update to all copies as a single global
transaction; as a result, either all sites commit the update or none does, which ensures that all
sites hold a full copy of the data at all times. Lazy replication, on the other hand, first commits
updates at the master server (the site owning the primary copy of the object), and later propagates
them to the other replicas as a separate transaction per server; as a result, part of the time not all
sites have up-to-date data.
Different combinations can be obtained, as the table below shows.

Table 1: Combinations of where and how updates can be performed (Jim Gray 1996).

                          Eager (synchronous)        Lazy (asynchronous)
  Group (update anywhere) eager group replication    lazy group replication
  Master (primary copy)   eager master replication   lazy master replication
Each of these combinations has its advantages and drawbacks depending on its use. Therefore the
configuration must be selected (according to its characteristics) for the most suitable
environment or situation. The conceptual basis of database replication is available in (Bettina
Kemme 2010; M. Tamer Özsu 2011), while (Salman Abdul Moiz 2011) offers a survey of database
replication tools.
2. MATERIAL AND METHODS
The scheme selected is synchronous multi-master replication. As stated previously, in
synchronous replication every site has a full copy of the data; as a consequence, only one site is
needed to process a read statement. Load balancing can therefore be achieved, since read
transactions can be distributed and processed in parallel among the servers. This avoids service
breakdowns due to overload and also decreases response times for client applications, thanks to
the parallel processing of read statements. The main drawback of this scheme is that the global
transaction involving all sites is limited by the connectivity speed between them; therefore this
approach is recommended for systems with a large volume of read statements rather than large
numbers of update transactions. Another important feature can be observed in the failover
mechanism. If the master node fails (master-slave case), the slave promoted as the new master
does not require additional operations to recover the last consistent state of the failed master;
time is lost only in detecting that the master node has failed and promoting a slave as the new
master. In the multi-master case the behavior is the same. In the critical scenario where n − 1
nodes have failed (with n the total number of nodes), the last node remaining online and
consistent can guarantee the services by itself until the failed nodes come back online. (A failed
site is a database server that, for some reason such as software malfunction, network failure or an
inconsistent state reached, is not able to provide services.)
Synchronous replication in a multi-master scheme uses a protocol named ROWA (Read One
Write All). This means transactions reach the commit step if and only if all sites apply the
changes; as can be seen, if any of the sites is down, none of the updates can be committed (David
Taniar 2008). This protocol assures consistency among the nodes at the expense of availability.
Although high consistency among nodes is an important feature, it is possible to continue
accepting updates and restore the down sites later, so that high availability is maintained along
the process. To overcome the previous limitation the ROWA-A protocol (Read One Write All
Available) was designed, in which updates are processed by all the sites that are still online
(David Taniar 2008). Although this approach may compromise consistency on the servers that
were not part of a transaction, the down sites can be identified and restored when they come back
online, so the inconsistency is resolved.
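As a toy illustration of the two commit rules just described (this is not pgpool-II code; the site list is hypothetical), ROWA aborts as soon as any site is down, while ROWA-A commits on whatever subset of sites is still online:

```shell
#!/bin/bash
# States of the replica sites; one site is down, as in a failover scenario.
sites=(up up down)

# ROWA: commit only if every site can apply the update.
rowa_commit() {
  local s
  for s in "${sites[@]}"; do
    [ "$s" = up ] || { echo "abort"; return; }
  done
  echo "commit"
}

# ROWA-A: commit on all sites that are still online (at least one is needed).
rowa_a_commit() {
  local s n=0
  for s in "${sites[@]}"; do
    [ "$s" = up ] && n=$((n + 1))
  done
  if [ "$n" -ge 1 ]; then echo "commit on $n sites"; else echo "unavailable"; fi
}

rowa_commit      # one site is down, so ROWA aborts the update
rowa_a_commit    # ROWA-A commits on the online sites
```

The sites written under ROWA-A while a node was down are exactly what the online recovery procedure of Section 3 later reconciles.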
2.1. DISTRIBUTED DATABASE ARCHITECTURE
A physical design is needed to collocate data across sites (Raghu Ramakrishnan 2002). The
architecture of the distributed database system is based on the shared-nothing architecture: each
node has its own processor unit (P), main memory (M), storage device (SD) and database
management system, all running on independent operating systems interconnected through a
network. Since corrupted servers can be detached instantaneously and the remaining servers can
resume operations, high availability can be achieved; besides, new nodes can be added easily,
which leads to a scalable solution (M. Tamer Özsu 2011; Raghu Ramakrishnan 2002). The image
below shows the architecture adopted.

Figure 1: Shared Nothing Architecture. Source: (Raghu Ramakrishnan 2002).
2.2. DATABASE REPLICATION ARCHITECTURE
The replication architecture is middleware-based, specifically replicated centralized
middleware replication (Bettina Kemme 2010). This architecture removes the single point of
failure, since at least one standby middleware is kept waiting in case the primary fails; in that
case the standby middleware becomes primary and resumes operations. The middleware is placed
between the clients and the servers: it receives all client requests and forwards them to the
relevant replicas, and the resulting answers from the database servers are relayed by the
middleware to the corresponding clients. Write statements (typically INSERT, UPDATE,
DELETE) are sent to all sites, which keeps the system fully replicated; read statements (typically
SELECT) are processed by only one site. Load balancing is thus achieved, as well as high
availability.
(Note: in the ROWA protocol, with at least one site down and at least one site online, data can
still be served but cannot be updated; from then on the databases are considered unavailable for
writes.)
Figure 2: Replicated Centralized Middleware Architecture. Source: (Bettina Kemme 2010).
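The routing rule of this architecture can be sketched as follows (a toy model, not pgpool-II code; the backend names are hypothetical): read statements go to a single backend, chosen round-robin here, while write statements are forwarded to every backend so the replicas stay synchronized:

```shell
#!/bin/bash
# Hypothetical backend database servers behind the middleware.
BACKENDS=(node1 node2 node3)
RR=0   # round-robin index for read statements

route() {
  local sql="$1" lower b
  lower=$(echo "$sql" | tr '[:upper:]' '[:lower:]')
  case "$lower" in
    select*)
      # Read statement: one backend is enough, since every site holds a full copy.
      echo "${BACKENDS[$RR]}: $sql"
      RR=$(( (RR + 1) % ${#BACKENDS[@]} ))
      ;;
    *)
      # Write statement: send to every backend to keep the replicas in sync.
      for b in "${BACKENDS[@]}"; do echo "$b: $sql"; done
      ;;
  esac
}

route "SELECT * FROM flights"        # served by a single backend
route "UPDATE flights SET gate = 7"  # sent to all backends
```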
2.3. TOOLS
Given these architectures, some tools are needed to implement them; this section discusses that
topic. Pgpool-II is used as the PostgreSQL database middleware. It provides connection pooling,
database replication and load balancing. Besides, pgpool-II implements the ROWA-A protocol,
so when a failover occurs the online database servers resume the operations of the applications.
This is an important feature because it assures the availability of database services even when at
most n − 1 servers have failed, with n the total number of database servers; as can be seen, the
more database servers are used, the higher the level of availability of the database services. At
the same time, the more middlewares are used (instances of pgpool-II running on different sites),
the greater the certainty that services will still be running after the primary (or any standby
promoted to primary) fails. In order to provide high availability to the pgpool-II and PostgreSQL
processes and to the network services, Linux-HA (High Availability) was used. Linux-HA is the
origin of two projects, Heartbeat and Pacemaker. Heartbeat provides the messaging layer between
all nodes involved, while Pacemaker is a CRM (cluster resource manager) whose purpose is to
perform complex configurable operations to provide high availability to resources (processes,
daemons, services) (Team 2015); this involves restarting a stopped resource and maintaining a set
of rules or restrictions between resources.

For a detailed description of how to install and configure these tools, consult (Group 2014) for
pgpool-II, (Project 2014) for Heartbeat and (Team 2015) for Pacemaker.
2.4. ARCHITECTURE OF THE APPROACH
After the previous decisions were made, a more general architecture is presented (Figure 3). As
can be seen, the replicated centralized middleware is encapsulated at the middleware level,
whereas the shared-nothing architecture is at the database servers level. At a higher level,
monitoring of processes is carried out through Linux-HA, at the middleware level and at the
database servers level as well.
(Note: "Pgpool-II maintains established connections to the PostgreSQL servers, and reuses them
whenever a new connection with the same properties (i.e. user name, database, protocol version)
comes in. It reduces the connection overhead, and improves system's overall throughput."
(Group 2014).)
Connections are represented by arrows, and should be fast enough to avoid a bottleneck in the
messaging and communication process.

Figure 3: General architecture.

An interesting feature of this architecture is that it is not restricted to the tools used in this paper;
different tools can implement it, which also makes it operating-system independent.
3. RELATED WORK
Previous works have implemented the replicated centralized middleware architecture (Felix Noel
Abelardo Santana and Méndez Roldán; Partio 2007; Pupo 2013; Rodríguez 2011; Santana 2010)
through pgpool-II. Some of them use an asynchronous master-slave scheme (Felix Noel Abelardo
Santana and Méndez Roldán; Partio 2007), while others are based on the synchronous multi-master
replication scheme (Pupo 2013; Rodríguez 2011; Santana 2010).

However, none of these solutions has planned for the restoration of a corrupted database. This is
known as online recovery: the capability of restoring a crashed database while the online
databases are still servicing clients. Neither had the monitoring of the PostgreSQL instances been
designed.
Pgpool-II provides a two-phase mechanism to achieve database restoration. The first phase is
triggered when a database is detected as failed, executing a script that performs a PostgreSQL
physical backup from an online master to the failed site. During this phase the available sites
keep processing client requests, saving the changes into WAL (Write-Ahead Log) files. Once the
physical backup is done, the first stage is finished. At the start of the second stage clients are
disconnected and a second script is executed, which synchronizes the WAL files generated during
the first phase to the failed site, assuring that this site is fully recovered. After this the node is
attached to the system and database operations are resumed. Taking advantage of Linux, the
scripts triggered during online recovery were developed in Bash, which is a native interpreter of
Linux-based operating systems, so no extra software is needed. Restoration of databases is based
on the Point in Time Recovery feature of PostgreSQL, pgpool-II as the event-trigger mechanism,
and Bash as the executor of predefined routines encapsulated in multiple scripts. The recipe
adopted for implementing this process was extracted from (Simon Riggs 2010) and follows the
steps described in Algorithm 1.
In order to detail the whole recovery process, Algorithms 2 and 3 present the procedures
implemented in the scripts. Let failed node be the crashed database server targeted to be
recovered; master node the database server from which the recovery process is carried out and
therefore the scripts executed; PGDATA the PostgreSQL database directory; and
Backup-mm-dd-yyyy.tar.gz a compressed file of the PGDATA PostgreSQL database directory,
with date format mm-dd-yyyy for month, day and year respectively.
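To make the first stage concrete, the following is a minimal, hypothetical sketch of what a copy_base_backup script could look like; command names and paths are illustrative, not the paper's actual script, and RUN=echo turns it into a dry run that only prints the commands it would execute:

```shell
#!/bin/bash
# Dry-run sketch of a 1st-stage recovery script (copy_base_backup).
# Arguments: $1 = master's PGDATA, $2 = failed (target) host, $3 = remote PGDATA.
# RUN=echo prints the commands instead of executing them; unset RUN to run for real.
RUN="${RUN:-echo}"

first_stage() {
  local master_pgdata="$1" failed_host="$2" remote_pgdata="$3"
  local stamp backup
  stamp=$(date +%m-%d-%Y)                     # mm-dd-yyyy, as defined above
  backup="Backup-${stamp}.tar.gz"

  # 1. Tell PostgreSQL a physical (base) backup is starting; WAL produced from
  #    this point on is what the 2nd stage will ship to the failed node.
  $RUN psql -c "SELECT pg_start_backup('online-recovery')" postgres

  # 2. Compress the master's data directory and copy it to the failed node.
  $RUN tar czf "/tmp/${backup}" -C "$(dirname "$master_pgdata")" "$(basename "$master_pgdata")"
  $RUN scp "/tmp/${backup}" "${failed_host}:/tmp/"
  $RUN ssh "$failed_host" "tar xzf /tmp/${backup} -C $(dirname "$remote_pgdata")"

  # 3. Mark the backup as finished; pgpool-II then moves on to the 2nd stage.
  $RUN psql -c "SELECT pg_stop_backup()" postgres
}

RUN=echo first_stage /var/lib/postgresql/data node2 /var/lib/postgresql/data
```

pg_start_backup/pg_stop_backup are the PostgreSQL 9.x functions for delimiting a physical backup, matching the PostgreSQL version contemporary with this paper.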
Once the 1st stage is accomplished and there are no active client connections, the routine
programmed in Algorithm 3 is executed as the 2nd stage of the recovery process. Its
responsibility is to synchronize the changes made during the 1st stage (see Algorithm 2). At the
end of Algorithm 3 the pgpool_remote_start script is triggered; its function is to start the
postgresql resource (i.e. the PostgreSQL process) so that Pacemaker can monitor that process.
After this the recovery process is accomplished: the failed node is fully recovered and clients can
resume connections to the middleware level.
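Similarly, a hypothetical dry-run sketch of the second stage (pgpool_recovery_pitr) and of pgpool_remote_start might look like this; the names and paths are illustrative, and pg_switch_xlog is the pre-PostgreSQL-10 function for forcing a WAL segment switch:

```shell
#!/bin/bash
# RUN=echo makes this a dry run that only prints the commands it would execute.
RUN="${RUN:-echo}"

# 2nd stage: ship the WAL generated during the 1st stage to the failed node,
# so Point in Time Recovery can replay it there when PostgreSQL starts.
pgpool_recovery_pitr() {
  local master_pgdata="$1" failed_host="$2" remote_pgdata="$3"
  # Force a WAL segment switch so the latest changes become shippable files.
  $RUN psql -c "SELECT pg_switch_xlog()" postgres
  # Copy the WAL files to the failed node.
  $RUN rsync -a "${master_pgdata}/pg_xlog/" "${failed_host}:${remote_pgdata}/pg_xlog/"
}

# Start PostgreSQL on the recovered node through Pacemaker rather than pg_ctl,
# so the cluster resource manager monitors the process from now on.
pgpool_remote_start() {
  local failed_host="$1"
  $RUN ssh "$failed_host" "crm resource start postgresql"
}

RUN=echo pgpool_recovery_pitr /var/lib/postgresql/data node2 /var/lib/postgresql/data
RUN=echo pgpool_remote_start node2
```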
The PostgreSQL servers are where the Bash scripts are executed, through stored procedures
invoked from pgpool-II. Because of this, pgpool-II needs to be configured with recovery_user and
recovery_password in the pgpool.conf configuration file in order to connect to the databases and
trigger the recovery stored procedure. The paths of the Bash scripts, relative to the PGDATA
directory (i.e. the location where the scripts are placed on the PostgreSQL database servers),
must also be provided as arguments: recovery_1st_stage_command and
recovery_2nd_stage_command for copy_base_backup and pgpool_recovery_pitr respectively.
The pgpool_remote_start script does not need to be set, only to be located in the PGDATA
directory. For creating the recovery stored procedures please consult (Group 2014).
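For illustration, the relevant pgpool.conf entries could look like the following; the parameter names are pgpool-II's, while the user, password and script names are example values:

```
# pgpool.conf -- online recovery settings (illustrative values)
recovery_user = 'postgres'                          # user allowed to trigger recovery
recovery_password = 'secret'
recovery_1st_stage_command = 'copy_base_backup'     # script stored in $PGDATA
recovery_2nd_stage_command = 'pgpool_recovery_pitr' # script stored in $PGDATA
# pgpool_remote_start needs no entry here; it only has to exist in $PGDATA.
```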
Online recovery can be triggered manually, allowing not only the restoration of a database server
but also the addition of new servers, since it carries out a full recovery. Moreover, online
recovery is designed to be executed automatically in order to avoid the intervention of a database
administrator.
Also, none of the surveyed works that use Heartbeat makes use of Pacemaker (Felix Noel
Abelardo Santana and Méndez Roldán; Pupo 2013; Rodríguez 2011; Santana 2010). Although
Heartbeat contains a CRM capable of providing high availability, it is a primitive version of
Pacemaker, incapable of complex operations such as location and colocation restrictions on
resources. Without these it is not possible to guarantee high availability of the PostgreSQL
processes and network services on all sites. Moreover, pgpool-II could be running and expecting
client connections on one server while the database services' IP address is expecting database
client connections on another server, which could lead to unavailability of the database services.
To overcome this limitation, a colocation restriction through Pacemaker can be used.
As stated previously, process monitoring is carried out across the system through
Heartbeat/Pacemaker, so it needs to be installed on every server used. The LSB-class init scripts
pgpool2, postgresql and networking must be added as resources in order to monitor the pgpool-II
and PostgreSQL processes and the networking services. The OCF-class script IPaddr2 must also
be added as a resource, with an IP address (or subnet address) set as its argument; once IPaddr2 is
running it will publish this address so that clients can connect to the middleware level.
As can be seen, some constraints must be defined in order to control the resources' behavior.
Pgpool2 and IPaddr2 can only be executed at the middleware level, whereas postgresql runs at
the database server level; these are location constraints. On the other hand, pgpool2 and IPaddr2
should be running on the same middleware (i.e. the active one), otherwise pgpool-II could be
running and expecting client connections on one middleware while IPaddr2 publishes the
database services' IP address on another, which could lead to unavailability of the database
services.
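Sketched in the crm shell, such a configuration might read as follows; the node names, virtual IP and monitoring intervals are hypothetical, while the resource classes and constraint keywords follow the Pacemaker crm shell syntax:

```
# crm configure -- illustrative resources and constraints
primitive pgpool2 lsb:pgpool2 op monitor interval=30s
primitive postgresql lsb:postgresql op monitor interval=30s
primitive IPaddr2 ocf:heartbeat:IPaddr2 \
    params ip=192.168.0.100 cidr_netmask=24 op monitor interval=10s
# Location constraints: pgpool2/IPaddr2 only on middleware nodes (mw1, mw2),
# postgresql only on database nodes (db1, db2).
location pgpool2-on-mw pgpool2 rule -inf: #uname ne mw1 and #uname ne mw2
location ip-on-mw IPaddr2 rule -inf: #uname ne mw1 and #uname ne mw2
location pg-on-db postgresql rule -inf: #uname ne db1 and #uname ne db2
# Colocation constraint: the service address follows the active pgpool-II instance.
colocation ip-with-pgpool inf: IPaddr2 pgpool2
```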
3.1. RESULTS AND BRIEF DISCUSSION
Shared nothing was selected as the distributed database architecture, providing high availability
and scalability, whereas replicated centralized middleware is used with synchronous multi-master
replication in order to keep the database servers fully synchronized. Several tools were used to
accomplish this. Pgpool-II was used as the middleware responsible for data replication and load
balancing and as the framework for online recovery. Heartbeat and Pacemaker were used to
monitor all processes involved in the solution, such as pgpool-II, the PostgreSQL instances and
IPaddr2 (i.e. the database services' IP address), and to enforce the restrictions between these
processes. Multiple scripts were developed in Bash in order to achieve the restoration of
corrupted databases; such restoration can be done manually and/or automatically, depending on
the requirements specified.
4. CONCLUSIONS
1. As a result of this investigation, a solution was designed and implemented to provide high
availability and load balancing for PostgreSQL databases. Applications can therefore
count on a more reliable PostgreSQL database service.
2. Having multiple sites with a full copy of the data allows not only a failover mechanism
but also load balancing among the servers. Since every server has an entire, consistent
copy of the data, each of them can process any read statement, which avoids a breakdown
of the services due to overload.
3. Using more than one middleware (one active, at least one standby) removes the single
point of failure, so high availability is also obtained at the middleware level.
4. Since the distributed database architecture is shared nothing, servers can be added or
removed in a few configuration steps, which makes the approach presented flexible and
scalable.
5. All the tools selected are open source and developed for Linux, so the solution of this
investigation is intended only for Linux-based operating systems.
6. Since global transactions (for write statements) are limited by connectivity speed, this
approach is more suitable and recommended for workloads with large amounts of read
statements rather than write statements.
5. FUTURE WORK
1. Designing and carrying out several experiments in order to measure the performance of
heavy read workloads, and obtaining a mathematical model that describes performance as
a function of the number of database servers. Such a model could predict performance so
that more precise decisions can be made when scaling the solution.
2. Designing and implementing a solution intended for large amounts of write statements.
Since no single solution fits all kinds of problems, heavy write workloads should also be
explored in order to produce an approach in that direction.
3. The approach presented in this paper is intended only for Linux-based operating systems;
it needs to be extended to other operating systems.
REFERENCES
[1] Bettina Kemme, Ricardo Jiménez-Peris, Marta Patiño-Martínez. Database Replication. Edited by
M. T. Özsu. Morgan & Claypool, 2010. ISBN 9781608453825.
[2] David Taniar, Clement H. C. Leung, Wenny Rahayu, Sushant Goel. High-Performance Parallel
Database Processing and Grid Databases. Edited by A. Zomaya. New Jersey: Wiley, 2008. 574 p.
[3] Felix Noel Abelardo Santana, P. Y. P. P., Ernesto Ahmed Mederos, Javier Menéndez Rizo, Isuel
and H. P. P. Méndez Roldán. Databases Cluster PostgreSQL 9.0 High Performance, 7.
[4] Group, P. G. D. Pgpool-II Manual Page. 2014. http://www.pgpool.net/docs/latest/pgpool-en.html
[5] Jim Gray, Pat Helland, Patrick O'Neil, Dennis Shasha. The Dangers of Replication and a Solution.
In Proceedings of the ACM SIGMOD Int. Conference on Management of Data. Montreal, Canada:
ACM, 1996, p. 10.
[6] M. Tamer Özsu, Patrick Valduriez. Principles of Distributed Database Systems. Springer, 2011.
866 p. ISBN 978-1-4419-8833-1.
[7] Partio, M. Evaluation of PostgreSQL Replication and Load Balancing Implementations. 2007.
[8] Project, L. H. Linux-HA User's Guide. 2014. http://www.linux-ha.org/doc/users-guide/users-guide.html
[9] Pupo, I. R. Base de Datos de Alta Disponibilidad para la Plataforma VideoWeb 2.0. University of
Informatics Science, 2013.
[10] Raghu Ramakrishnan, Johannes Gehrke. Database Management Systems. McGraw-Hill, 2002.
931 p. ISBN 0-07-246535-2.
[11] Rodríguez, M. R. R. Solución de Clúster de Bases de Datos para Sistemas que Gestionan Grandes
Volúmenes de Datos Utilizando PostgreSQL. Universidad de las Ciencias Informáticas, 2011.
[12] Salman Abdul Moiz, S. P., Venkataswamy G., Supriya N. Pal. Database Replication: A Survey of
Open Source and Commercial Tools. International Journal of Computer Applications, 2011, 13(6), 8.
[13] Santana, Y. O. Clúster de Altas Prestaciones para Medianas y Pequeñas Bases de Datos que
Utilizan a PostgreSQL como Sistema de Gestión de Bases de Datos. Universidad de las Ciencias
Informáticas, 2010.
[14] Simon Riggs, Hannu Krosing. PostgreSQL 9 Administration Cookbook. Birmingham, UK: Packt
Publishing Ltd., 2010. ISBN 978-1-849510-28-8.
[15] Team, P. D. Pacemaker Main Page. 2015. http://clusterlabs.org/wiki/Main_Page

Resume AMIT VYAS new
 
Chapter4
Chapter4Chapter4
Chapter4
 
Chapter2
Chapter2Chapter2
Chapter2
 
Chapter3
Chapter3Chapter3
Chapter3
 
Chapter1
Chapter1Chapter1
Chapter1
 
Professional LCD Displays
Professional LCD DisplaysProfessional LCD Displays
Professional LCD Displays
 
Horòscop,
Horòscop,Horòscop,
Horòscop,
 

Similaire à HIGH AVAILABILITY AND LOAD BALANCING FOR POSTGRESQL DATABASES: DESIGNING AND IMPLEMENTING.

Presentation on Large Scale Data Management
Presentation on Large Scale Data ManagementPresentation on Large Scale Data Management
Presentation on Large Scale Data ManagementChris Bunch
 
The Concept of Load Balancing Server in Secured and Intelligent Network
The Concept of Load Balancing Server in Secured and Intelligent NetworkThe Concept of Load Balancing Server in Secured and Intelligent Network
The Concept of Load Balancing Server in Secured and Intelligent NetworkIJAEMSJORNAL
 
Distributed Algorithms
Distributed AlgorithmsDistributed Algorithms
Distributed Algorithms913245857
 
The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...
The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...
The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...iosrjce
 
Talon systems - Distributed multi master replication strategy
Talon systems - Distributed multi master replication strategyTalon systems - Distributed multi master replication strategy
Talon systems - Distributed multi master replication strategySaptarshi Chatterjee
 
60141457-Oracle-Golden-Gate-Presentation.ppt
60141457-Oracle-Golden-Gate-Presentation.ppt60141457-Oracle-Golden-Gate-Presentation.ppt
60141457-Oracle-Golden-Gate-Presentation.pptpadalamail
 
Veritas Failover3
Veritas Failover3Veritas Failover3
Veritas Failover3grogers1124
 
Software architecture case study - why and why not sql server replication
Software architecture   case study - why and why not sql server replicationSoftware architecture   case study - why and why not sql server replication
Software architecture case study - why and why not sql server replicationShahzad
 
HMR Log Analyzer: Analyze Web Application Logs Over Hadoop MapReduce
HMR Log Analyzer: Analyze Web Application Logs Over Hadoop MapReduce HMR Log Analyzer: Analyze Web Application Logs Over Hadoop MapReduce
HMR Log Analyzer: Analyze Web Application Logs Over Hadoop MapReduce ijujournal
 
HMR LOG ANALYZER: ANALYZE WEB APPLICATION LOGS OVER HADOOP MAPREDUCE
HMR LOG ANALYZER: ANALYZE WEB APPLICATION LOGS OVER HADOOP MAPREDUCEHMR LOG ANALYZER: ANALYZE WEB APPLICATION LOGS OVER HADOOP MAPREDUCE
HMR LOG ANALYZER: ANALYZE WEB APPLICATION LOGS OVER HADOOP MAPREDUCEijujournal
 
An asynchronous replication model to improve data available into a heterogene...
An asynchronous replication model to improve data available into a heterogene...An asynchronous replication model to improve data available into a heterogene...
An asynchronous replication model to improve data available into a heterogene...Alexander Decker
 
Survey on Load Rebalancing for Distributed File System in Cloud
Survey on Load Rebalancing for Distributed File System in CloudSurvey on Load Rebalancing for Distributed File System in Cloud
Survey on Load Rebalancing for Distributed File System in CloudAM Publications
 
Data Distribution Handling on Cloud for Deployment of Big Data
Data Distribution Handling on Cloud for Deployment of Big DataData Distribution Handling on Cloud for Deployment of Big Data
Data Distribution Handling on Cloud for Deployment of Big Dataneirew J
 
Data Distribution Handling on Cloud for Deployment of Big Data
Data Distribution Handling on Cloud for Deployment of Big DataData Distribution Handling on Cloud for Deployment of Big Data
Data Distribution Handling on Cloud for Deployment of Big Dataijccsa
 
Designing distributed systems
Designing distributed systemsDesigning distributed systems
Designing distributed systemsMalisa Ncube
 
HbaseHivePigbyRohitDubey
HbaseHivePigbyRohitDubeyHbaseHivePigbyRohitDubey
HbaseHivePigbyRohitDubeyRohit Dubey
 

Similaire à HIGH AVAILABILITY AND LOAD BALANCING FOR POSTGRESQL DATABASES: DESIGNING AND IMPLEMENTING. (20)

Presentation on Large Scale Data Management
Presentation on Large Scale Data ManagementPresentation on Large Scale Data Management
Presentation on Large Scale Data Management
 
The Concept of Load Balancing Server in Secured and Intelligent Network
The Concept of Load Balancing Server in Secured and Intelligent NetworkThe Concept of Load Balancing Server in Secured and Intelligent Network
The Concept of Load Balancing Server in Secured and Intelligent Network
 
Distributed Algorithms
Distributed AlgorithmsDistributed Algorithms
Distributed Algorithms
 
The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...
The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...
The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...
 
J017367075
J017367075J017367075
J017367075
 
Talon systems - Distributed multi master replication strategy
Talon systems - Distributed multi master replication strategyTalon systems - Distributed multi master replication strategy
Talon systems - Distributed multi master replication strategy
 
60141457-Oracle-Golden-Gate-Presentation.ppt
60141457-Oracle-Golden-Gate-Presentation.ppt60141457-Oracle-Golden-Gate-Presentation.ppt
60141457-Oracle-Golden-Gate-Presentation.ppt
 
Veritas Failover3
Veritas Failover3Veritas Failover3
Veritas Failover3
 
Software architecture case study - why and why not sql server replication
Software architecture   case study - why and why not sql server replicationSoftware architecture   case study - why and why not sql server replication
Software architecture case study - why and why not sql server replication
 
HMR Log Analyzer: Analyze Web Application Logs Over Hadoop MapReduce
HMR Log Analyzer: Analyze Web Application Logs Over Hadoop MapReduce HMR Log Analyzer: Analyze Web Application Logs Over Hadoop MapReduce
HMR Log Analyzer: Analyze Web Application Logs Over Hadoop MapReduce
 
HMR LOG ANALYZER: ANALYZE WEB APPLICATION LOGS OVER HADOOP MAPREDUCE
HMR LOG ANALYZER: ANALYZE WEB APPLICATION LOGS OVER HADOOP MAPREDUCEHMR LOG ANALYZER: ANALYZE WEB APPLICATION LOGS OVER HADOOP MAPREDUCE
HMR LOG ANALYZER: ANALYZE WEB APPLICATION LOGS OVER HADOOP MAPREDUCE
 
Facade
FacadeFacade
Facade
 
An asynchronous replication model to improve data available into a heterogene...
An asynchronous replication model to improve data available into a heterogene...An asynchronous replication model to improve data available into a heterogene...
An asynchronous replication model to improve data available into a heterogene...
 
Survey on Load Rebalancing for Distributed File System in Cloud
Survey on Load Rebalancing for Distributed File System in CloudSurvey on Load Rebalancing for Distributed File System in Cloud
Survey on Load Rebalancing for Distributed File System in Cloud
 
Document 22.pdf
Document 22.pdfDocument 22.pdf
Document 22.pdf
 
Database System Architectures
Database System ArchitecturesDatabase System Architectures
Database System Architectures
 
Data Distribution Handling on Cloud for Deployment of Big Data
Data Distribution Handling on Cloud for Deployment of Big DataData Distribution Handling on Cloud for Deployment of Big Data
Data Distribution Handling on Cloud for Deployment of Big Data
 
Data Distribution Handling on Cloud for Deployment of Big Data
Data Distribution Handling on Cloud for Deployment of Big DataData Distribution Handling on Cloud for Deployment of Big Data
Data Distribution Handling on Cloud for Deployment of Big Data
 
Designing distributed systems
Designing distributed systemsDesigning distributed systems
Designing distributed systems
 
HbaseHivePigbyRohitDubey
HbaseHivePigbyRohitDubeyHbaseHivePigbyRohitDubey
HbaseHivePigbyRohitDubey
 

Dernier

Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Disha Kariya
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...fonyou31
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsTechSoup
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAssociation for Project Management
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajanpragatimahajan3
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfagholdier
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdfQucHHunhnh
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...Sapna Thakur
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 

Dernier (20)

Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajan
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 

If this database goes down, the airport may suffer a serious catastrophe. There are many phenomena that can crash the services a database provides: disconnections or failures of the devices responsible for connectivity (wires, switches, routers, etc.); hardware and operating system failures (main memory, processors, storage devices, short circuits, unexpected outages); software malfunction of the Database Management System, cyber-attacks, undetected and unsolved programming bugs; even natural phenomena such as cyclones, tornados, earthquakes, thunderstorms and fires. Any of these situations can stop database servers from offering their services. For this reason it is necessary to design and implement a solution that provides high availability and load balancing to databases. High availability can be accomplished through replication of data: storing and maintaining one or more copies (replicas) of the database in multiple sites (servers, nodes) (Raghu Ramakrishnan 2002). This is useful to support failover, a technique that enables
automatic redirection of transactions from a failed site¹ to another that stores a copy of the data (M. Tamer Özsu 2011). A replication solution is based mainly on two decisions: where updates are performed (writing statements) and how the updates are propagated to the copies (M. Tamer Özsu 2011). Two alternatives for where updates can be performed are defined in (Jim Gray 1996). The first, called group (master-master, update anywhere), states that any site holding a copy of a data item can update it. The second, called master (master-slave), assigns each object (data item) to a master site; only the master can update the primary copy of the object, while all other replicas are read-only and wait for the propagation of the updates carried out at the master site. Gray et al. 1996 also define two ways of propagating updates: eager (synchronous) and lazy (asynchronous) replication. Eager replication applies every update to all copies as a single global transaction; as a result, either all sites commit the update or none does, which ensures that all sites hold a full copy of the data at all times. Lazy replication, on the other hand, first commits updates at the master server (the site owning the primary copy of the object) and later propagates them to the other replicas as a separate transaction for every server; as a result, part of the time not all sites hold up-to-date data. The two decisions can be combined as the table below shows.

Table 1: Combinations of where and how updates can be performed.

                      Eager propagation    Lazy propagation
  Group ownership     eager group          lazy group
  Master ownership    eager master         lazy master

Each of these combinations has its advantages and drawbacks depending on its use; therefore, the configuration must be selected for the most suitable environment or situation. The conceptual basis of database replication is available in (Bettina Kemme 2010; M. Tamer Özsu 2011), while (Salman Abdul Moiz 2011) offers a survey of database replication tools.
2. MATERIAL AND METHODS

The schema selected is synchronous multi-master replication. As previously stated, in synchronous replication every site has a full copy of the data; as a consequence, only one site is needed to process a reading statement. Load balancing can therefore be achieved, since reading transactions can be distributed and processed in parallel among the servers. This avoids service breakdowns caused by overload and also decreases response time to client applications, thanks to the parallel processing of reading statements. The main drawback of this schema is that the global transaction involving all sites slows down the communication between them. Therefore this approach is recommended for systems with a large volume of reading statements rather than heavy update traffic. Another important feature can be observed in the failover mechanism. In case the master node fails (master-slave situation), the slave promoted as the new master does not require additional operations to recover the last consistent state of the failed master; time is lost only in detecting that the master node has failed and promoting a slave as the new master.

¹ A database server that for some reason (software malfunction, network failure, an inconsistent state reached) is not able to provide services.
In the multi-master case the behavior is the same. For the critical scenario where n − 1 nodes have failed, the last one remaining online and consistent can guarantee the services by itself until the failed nodes get back online. Synchronous replication in a multi-master scheme follows a protocol named ROWA² (Read One Write All): transactions reach the commit step if and only if all sites apply the changes. As can be seen, if any of the sites is down none of the updates can be achieved (David Taniar 2008). This protocol favors consistency among nodes over availability. Although high consistency among nodes is an important feature, it is possible to keep accepting updates and restore the down sites later, so high availability is maintained throughout the process. To overcome the previous limitation the ROWA-A protocol (Read One Write All Available) was designed, which states that updates are processed by all sites that are still online (David Taniar 2008). Although this approach may compromise consistency on the servers that were not part of a transaction, once they get back online the down sites can be identified and restored, so the inconsistency is resolved.

2.1. DISTRIBUTED DATABASE ARCHITECTURE

A physical design is needed to collocate data across sites (Raghu Ramakrishnan 2002). The architecture of the distributed database system is based on the shared nothing architecture. Each node has its own processor unit (P), main memory (M), storage device (SD) and Database Management System, all running on independent operating systems interconnected through a network. Since corrupted servers can be detached instantaneously and the remaining servers can resume operations, high availability can be achieved; besides, new nodes can be added easily, which leads to a scalable solution (M. Tamer Özsu 2011; Raghu Ramakrishnan 2002). The image below shows the architecture adopted.

Figure 1: Shared Nothing Architecture.
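The commit rules of ROWA versus ROWA-A can be illustrated with a small shell sketch (not part of the paper; the replica states below are hypothetical, hard-coded values for demonstration):

```shell
#!/bin/sh
# Toy model of the ROWA vs. ROWA-A commit decision for a write transaction.
# REPLICAS holds a hypothetical up/down state for each database server.
REPLICAS="up down up"

can_commit() {
  proto=$1
  available=0
  total=0
  for state in $REPLICAS; do
    total=$((total + 1))
    if [ "$state" = up ]; then available=$((available + 1)); fi
  done
  case $proto in
    rowa)   [ "$available" -eq "$total" ] ;;  # every copy must apply the write
    rowa-a) [ "$available" -ge 1 ] ;;         # any available copy suffices
  esac
}

can_commit rowa   && echo "ROWA: commit"   || echo "ROWA: abort (a replica is down)"
can_commit rowa-a && echo "ROWA-A: commit" || echo "ROWA-A: abort"
```

With one of the three replicas down, ROWA aborts the write while ROWA-A commits it on the two available sites, which is exactly the availability trade-off described above.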
Bibliographic source: (Raghu Ramakrishnan 2002).

2.2. DATABASE REPLICATION ARCHITECTURE

The replication architecture is based on middleware, specifically replicated centralized middleware replication (Bettina Kemme 2010). This architecture removes the single point of failure, since at least one standby middleware is kept waiting in case the primary fails; in that case the standby middleware becomes primary and resumes operations. The middleware is placed between clients and servers. It receives all client requests and forwards them to the relevant replicas, and the answers of the database servers are relayed by the middleware to the corresponding clients. Writing statements (typically INSERT, UPDATE, DELETE) are sent to all sites, which yields a fully replicated system. Because of this, a reading statement (typically SELECT) is processed by only one site. Load balancing can thus be achieved, as well as high availability.

² In the ROWA protocol, with at least one site down and one site online, data can still be served but it cannot be updated, and from then on the database is considered unavailable.
Figure 2: Replicated Centralized Middleware Architecture. Bibliographic source: (Bettina Kemme 2010).

2.3. TOOLS

Taking these architectures into account, some tools are needed to implement them; this section discusses that topic. Pgpool-II is used as the PostgreSQL database middleware. It provides connection pooling³, database replication and load balancing. Besides, pgpool-II implements the ROWA-A protocol, so when a failover occurs the online database servers resume the operations of the applications. This is an important feature because it assures availability of database services even when at most n − 1 servers have failed, with n the total number of database servers. As can be seen, the more database servers are used, the higher the level of availability of the database services. At the same time, the more middlewares (instances of pgpool-II running on different sites) are used, the greater the certainty that services will still be running after the primary (or any standby promoted as primary) fails. In order to provide high availability to the pgpool-II and PostgreSQL processes and the network services, Linux HA (High Availability) was used. Linux HA is the origin of two projects, Heartbeat and Pacemaker. Heartbeat provides the messaging layer between all nodes involved, while Pacemaker is a CRM (cluster resource manager) whose purpose is to perform complex, configurable operations to provide high availability to resources (processes, daemons, services) (Team 2015). This involves restarting a stopped resource and maintaining a set of rules or restrictions between resources. For a detailed description of how to install and configure these tools, consult (Group 2014) in the case of pgpool-II, (Project 2014) for Heartbeat and (Team 2015) for Pacemaker.

2.4 ARCHITECTURE OF THE APPROACH

After the previous decisions were made, a more general architecture is presented (Figure 3).
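As a sketch of how Pacemaker can be told to monitor a PostgreSQL instance, a crm-shell fragment along the following lines could be used (resource names, timeouts and intervals are illustrative assumptions, not taken from the paper; the stock ocf:heartbeat:pgsql agent is one option for this role):

```
# Illustrative crm configure fragment: monitor a PostgreSQL instance
# with the ocf:heartbeat:pgsql resource agent (values are assumptions).
primitive postgresql ocf:heartbeat:pgsql \
    op start timeout=120s interval=0 \
    op monitor timeout=30s interval=30s \
    op stop timeout=120s interval=0
# run one copy of the resource on every database node
clone cl_postgresql postgresql
```

A similar primitive would be declared for the pgpool-II process at the middleware level, with constraints tying it to the virtual IP that clients connect to.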
As can be seen, the replicated centralized middleware is encapsulated at the middleware level, whereas the shared nothing architecture sits at the database server level. At a higher level, monitoring of processes is carried out at both the middleware level and the database server level through Linux HA.

³ “Pgpool-II maintains established connections to the PostgreSQL servers, and reuses them whenever a new connection with the same properties (i.e. user name, database, protocol version) comes in. It reduces the connection overhead, and improves system's overall throughput.” Bibliographic source: GROUP, P. G. D. Pgpool-II Manual Page. In., 2014, vol. 2015.
Connections are represented by arrows and should be fast enough to avoid a bottleneck in the messaging and communication process.

Figure 3: General architecture.

An interesting feature of this architecture is that it is not restricted to the tools used in this paper; different tools can implement it, making the approach operating system independent as well.

3. RELATED WORK

Previous works have implemented the replicated centralized middleware architecture (Felix Noel Abelardo Santana and Méndez Roldán; Partio 2007; Pupo 2013; Rodríguez 2011; Santana 2010) through pgpool-II. Some of them use an asynchronous master-slave scheme (Felix Noel Abelardo Santana and Méndez Roldán; Partio 2007), while others are based on a synchronous multi-master replication scheme (Pupo 2013; Rodríguez 2011; Santana 2010). However, none of these solutions has planned for the restoration of a corrupted database. This is known as online recovery: the capability of restoring a crashed database while the online databases are still servicing clients. Neither had the monitoring of PostgreSQL instances been designed. Pgpool-II provides a two-phase mechanism to achieve database restoration. The first phase is triggered when a database is detected as failed, executing a script that performs a PostgreSQL physical backup from an online master to the failed site. During this phase the available sites keep processing client requests, saving the changes into WAL (Write-Ahead Log) files. Once the physical backup is done, the first stage is finished. At the start of the second stage clients are disconnected and a second script is executed, which synchronizes the WAL files generated during the first phase to the failed site, ensuring that this site is fully recovered. After this the node is attached to the system and database operations are resumed.
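The first-phase script just described can be sketched in Bash roughly as follows. Host names, paths and the RUN wrapper are illustrative assumptions rather than the paper's actual script; with the default RUN=echo the sketch only prints the commands it would run, since the real version must execute on a configured PostgreSQL master:

```shell
#!/usr/bin/env bash
# Sketch of the 1st-stage recovery script: take a physical base backup on
# the online master and ship it to the failed node. Hosts and paths are
# hypothetical placeholders; RUN=echo makes this a dry run.
RUN=${RUN:-echo}
PGDATA=${PGDATA:-/var/lib/postgresql/data}
FAILED_HOST=${FAILED_HOST:-node2}

base_backup() {
  stamp=$(date +%m-%d-%Y)   # Backup-mm-dd-yyyy naming used in the paper
  $RUN psql -U postgres -c "SELECT pg_start_backup('online-recovery');"
  $RUN tar -czf "/tmp/Backup-$stamp.tar.gz" "$PGDATA"
  $RUN psql -U postgres -c "SELECT pg_stop_backup();"
  $RUN scp "/tmp/Backup-$stamp.tar.gz" "$FAILED_HOST:/tmp/"
  $RUN ssh "$FAILED_HOST" "tar -xzf /tmp/Backup-$stamp.tar.gz -C /"
}

base_backup
```

Between pg_start_backup() and pg_stop_backup() the online sites keep serving clients, which is what lets the first stage run without disconnecting them.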
Taking advantage of Linux, the scripts triggered during online recovery were developed in Bash, a native interpreter of Linux based Operating Systems, so no extra software is needed. The restoration of databases is based on the Point in Time Recovery feature of PostgreSQL, with pgpool-II as the event trigger mechanism and Bash as the executor of predefined routines encapsulated in multiple scripts. The recipe adopted for implementing this process was extracted from (Simon Riggs 2010) and follows the steps described in Algorithm 1.
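Since Algorithm 2 is only described abstractly, a minimal Bash sketch of what such a first-stage script (copy_base_backup) could look like is shown below. The argument order, the postgres OS user, and the use of scp/ssh for shipping the archive are assumptions made for this sketch, not the paper's exact implementation:

```shell
#!/usr/bin/env bash
# copy_base_backup -- hypothetical sketch of the 1st-stage recovery script.
# Assumed pgpool-II arguments: $1 = master PGDATA, $2 = failed host, $3 = failed PGDATA.
set -eu

backup_name() {
    # Backup-mm-dd-yyyy.tar.gz, the naming scheme used in Algorithm 2
    echo "Backup-$(date +%m-%d-%Y).tar.gz"
}

run_first_stage() {
    local master_data=$1 failed_host=$2 failed_data=$3
    local archive
    archive=$(backup_name)

    # 1. Put PostgreSQL into backup mode; WAL keeps accumulating meanwhile
    psql -U postgres -c "SELECT pg_start_backup('online-recovery', true);" postgres

    # 2. Compress the live data directory and ship it to the failed node
    tar -C "$master_data" -czf "/tmp/$archive" .
    scp "/tmp/$archive" "postgres@$failed_host:/tmp/"
    ssh "postgres@$failed_host" \
        "rm -rf '$failed_data'/* && tar -C '$failed_data' -xzf '/tmp/$archive'"

    # 3. Leave backup mode; the 2nd-stage script will sync the remaining WAL
    psql -U postgres -c "SELECT pg_stop_backup();" postgres
}

# pgpool-II invokes the script directly with the three arguments
if [ "$#" -eq 3 ]; then
    run_first_stage "$@"
fi
```

The guard at the bottom keeps the functions inert unless pgpool-II supplies all three arguments, which also makes the script safe to source for inspection.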
In order to detail the whole recovery process, Algorithms 2 and 3 present the procedures implemented in the scripts. Let failed node be the crashed database server targeted to be recovered, master node the database server from which the recovery process is carried out and the scripts therefore executed, PGDATA the PostgreSQL database directory, and Backup-mm-dd-yyyy.tar.gz the compressed file containing the PGDATA directory, with the date format mm-dd-yyyy for month, day and year respectively.

Once the 1st stage is accomplished and there are no active client connections, the routine programmed in Algorithm 3 is executed in the 2nd stage of the recovery process. Its responsibility is to synchronize the changes made during the 1st stage (e.g. Algorithm 2). At the end of Algorithm 3, the pgpool_remote_start script is triggered. Its function is the initialization of the postgresql resource (e.g. the PostgreSQL process) so that Pacemaker can monitor that process. After this, the recovery process is accomplished, the failed node is fully recovered, and clients can resume connections to the middleware level.

The PostgreSQL servers are where the Bash scripts are executed, through stored procedures invoked from pgpool-II. Because of this, pgpool-II needs to be set with recovery_user and recovery_password in the pgpool.conf configuration file in order to connect to the databases and trigger the recovery stored procedure. It must also be provided with the paths of the Bash scripts relative to the PGDATA directory (e.g. the location where the Bash scripts are placed on the PostgreSQL database servers). These are recovery_1st_stage_command and recovery_2nd_stage_command, for copy_base_backup and pgpool_recovery_pitr respectively. The script pgpool_remote_start does not need to be set, only placed in the PGDATA directory. For creating the recovery stored procedures please consult (Group 2014).
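The settings just described can be summarized in a hypothetical pgpool.conf excerpt. The parameter names are pgpool-II's own; the credentials are placeholders, and the script names match the ones used in this paper:

```
# pgpool.conf -- online recovery settings (illustrative values)
recovery_user = 'postgres'        # PostgreSQL user that triggers the recovery procedure
recovery_password = 'secret'      # its password
recovery_1st_stage_command = 'copy_base_backup'      # script in PGDATA, 1st stage
recovery_2nd_stage_command = 'pgpool_recovery_pitr'  # script in PGDATA, 2nd stage
```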
Online recovery can be triggered manually, allowing not only the restoration of a database server but also the addition of new servers, since it carries out a full recovery. Moreover, online recovery is designed to be executed automatically in order to avoid database manager intervention.
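For the manual variant, pgpool-II ships the pcp_recovery_node utility, which triggers the two-stage procedure for a given backend node. A hypothetical invocation is shown below; the host name, pcp credentials and node id are placeholders, and the positional argument style shown corresponds to the pgpool-II 3.x series used at the time of writing (later versions use option flags instead):

```
# Recover backend node 1 via the pcp port (default 9898), waiting up to 90 s
pcp_recovery_node 90 middleware.example.org 9898 pcp_admin pcp_secret 1
```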
None of the surveyed works that use Heartbeat make use of Pacemaker either (Felix Noel Abelardo Santana and Méndez Roldán; Pupo 2013; Rodríguez 2011; Santana 2010). Although Heartbeat contains a CRM (cluster resource manager) capable of providing high availability, it is a primitive version of Pacemaker that is incapable of complex operations such as location and colocation constraints on resources. Without these it is not possible to guarantee high availability of the PostgreSQL process and the network services on all sites. Moreover, pgpool-II could be running and expecting client connections on one server while the database service's IP address is expecting database client connections on another server, which could lead to unavailability of the database services. To overcome this limitation, a colocation constraint through Pacemaker can be used.

As stated previously, process monitoring is carried out across the system through Heartbeat/Pacemaker, so it needs to be installed on every server used. The LSB class init scripts pgpool2, postgresql and networking must be added as resources in order to monitor the pgpool-II and PostgreSQL processes and the networking services. The OCF class script IPaddr2 must also be added as a resource, taking as argument the IP address (or subnet address) to publish. Once IPaddr2 is running, it publishes this address so that clients can connect to the middleware level.

As can be seen, some constraints must be defined in order to control resource behavior. pgpool2 and IPaddr2 can only be executed at the middleware level, whereas postgresql runs at the database server level. These are known as location constraints. On the other hand, pgpool2 and IPaddr2 should be running on the same middleware (i.e.
the one that is active); otherwise pgpool-II could be running and expecting client connections on one middleware while IPaddr2 publishes the database service's IP address on another middleware, which could lead to unavailability of the database services.

3.1. RESULTS AND BRIEF DISCUSSION

Shared nothing was selected as the distributed database architecture, providing high availability and scalability, whereas replicated centralized middleware is used with synchronous multi-master replication in order to keep the database servers fully synchronized. Several tools were used to accomplish this. Pgpool-II was used as the middleware responsible for data replication, as load balancer, and as framework for online recovery. On the other hand, Heartbeat and Pacemaker were used for monitoring all the processes involved in the solution, such as pgpool-II, the PostgreSQL instances and IPaddr2 (e.g. the database service's IP address), and for enforcing the restrictions between all these processes. Multiple scripts were developed in Bash in order to achieve the restoration of corrupted databases. Such restoration can be done manually and/or automatically, depending on the requirements specified.

4. CONCLUSIONS

1. As a result of this investigation, a solution was designed and implemented that provides high availability and load balancing to PostgreSQL databases. Applications can therefore count on a more reliable PostgreSQL database service.

2. Having multiple sites with a full copy of the data allows not only a failover mechanism but also load balancing among the servers. Since every server has an entire and consistent copy of the data, each of them can process any kind of reading statement. This avoids a breakdown of the service due to overload.

3. Using more than one middleware (one active, at least one standby) removes the single point of failure, thus high availability can be obtained at the middleware level.

4.
Since the distributed database architecture is shared nothing, many servers can be added or removed in a few configuration steps; because of this, the approach presented is flexible and scalable.

5. All the tools selected are open source and developed for Linux, thus the solution of this investigation is intended only for Linux based Operating Systems.
6. Since global transactions (of writing statements) slow down connectivity speed, this approach is more suitable and recommended for workloads with a large amount of reading statements rather than writing statements.

5. FUTURE WORK

1. Designing and carrying out several experiments in order to measure the performance of heavy reading statements. Moreover, obtaining a mathematical model that describes how performance depends on the number of database servers. Such a model could predict performance so that more precise decisions can be made for scalable solutions.

2. Designing and implementing a solution intended for a large amount of writing statements. Due to the fact that no solution fits every kind of problem, heavy writing statements should also be explored in order to produce an approach in that sense.

3. The approach presented in this paper suffers from the fact that it is only intended for Linux based Operating Systems; thus, it is necessary to extend this solution to other Operating Systems.

REFERENCES

[1] Bettina Kemme, R. J. P. A. M. P.-M. Database Replication. Edited by M. T. Özsu. Morgan & Claypool, 2010. ISBN 9781608453825.
[2] David Taniar, C. H. C. L., Wenny Rahayu, Sushant Goel. High-Performance Parallel Database Processing and Grid Databases. Edited by A. Zamoya. New Jersey: Wiley, 2008. 574 p.
[3] Felix Noel Abelardo Santana, P. Y. P. P., Ernesto Ahmed Mederos, Javier Menéndez Rizo, Isuel and H. P. P. Méndez Roldán. Databases Cluster PostgreSQL 9.0 High Performance, 7.
[4] Group, P. G. D. Pgpool-II Manual Page. 2014. http://www.pgpool.net/docs/latest/pgpool-en.html
[5] Jim Gray, P. H., Patrick O'Neil, Dennis Sasha. The Dangers of Replication and a Solution. In Proceedings of the ACM SIGMOD Int. Conference on Management of Data. Montreal, Canada: ACM, 1996, p. 10.
[6] M. Tamer Özsu, P. V. Principles of Distributed Database Systems.
Springer, 2011. 866 p. ISBN 978-1-4419-8833-1.
[7] Partio, M. Evaluation of PostgreSQL Replication and Load Balancing Implementations. 2007.
[8] Project, L. H. Linux HA User's Guide. 2014. http://www.linux-ha.org/doc/users-guide/users-guide.html
[9] Pupo, I. R. Base de Datos de Alta Disponibilidad para la Plataforma VideoWeb 2.0. University of Informatics Science, 2013.
[10] Raghu Ramakrishnan, J. G. Database Management Systems. McGraw-Hill, 2002. 931 p. ISBN 0-07-246535-2.
[11] Rodríguez, M. R. R. Solución de Clúster de Bases de Datos para Sistemas que Gestionan Grandes Volúmenes de Datos utilizando PostgreSQL. Universidad de las Ciencias Informáticas, 2011.
[12] Salman Abdul Moiz, S. P., Venkataswamy G, Supriya N. Pal. Database Replication: A Survey of Open Source and Commercial Tools. International Journal of Computer Applications, 2011, 13(6), 8.
[13] Santana, Y. O. Clúster de Altas Prestaciones para Medianas y Pequeñas Bases de Datos que utilizan a PostgreSQL como Sistema de Gestión de Bases de Datos. Universidad de las Ciencias Informáticas, 2010.
[14] Simon Riggs, H. K. PostgreSQL 9 Administration Cookbook. Birmingham, UK: Packt Publishing Ltd., 2010. ISBN 978-1-849510-28-8.
[15] Team, P. D. Pacemaker Main Page. 2015. http://clusterlabs.org/wiki/Main_Page