Deploying MongoDB
sharded clusters easily
with Terraform and
Ansible
All Things Open, October 2021
Ivan Groenewold
Agenda
● Terraform 101
● Provisioning in GCP
● Ansible 101
● Deploying MongoDB
● Q&A
About me
● @igroenew
● Architect at Percona
● Based in Argentina
MongoDB sharding in a nutshell
Image © MongoDB Inc.
Target infrastructure
The plan
● Define the topology
● Provision the infrastructure using Terraform
○ instances, disks, network, buckets, etc.
● Install the software with Ansible
○ MongoDB, monitoring & backup solution
Terraform 101
● Infrastructure-as-Code
● Open Source
● Works with multiple resources and providers
● Declarative approach - state what you want
● Infrastructure converges to the desired state
Terraform syntax
● Based on the HashiCorp Configuration Language (HCL)
● Basic constructs:
○ Arguments
name = "my_instance"
○ Blocks
resource "google_compute_instance" "my_instance" {
  …
}
(block type, label 1, label 2 on the first line; the body goes between the braces)
Defining variables
variable "data_disk_type" {
default = "pd-standard"
}
variable "my_instance_type" {
default = "e2-standard-2"
description = "instance type"
}
variable "my_volume_size" {
default = "100"
description = "storage size"
}
variable "centos_amis" {
description = "CentOS AMIs on each region"
default = {
northamerica-northeast1 = "centos-8-v20210316"
northamerica-northeast2 = "centos-8-v20210316"
}
}
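The centos_amis variable is a map keyed by region, which Terraform resolves with lookup(map, key). A rough Python analogy of that resolution (the lookup helper here is illustrative, not Terraform itself):

```python
# Python analogy of Terraform's lookup(var.centos_amis, var.region):
# a map variable resolved by a region key, with an optional default.
centos_amis = {
    "northamerica-northeast1": "centos-8-v20210316",
    "northamerica-northeast2": "centos-8-v20210316",
}

def lookup(mapping, key, default=None):
    """Mimic Terraform's lookup(): the map value for key, or a default."""
    return mapping.get(key, default)

image = lookup(centos_amis, "northamerica-northeast1")
print(image)  # centos-8-v20210316
```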
Provisioning in GCP
resource "google_compute_disk" "cfg_disk" {
name = "mongo-cfg0-data"
type = var.data_disk_type
size = var.my_volume_size
zone = var.my_zone
}
resource "google_compute_instance" "cfg" {
name = "my_instance"
machine_type = var.my_instance_type
tags = ["mongodb-cfg"]
zone = var.my_zone
boot_disk {
initialize_params {
image = lookup(var.centos_amis, var.region)
}
}
attached_disk {
source = google_compute_disk.cfg_disk.name
}
network_interface {
network = google_compute_network.vpc-network.id
subnetwork = google_compute_subnetwork.vpc-subnet.id
}
provision a disk
provision an instance
Provisioning in GCP (2)
resource "google_compute_disk" "cfg_disk" {
name = "mongo-cfg0-data"
type = var.data_disk_type
size = var.my_volume_size
zone = var.my_zone
}
resource "google_compute_instance" "cfg" {
name = "my_instance"
machine_type = var.my_instance_type
tags = ["mongodb-cfg"]
zone = var.my_zone
boot_disk {
initialize_params {
image = lookup(var.centos_amis, var.region)
}
}
attached_disk {
source = google_compute_disk.cfg_disk.name
}
network_interface {
network = google_compute_network.vpc-network.id
subnetwork = google_compute_subnetwork.vpc-subnet.id
}
nested blocks
call lookup function
Working with Terraform
● terraform init
○ Initialize the working directory
● terraform plan
○ print the action plan
● terraform apply
○ carry out the actions
● terraform destroy
○ remove all managed resources
Working with Terraform (2)
Working with Terraform (3)
● What is a MongoDB server?
○ Instance + Persistent disk (except mongos servers)
○ Firewall rules
○ Init scripts
■ mount the volumes, OS tweaks, etc
Working with Terraform (4)
● Create .tf files for each component
■ mongos router
■ mongod shard
■ Config server
■ anything else?
● Use a separate variables file
Provisioning the infrastructure
● Servers
○ cfg-server.tf
○ shard-server.tf
○ mongos-server.tf
○ pmm-server.tf
● variables.tf
● network.tf
● backup.tf
Configuring the network
● Define the region
● Configure a VPC
● Define the subnets
Configuring the network (2)
data "google_compute_zones" "available" {
status = "UP"
}
resource "google_compute_network" "vpc-network" {
name = "my-vpc"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "vpc-subnet" {
name = "mongodb-subnet"
ip_cidr_range = "10.1.0.0/16"
region = var.region
network = google_compute_network.vpc-network.id
}
query data source
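The subnet above carves a /16 out of private address space. A quick stdlib check of what that CIDR actually covers, using Python's ipaddress module:

```python
import ipaddress

# Inspect the CIDR range used for the mongodb-subnet above.
subnet = ipaddress.ip_network("10.1.0.0/16")
print(subnet.num_addresses)    # 65536 addresses in a /16
print(subnet.network_address)  # 10.1.0.0
print(ipaddress.ip_address("10.1.200.7") in subnet)  # True
```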
Creating the instances
resource "google_compute_instance" "server" {
count = 6
name = "server-${count.index}"
zone = data.google_compute_zones.available.names[count.index % 3]
● Use count.index to spread the instances across AZs
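The count.index % 3 expression round-robins the six instances over the three available zones. A sketch of the resulting assignment (zone names are illustrative):

```python
# Round-robin six instances over three zones, as count.index % 3 does.
zones = ["us-east1-b", "us-east1-c", "us-east1-d"]  # illustrative zone names

assignment = {f"server-{i}": zones[i % 3] for i in range(6)}
for name, zone in assignment.items():
    print(name, "->", zone)
# server-0 and server-3 share a zone, as do server-1/server-4, etc.
```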
Configuring network access
resource "google_compute_firewall" "mongodb-cfgsvr-firewall" {
name = "mongodb-cfgsvr-firewall"
network = google_compute_network.vpc-network.name
direction = "INGRESS"
target_tags = ["mongodb-cfg"]
allow {
protocol = "tcp"
ports = ["22", "27019"]
}
}
Preparing the backup infrastructure
● Create a Cloud Storage bucket
● Allow the instances to read/write from it
● Objects lifecycle policy
Preparing the backup infrastructure (2)
● Steps are cloud-specific
● For GCP we need:
○ Cloud Storage bucket
○ Service account
○ HMAC key-pair for the service account
○ Grant storage-admin role to the service account
Preparing the backup infrastructure (3)
resource "google_storage_bucket" "mongo-backups" {
name = "mongo-backups"
location = var.region
force_destroy = true
uniform_bucket_level_access = true
}
resource "google_service_account" "mongo-backup-service-account" {
account_id = "mongo-backup-service-account"
display_name = "Mongo Backup Service Account"
}
resource "google_storage_hmac_key" "mongo-backup-service-account" {
service_account_email = google_service_account.mongo-backup-service-account.email
}
resource "google_storage_bucket_iam_binding" "binding" {
bucket = google_storage_bucket.mongo-backups.name
role = "roles/storage.admin"
members = [
"serviceAccount:${google_service_account.mongo-backup-service-account.email}",
]
}
Monitoring
● PMM client
○ run locally on each server
○ pushes metrics
● PMM server
○ Performance metrics history
○ Query analytics
○ Integrated alerting
○ Integrated backups (WIP)
https://pmmdemo.percona.com
What’s next?
● We have the servers
● We have the network configured
● We have the backup infrastructure
● We need to deploy the software
Ansible 101
● Automation engine
● SSH-based
● Open source
● Web interface: AWX project
Why Ansible?
● Easy to deploy
● No agent required
● No firewall rules required
● YAML syntax
● Secure
Installing Ansible
● Control machine
○ Can be your laptop
○ Acts as the Ansible “server”
○ Only needed when running Ansible code
● Managed nodes
Inventory
● Inventory options
○ Static
■ INI or YAML format
○ Dynamic
■ Scripts available for most cloud providers
■ Write your own plugin
● The default inventory is /etc/ansible/hosts
Inventory (2)
● Static inventory example
[webservers]
www.myhost.com
www.example.com
[databases]
db-[a:f].example.com
[atlanta]
dba.example.com http_port=80 maxRequestsPerChild=808
[atlanta:vars]
ntp_server=ntp.atlanta.example.com
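The db-[a:f].example.com line is Ansible's alphabetic host-range shorthand, which expands to six hosts. A minimal sketch of that expansion (expand_range is a toy helper, not Ansible's parser):

```python
import string

def expand_range(pattern):
    """Expand a simple Ansible-style [a:f] alphabetic host range."""
    prefix = pattern.split("[")[0]
    suffix = pattern.split("]")[1]
    start, end = pattern.split("[")[1].split("]")[0].split(":")
    letters = string.ascii_lowercase
    return [prefix + c + suffix
            for c in letters[letters.index(start):letters.index(end) + 1]]

hosts = expand_range("db-[a:f].example.com")
print(hosts)  # ['db-a.example.com', ..., 'db-f.example.com']
```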
Modules
● Ansible building blocks
● Should be idempotent
Examples:
$ ansible example -m ping
www.example.com | SUCCESS => {
"changed": false,
"ping": "pong"
}
$ ansible example -m service -a "name=httpd state=started"
Playbooks
● Orchestrate steps
● Composed of one or more plays
● Each play runs a number of tasks in order on a group of servers
○ e.g. call a module to do something
● YAML format
Playbooks (2)
● Inventory example:
[webservers]
web[01:10].example.com
[databases]
db[01:10].example.com
● Playbook example:
---
- hosts: webservers                                  # play 1
  tasks:
  - name: ensure apache is at the latest version     # task 1
    yum:
      name: httpd
      state: latest
  - name: ensure apache is started                   # task 2
    service:
      name: httpd
      state: started
- hosts: databases                                   # play 2
  tasks:
  - name: ensure postgresql is at the latest version
    yum:
      name: postgresql
      state: latest
Playbooks (3)
Playbooks (4)
Play 1:
---
- hosts: webservers            # host group as per inventory
  tasks:
  - name: ensure apache is at the latest version
    yum:                       # module
      name: httpd
      state: latest
...
Playbooks (5)
● Run with the ansible-playbook command
$ ansible-playbook my_pb.yml [--limit "*example.com"]
Playbooks (6)
PLAY [all]
***************************************************************************
TASK [check if specified os user exists]
***************************************************************************
changed: [mysql1]
ok: [mysql2]
PLAY RECAP
***************************************************************************
mysql1 : ok=1 changed=1 unreachable=0 failed=0
mysql2 : ok=1 changed=0 unreachable=0 failed=0
Variables
● Simple variables
foo: bar
● List
datacenter:
- us-east
- us-west
● Dictionary
foo:
field1: one
field2: two
Automating MongoDB deployment
1. Create an Ansible inventory file
2. Edit the variables file
3. Run the ansible-playbook
Automating MongoDB deployment
1. Create an Ansible inventory file
2. Edit the variables file
3. Run the ansible-playbook
Inventory file for a sharded cluster
● One group per shard (“shardN”)
● A group for the config servers (“cfg”)
● A group for the routers (“mongos”)
[shard1]
host1.example.com mongodb_primary=True
host2.example.com
host3.example.com
[shard2]
host4.example.com mongodb_primary=True
host5.example.com
host6.example.com
[cfg]
host7.example.com mongodb_primary=True
host8.example.com
host9.example.com
[mongos]
host10.example.com
Inventory file for a sharded cluster (2)
Generating the inventory file with Terraform
● Use the local_file Terraform resource
● Use templates to dynamically create the groups
● How to generate an Ansible inventory from Terraform
Automating MongoDB deployment
1. Create an Ansible inventory file
2. Edit the variables file
3. Run the ansible-playbook
Variables
● Copy the MongoDB files / Install from repository
● Ports for mongod, mongos
● Define the paths for data, logs, etc.
● Authentication mechanism
● Encryption
● Backup
Things we need done
● Install packages
● Create config files
● Start/stop processes
● Initialize replica sets
● Create users
● Configure backup job
● Add hosts to monitoring
● Add the shards to the cluster
Installing packages
packages:
- percona-server-mongodb
- percona-backup-mongodb
- pmm2-client

- name: install rpm from repo
  package:
    name: "{{ item }}"             # dynamic variable
    state: present
  with_items: "{{ packages }}"     # loop
Creating configuration files
● Generate files dynamically
● Include/exclude different sections
● Variables are not enough
● Solution: Ansible Templates
Ansible Templates
● Built-in module
● Create file with dynamic content
● Jinja2 engine
● Store them in /templates subdirectory
Creating templated config files
mongod.conf.j2 template:
...
security:
{% if use_tls %}
  clusterAuthMode: x509
{% else %}
  keyFile: {{ keyFile_loc }}
{% endif %}
...
Variables file:
use_tls: false
keyFile_loc: /var/lib/mongo/rskeyfile
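The template engine simply picks one branch based on use_tls. A minimal Python sketch of the same decision (just the branching logic, not the real Jinja2 engine):

```python
# Render the security section, given the variables file above.
use_tls = False
keyFile_loc = "/var/lib/mongo/rskeyfile"

if use_tls:
    security = "security:\n  clusterAuthMode: x509"
else:
    security = "security:\n  keyFile: " + keyFile_loc

print(security)
```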
Creating templated config files (2)
Task:
- name: copy mongod.conf
  become: yes
  template:
    src: templates/mongod.conf.j2
    dest: /etc/mongod.conf
    owner: root
    group: root
    mode: 0644
Starting/stopping processes
- name: start mongod on rs member
  become: yes
  service:
    name: mongod
    state: started
[cfg]
host1.example.com mongodb_primary=True
host2.example.com
host3.example.com
[shard1]
host4.example.com mongodb_primary=True
host5.example.com
host6.example.com
[shard2]
host7.example.com mongodb_primary=True
host8.example.com
host9.example.com
[mongos]
host10.example.com
Initialize replica sets
● rs.initiate()
Our inventory file:
group_names array
for "host1.example.com":
group_names = ["cfg"]
Initialize replica sets (2)
init-rs.js.j2:
rs.initiate(
{
  _id: "{{ group_names[0] }}",
  members: [
  {% for h in groups[group_names[0]] %}
    { _id: {{ loop.index0 }},
      host: "{{ h }}:{% if group_names[0].startswith('shard') %}{{ shard_port }}{% else %}{{ cfgserver_port }}{% endif %}",
      priority: 1 }{% if not loop.last %},{% endif %}
  {% endfor %}
  ] });
the first group a host appears in
all hosts part of the first group
Initialize replica sets (3)
init-rs.js:
rs.initiate(
{
  _id: "cfg",
  members: [
    { _id: 0, host: "host1.example.com:27019",
      priority: 1 },
    ...
  ] });
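The template loop can be mirrored in plain Python to see what it produces for the cfg group of the example inventory. This is a sketch of the template logic, not the real renderer; the port values (27019 for config servers, 27018 for shard members) are assumptions based on MongoDB's conventional ports:

```python
# Mirror the init-rs.js.j2 loop for the "cfg" group of the example inventory.
# Ports are assumed conventional values, not taken from a real variables file.
groups = {"cfg": ["host1.example.com", "host2.example.com", "host3.example.com"]}
group = "cfg"  # group_names[0] for these hosts
shard_port, cfgserver_port = 27018, 27019

# Same branch as the {% if ... startswith('shard') %} in the template.
port = shard_port if group.startswith("shard") else cfgserver_port
members = [{"_id": i, "host": f"{h}:{port}", "priority": 1}
           for i, h in enumerate(groups[group])]
init_doc = {"_id": group, "members": members}
print(init_doc["members"][0])
```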
Initialize replica sets (4)
- name: render the template for the init command
  template:
    src: templates/init-rs.js.j2
    dest: /tmp/init-rs.js
    mode: 0644
  when: mongodb_primary is defined and mongodb_primary

- name: run the init command for the replica set
  shell: mongo --host localhost --port {{ mongo_port }} < /tmp/init-rs.js
  when: mongodb_primary is defined and mongodb_primary

(runs only once per replica set)
Create users
createUser.js.j2:
db.getSiblingDB("admin").createUser({
user: "{{ mongodb_pmm_user }}",
pwd: "{{ mongodb_pmm_user_pwd }}",
roles: [
{ role: "explainRole", db: "admin" },
{ role: "clusterMonitor", db: "admin" },
{ role: "read", db: "local" }
]
});
Create users (2)
- name: prepare the command to create pmm user
  template:
    src: templates/createUser.js.j2
    dest: /tmp/createUser.js
    mode: 0644
  when: mongodb_primary is defined and mongodb_primary

- name: run the command to create the user
  shell: mongo admin -u {{ root_user }} -p{{ mongo_root_password }} --port {{ mongo_port }} < /tmp/createUser.js
  when: mongodb_primary is defined and mongodb_primary
Configure backup
- name: set up backup cron job
  cron:
    name: pbm backup
    minute: 3
    hour: 0
    user: pbm
    job: /usr/bin/pbm backup --mongodb-uri "mongodb://{{ pbmuser }}:{{ pbmpwd }}@{{ ansible_fqdn }}:{{ mongo_port }}"
    cron_file: pbm_daily_backup
Configure monitoring
- name: point pmm-client to the PMM server
  become: true
  shell: pmm-admin config --server-url=https://{{ pmm_server_user }}:{{ pmm_server_pwd }}@{{ pmm_server }}:443 --server-insecure-tls --force

- name: add mongodb metrics exporter
  become: true
  shell: pmm-admin add mongodb --username={{ mongodb_pmm_user }} --password={{ mongodb_pmm_user_pwd }} --host={{ ansible_fqdn }} --port={{ cfg_server_port if ('cfg' in group_names) else shard_port }}
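The inline if in the --port argument picks the config-server port for cfg members and the shard port for everyone else. The same expression in Python (port values are illustrative assumptions):

```python
# Port selection mirroring the inline Jinja "if" in the pmm-admin task.
cfg_server_port, shard_port = 27019, 27018  # assumed typical values

def exporter_port(group_names):
    """Config-server port for cfg members, shard port otherwise."""
    return cfg_server_port if "cfg" in group_names else shard_port

print(exporter_port(["cfg"]))     # 27019
print(exporter_port(["shard1"]))  # 27018
```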
Add the shards
- name: add the shards
  hosts: shard*
  tasks:
  - name: add the shards to the cluster
    shell: mongo admin -uroot -p{{ mongo_root_password }} --port {{ mongos_port }} --eval "sh.addShard('{{ group_names[0] }}/{{ ansible_fqdn }}:{{ shard_port }}')"
    delegate_to: "{{ groups.mongos | first }}"
    when: mongodb_primary is defined and mongodb_primary
Automating MongoDB deployment
1. Create an Ansible inventory file
2. Edit the variables file
3. Run the ansible-playbook
ansible-playbook main.yml -i inventory.ini --ask-become-pass
Putting it all together
● Define the topology
● Create the infrastructure using Terraform
● Generate the inventory file for Ansible
● Install the software with Ansible
Putting it all together (2)
● Define the variables
○ variables.tf
○ Ansible vars file
● Run terraform apply
● Run ansible-playbook
Benefits
● Define a process
● Save time
● Reuse code
● Streamline deployments
● Ensure resources are monitored (and backed up)
Q&A
Thank you for attending!
https://www.percona.com/blog/author/ivan-groenewold/
  • 6. The plan ● Define the topology ● Provision the infrastructure using Terraform ○ instances, disks, network, buckets, etc. ● Install the software with Ansible ○ MongoDB, monitoring & backup solution
  • 7. Terraform 101 ● Infrastructure-as-Code ● Open Source ● Works with multiple resources and providers ● Declarative approach - state what you want ● Infrastructure converges to the desired state
  • 8. Terraform syntax ● Based on HashiCorp Configuration Language (HCL) ● Basic constructs: ○ Arguments name = "my_instance" ○ Blocks resource "google_compute_instance" "my_instance" { … } type label 1 label 2 body
  • 9. Defining variables variable "data_disk_type" { default = "pd-standard" } variable "my_instance_type" { default = "e2-standard-2" description = "instance type" } variable "my_volume_size" { default = "100" description = "storage size" } variable "centos_amis" { description = "CentOS AMIs on each region" default = { northamerica-northeast1 = "centos-8-v20210316" northamerica-northeast2 = "centos-8-v20210316" } }
  • 10. Provisioning in GCP resource "google_compute_disk" "cfg_disk" { name = "mongo-cfg0-data" type = var.data_disk_type size = var.my_volume_size zone = var.my_zone } resource "google_compute_instance" "cfg" { name = "my_instance" machine_type = var.my_instance_type tags = ["mongodb-cfg"] zone = var.my_zone boot_disk { initialize_params { image = lookup(var.centos_amis, var.region) } } attached_disk { source = google_compute_disk.cfg_disk.name } network_interface { network = google_compute_network.vpc-network.id subnetwork = google_compute_subnetwork.vpc-subnet.id } provision a disk provision an instance
  • 11. Provisioning in GCP (2) resource "google_compute_disk" "cfg_disk" { name = "mongo-cfg0-data" type = var.data_disk_type size = var.my_volume_size zone = var.my_zone } resource "google_compute_instance" "cfg" { name = "my_instance" machine_type = var.my_instance_type tags = ["mongodb-cfg"] zone = var.my_zone boot_disk { initialize_params { image = lookup(var.centos_amis, var.region) } } attached_disk { source = google_compute_disk.cfg_disk.name } network_interface { network = google_compute_network.vpc-network.id subnetwork = google_compute_subnetwork.vpc-subnet.id } nested blocks call lookup function
  • 12. Working with Terraform ● terraform init ○ Initialize the working directory ● terraform plan ○ print the action plan ● terraform apply ○ carry out the actions ● terraform destroy ○ remove all managed resources
  • 14. Working with Terraform (3) ● What is a MongoDB server? ○ Instance + Persistent disk (except mongos servers) ○ Firewall rules ○ Init scripts ■ mount the volumes, OS tweaks, etc
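The init-script step listed above can be attached to the instance resource as a startup script. A minimal sketch — the device name, mount point, and tweaks below are assumptions, not taken from the deck:

```hcl
resource "google_compute_instance" "cfg" {
  # ... instance arguments as shown on slide 10 ...

  # Sketch only: /dev/sdb and the mount point are hypothetical,
  # adjust to your attached-disk layout.
  metadata_startup_script = <<-EOT
    # format the data disk on first boot only, then mount it
    blkid /dev/sdb || mkfs.xfs /dev/sdb
    mkdir -p /var/lib/mongo
    mount /dev/sdb /var/lib/mongo
    # common MongoDB OS tweak: disable transparent huge pages
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
  EOT
}
```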
  • 15. Working with Terraform (4) ● Create .tf files for each component ■ mongos router ■ mongod shard ■ Config server ■ anything else? ● Use a separate variables file
  • 16. Provisioning the infrastructure ● Servers ○ cfg-server.tf ○ shard-server.tf ○ mongos-server.tf ○ pmm-server.tf ● variables.tf ● network.tf ● backup.tf
  • 17. Configuring the network ● Define the region ● Configure a VPC ● Define the subnets
  • 18. Configuring the network (2) data "google_compute_zones" "available" { status = "UP" } resource "google_compute_network" "vpc-network" { name = "my-vpc" auto_create_subnetworks = false } resource "google_compute_subnetwork" "vpc-subnet" { name = "mongodb-subnet" ip_cidr_range = "10.1.0.0/16" region = var.region network = google_compute_network.vpc-network.id } query data source
  • 19. Creating the instances resource "google_compute_instance" "server" { count = 6 name = "server-${count.index}" zone = data.google_compute_zones.available.names[count.index % 3] ● Use count.index to distribute the instances across AZs
  • 20. Configuring network access resource "google_compute_firewall" "mongodb-cfgsvr-firewall" { name = "mongodb-cfgsvr-firewall" network = google_compute_network.vpc-network.name direction = "INGRESS" target_tags = ["mongodb-cfg"] allow { protocol = "tcp" ports = ["22", "27019"] } }
  • 21. Preparing the backup infrastructure ● Create a Cloud Storage bucket ● Allow the instances to read/write from it ● Objects lifecycle policy
  • 22. Preparing the backup infrastructure (2) ● Steps are cloud-specific ● For GCP we need: ○ Cloud Storage bucket ○ Service account ○ HMAC key-pair for the service account ○ Grant storage-admin role to the service account
  • 23. Preparing the backup infrastructure (3) resource "google_storage_bucket" "mongo-backups" { name = "mongo-backups" location = var.region force_destroy = true uniform_bucket_level_access = true } resource "google_service_account" "mongo-backup-service-account" { account_id = "mongo-backup-service-account" display_name = "Mongo Backup Service Account" } resource "google_storage_hmac_key" "mongo-backup-service-account" { service_account_email = google_service_account.mongo-backup-service-account.email } resource "google_storage_bucket_iam_binding" "binding" { bucket = google_storage_bucket.mongo-backups.name role = "roles/storage.admin" members = [ "serviceAccount:${google_service_account.mongo-backup-service-account.email}", ] }
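The objects lifecycle policy mentioned in slide 21 is not shown on this slide; a minimal sketch of one could be added to the bucket resource (the 30-day retention value is illustrative, not from the deck):

```hcl
resource "google_storage_bucket" "mongo-backups" {
  # ... arguments as above ...

  # delete backup objects older than 30 days
  lifecycle_rule {
    condition {
      age = 30
    }
    action {
      type = "Delete"
    }
  }
}
```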
  • 24. Monitoring ● PMM client ○ run locally on each server ○ pushes metrics ● PMM server ○ Performance metrics history ○ Query analytics ○ Integrated alerting ○ Integrated backups (WIP) https://pmmdemo.percona.com
  • 25. What’s next? ● We have the servers ● We have the network configured ● We have the backup infrastructure ● We need to deploy the software
  • 26. Ansible 101 ● Automation engine ● SSH-based ● Open source ● Web interface: AWX project
  • 27. Why Ansible? ● Easy to deploy ● No agent required ● No firewall rules required ● YAML syntax ● Secure
  • 28. Installing Ansible ● Control machine ○ Can be your laptop ○ Acts as the Ansible “server” ○ Only needed when running Ansible code ● Managed nodes
  • 29. Inventory ● Inventory options ○ Static ■ ini or YML format ○ Dynamic ■ Scripts available for most cloud providers ■ Write your own plugin ● The default inventory is /etc/ansible/hosts
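For the dynamic option on GCP, an inventory plugin configuration along these lines could be used — a sketch assuming the google.cloud collection is installed; the project id, credentials path, and label key are placeholders:

```yaml
# inventory.gcp.yml
plugin: google.cloud.gcp_compute
projects:
  - my-project                           # hypothetical project id
auth_kind: serviceaccount
service_account_file: /path/to/sa.json   # placeholder path
keyed_groups:
  # build groups such as "shard1" or "cfg" from an instance label
  - key: labels.role
    prefix: ""
    separator: ""
```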
  • 30. Inventory (2) ● Static inventory example [webservers] www.myhost.com www.example.com [databases] db-[a:f].example.com [atlanta] dba.example.com http_port=80 maxRequestsPerChild=808 [atlanta:vars] ntp_server=ntp.atlanta.example.com
  • 31. Modules ● Ansible building blocks ● Should be idempotent Examples: $ ansible example -m ping www.example.com | SUCCESS => { "changed": false, "ping": "pong" } $ ansible example -m service -a "name=httpd state=started"
  • 32. Playbooks ● Orchestrate steps ● Composed of one or more plays ● Each play runs a number of tasks in order on a group of servers ○ e.g. call a module to do something ● YML format
  • 33. Playbooks (2) ● Inventory example: [webservers] web[01:10].example.com [databases] db[01:10].example.com
  • 34. --- - hosts: webservers tasks: - name: ensure apache is at the latest version yum: name: httpd state: latest - name: ensure apache is started service: name: httpd state: started - hosts: databases tasks: - name: ensure postgresql is at the latest version yum: name: postgresql state: latest play 1 play 2 task 1 task 2 ● Playbook example: Playbooks (3)
  • 35. Playbooks (4) Play 1: --- - hosts: webservers tasks: - name: ensure apache is at the latest version yum: name: httpd state: latest ... module host groups as per inventory
  • 36. Playbooks (5) ● Run with the ansible-playbook command $ ansible-playbook my_pb.yml [--limit '*example.com']
  • 37. Playbooks (6) PLAY [all] *************************************************************************** TASK [check if specified os user exists] *************************************************************************** changed: [mysql1] ok: [mysql2] PLAY RECAP *************************************************************************** mysql1 : ok=1 changed=1 unreachable=0 failed=0 mysql2 : ok=1 changed=0 unreachable=0 failed=0
  • 38. Variables ● Simple variables foo: bar ● List datacenter: - us-east - us-west ● Dictionary foo: field1: one field2: two
  • 39. Automating MongoDB deployment 1. Create an Ansible inventory file 2. Edit the variables file 3. Run the ansible-playbook
  • 40. Automating MongoDB deployment 1. Create an Ansible inventory file 2. Edit the variables file 3. Run the ansible-playbook
  • 41. Inventory file for a sharded cluster ● One group per shard (“shardN”) ● A group for the config servers (“cfg”) ● A group for the routers (“mongos”)
  • 42. [shard1] host1.example.com mongodb_primary=True host2.example.com host3.example.com [shard2] host4.example.com mongodb_primary=True host5.example.com host6.example.com [cfg] host7.example.com mongodb_primary=True host8.example.com host9.example.com [mongos] host10.example.com Inventory file for a sharded cluster (2)
  • 43. Generating the inventory file with Terraform ● Use the local_file Terraform resource ● Use templates to dynamically create the groups ● How to generate an Ansible inventory from Terraform
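A minimal sketch of that approach — the resource name, template path, and variable names are hypothetical:

```hcl
# Render an Ansible inventory file from the Terraform-managed instances.
resource "local_file" "ansible_inventory" {
  filename = "inventory.ini"
  content = templatefile("templates/inventory.tpl", {
    cfg_hosts    = google_compute_instance.cfg[*].name
    mongos_hosts = google_compute_instance.mongos[*].name
  })
}
```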
  • 44. Automating MongoDB deployment 1. Create an Ansible inventory file 2. Edit the variables file 3. Run the ansible-playbook
  • 45. Variables ● Copy the MongoDB files / Install from repository ● Ports for mongod, mongos ● Define the paths for data, logs, etc. ● Authentication mechanism ● Encryption ● Backup
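Such a variables file might look like this — all names and values below are illustrative, not taken from the deck:

```yaml
# group_vars/all.yml
install_from_repo: true          # vs. copying MongoDB binaries
mongos_port: 27017
shard_port: 27018
cfgserver_port: 27019
mongodb_datadir: /var/lib/mongo
mongodb_logdir: /var/log/mongo
auth_mechanism: SCRAM-SHA-256
use_tls: false
backup_enabled: true
```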
  • 46. Things we need done ● Install packages ● Create config files ● Start/stop processes ● Initialize replica sets ● Create users ● Configure backup job ● Add hosts to monitoring ● Add the shards to the cluster
  • 47. Installing packages packages: - percona-server-mongodb - percona-backup-mongodb - pmm2-client - name: install rpm from repo package: name: "{{ item }}" state: present with_items: "{{ packages }}" dynamic variable loop
  • 48. Creating configuration files ● Generate files dynamically ● Include/exclude different sections ● Variables are not enough ● Solution: Ansible Templates
  • 49. Ansible Templates ● Built-in module ● Create files with dynamic content ● Jinja2 engine ● Store them in the templates/ subdirectory
  • 50. Creating templated config files mongod.conf.j2 template: ... security: {% if use_tls %} clusterAuthMode: x509 {% else %} keyFile: {{ keyFile_loc }} {% endif %} ... Variables file: use_tls: false keyFile_loc: /var/lib/mongo/rskeyfile
  • 51. Creating templated config files (2) Task: - name: copy mongod.conf become: yes template: src: templates/mongod.conf.j2 dest: /etc/mongod.conf owner: root group: root mode: 0644
  • 52. Starting/stopping processes - name: start mongod on rs member become: yes service: name: mongod state: started
  • 53. [cfg] host1.example.com mongodb_primary=True host2.example.com host3.example.com [shard1] host4.example.com mongodb_primary=True host5.example.com host6.example.com [shard2] host7.example.com mongodb_primary=True host8.example.com host9.example.com [mongos] host10.example.com Initialize replica sets ● rs.initiate() Our inventory file: group_names array for "host1.example.com": group_names = [ cfg ]
  • 54. Initialize replica sets (2) init-rs.js.j2: rs.initiate( { _id: "{{ group_names[0] }}", members: [ {% for h in groups[group_names[0]] %} { _id : {{ loop.index0 }}, host : "{{ h }}:{% if hostvars[inventory_hostname].group_names[0].startswith('shard') %}{{ shard_port }}{% else %}{{ cfgserver_port }}{% endif %}", priority: 1 }{% if not loop.last %},{% endif %} {% endfor %} ] }); the first group a host appears in all hosts part of the first group
  • 55. Initialize replica sets (3) init-rs.js: rs.initiate( { _id: "cfg", members: [ { _id : 0, host : "host1.example.com:27018", priority: 1 }, ... ] });
  • 56. Initialize replica sets (4) - name: render the template for the init command template: src: templates/init-rs.js.j2 dest: /tmp/init-rs.js mode: 0644 when: mongodb_primary is defined and mongodb_primary - name: run the init command for the replica set shell: mongo --host localhost --port {{ mongo_port }} < /tmp/init-rs.js when: mongodb_primary is defined and mongodb_primary runs only once per replica-set
  • 57. Create users createUser.j2: db.getSiblingDB("admin").createUser({ user: "{{ mongodb_pmm_user }}", pwd: "{{ mongodb_pmm_user_pwd }}", roles: [ { role: "explainRole", db: "admin" }, { role: "clusterMonitor", db: "admin" }, { role: "read", db: "local" } ] });
  • 58. Create users (2) - name: prepare the command to create pmm user template: src: templates/createUser.js.j2 dest: /tmp/createUser.js mode: 0644 when: mongodb_primary is defined and mongodb_primary - name: run the command to create the user shell: mongo admin -u {{ root_user }} -p{{ mongo_root_password }} --port {{ mongo_port }} < /tmp/createUser.js when: mongodb_primary is defined and mongodb_primary
  • 59. Configure backup - name: set up backup cron job cron: name: pbm backup minute: 3 hour: 0 user: pbm job: /usr/bin/pbm backup --mongodb-uri "mongodb://{{ pbmuser }}:{{ pbmpwd }}@ {{ ansible_fqdn }}:{{ mongo_port }}" cron_file: pbm_daily_backup
  • 60. Configure monitoring - name: point pmm-client to the PMM server become: true shell: pmm-admin config --server-url=https://{{ pmm_server_user }}: {{ pmm_server_pwd }}@{{ pmm_server }}:443 --server-insecure-tls --force - name: add mongodb metrics exporter become: true shell: pmm-admin add mongodb --username={{ mongodb_pmm_user }} --password={{ mongodb_pmm_user_pwd }} --host={{ ansible_fqdn }} --port={{ cfg_server_port if ('cfg' in group_names) else shard_port }}
  • 61. Add the shards - name: add the shards hosts: shard* tasks: - name: add the shards to the cluster shell: mongo admin -uroot -p{{ mongo_root_password }} --port {{ mongos_port }} --eval "sh.addShard('{{ group_names[0] }}/{{ ansible_fqdn }}:{{ shard_port }}')" delegate_to: "{{ groups.mongos | first }}" when: mongodb_primary is defined and mongodb_primary
  • 62. Automating MongoDB deployment 1. Create an Ansible inventory file 2. Edit the variables file 3. Run the ansible-playbook ansible-playbook main.yml -i inventory.ini --ask-become-pass
  • 63. Putting it all together ● Define the topology ● Create the infrastructure using Terraform ● Generate the inventory file for Ansible ● Install the software with Ansible
  • 64. Putting it all together (2) ● Define the variables ○ variables.tf ○ Ansible vars file ● Run terraform apply ● Run ansible-playbook
  • 65. Benefits ● Define a process ● Save time ● Reuse code ● Streamline deployments ● Ensure resources are monitored (and backed up)
  • 66. Q&A Thank you for attending! https://www.percona.com/blog/author/ivan-groenewold/