Using Salt to test your infrastructure on OpenStack or Docker - Logilab
The presentation is available here: http://slides.logilab.fr/2015/poss2015_salt-docker/#/
Configuring and orchestrating an infrastructure with a centralized configuration management tool such as Salt has many advantages, starting with keeping and versioning the configuration files in a source repository managed by a DVCS (Mercurial or Git).
Salt is generally used to configure and orchestrate a system infrastructure, but its possibilities go much further. Salt makes it possible to evolve an infrastructure by testing it in isolated environments: once the Salt description of the production infrastructure is complete, salt-cloud can automatically reproduce all or part of it in a virtualized environment such as a private OpenStack cloud, where tests can then be run. Going one step further, portions of the infrastructure can be reproduced in lightweight containers (Docker, LXC) directly on a laptop. Salt's orchestration features and its ability to drive Docker bring unprecedented agility to this workflow.
This model is an excellent complement to TDI (Test-Driven Infrastructure): the infrastructure is tested and debugged in a sandbox, then deployed through an automated continuous-integration mechanism. The model can be refined by using branches in the Salt source repository and choosing which branches are applied to production, to pre-production, or to local test environments (Docker, LXC). Review and validation workflows can then be put in place.
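As an illustration of this test workflow, here is a minimal sketch using Salt's Python client API to apply states in dry-run mode against a test environment. The 'test-*' minion target and the 'dev' saltenv are hypothetical; with a gitfs backend, saltenvs typically map to branches of the state repository.

    # Minimal sketch: apply states in dry-run mode against a test
    # environment. Assumes a running salt-master and minions whose ids
    # match 'test-*' (hypothetical naming); the 'dev' saltenv would
    # typically correspond to a branch of the gitfs-backed state repo.
    import salt.client

    local = salt.client.LocalClient()

    # state.apply with test=True reports what would change without
    # touching the minions -- useful for sandbox/CI validation.
    results = local.cmd(
        'test-*',
        'state.apply',
        kwarg={'saltenv': 'dev', 'test': True},
    )

    for minion, states in results.items():
        print(minion, '->', len(states), 'states evaluated')

Running the same call without test=True would actually apply the changes, which is how the validated branch would then be rolled out.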
SaltConf14 - Matthew Williams, Flowroute - Salt Virt for Linux containers an... - SaltStack
This SaltConf14 talk by Matthew Williams of Flowroute shows the power of Salt Virt and Runner for creating and managing VMs and Linux containers. A demonstration of the Salt lxc module shows the simplicity with which containers and VMs can be created and configured.
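To give a rough idea of what the demonstration covers, here is a sketch that creates an LXC container through Salt's lxc execution module from the Python client; the minion id 'host1', the container name and the 'ubuntu' template are placeholders.

    # Sketch: create and bootstrap an LXC container via Salt's lxc
    # execution module. 'host1', 'web-container' and the 'ubuntu'
    # template are placeholder names for this illustration.
    import salt.client

    local = salt.client.LocalClient()

    # lxc.init creates the container, starts it, and with seed=True
    # bootstraps a salt-minion inside it in a single call.
    result = local.cmd(
        'host1',
        'lxc.init',
        ['web-container'],
        kwarg={'template': 'ubuntu', 'seed': True},
    )
    print(result)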
The second and final episode of my SaltStack series for Programmers in Padua.
This one focuses on the configuration management features of SaltStack and ends with a demo of a full solution for installing Sentry. There will be a video recording soon.
Repo for the salt states: https://github.com/unicolet/salt-sentry
Salt Air 19 - Intro to SaltStack RAET (reliable asynchronous event transport) - SaltStack
Thomas Hatch, SaltStack CTO, and Sam Smith, SaltStack director of product development, introduce SaltStack RAET as a new alternative transport medium developed specifically with SaltStack infrastructure automation and configuration management in mind. SaltStack built RAET for customers needing substantial speed and scale to automate management of massive data center infrastructure environments.
SaltStack RAET is primarily an async communication layer over truly async connections, defaulting to UDP. The SaltStack RAET system uses CurveCP encryption by default. SaltStack users can now leverage substantial flexibility via either Salt SSH, ØMQ, or RAET to best address numerous use cases.
A user's perspective on SaltStack and other configuration management tools - SaltStack
Aurelien Geron uses SaltStack to manage a few VMs running Django web apps backed by a sharded MongoDB cluster. He had struggled with another configuration management tool for months, then read about SaltStack and decided to try it out. For Aurelien, SaltStack just works: it is plain and simple, powerful, configurable and ultra-fast. This is his presentation.
Docker lets us deploy our applications in containers. As a result, our infrastructure ends up split across several containers: one for the database, one for the front end, one for the back end, or even one per service in a microservices approach.
But how do these containers communicate with each other? How do you orchestrate a cluster of containers? Kubernetes is an answer to these questions.
Using Mikko Koppanen's PHP ZMQ extension, we will look at how you can easily distribute work to background processes, provide flexible service brokering for your next service-oriented architecture, and manage caches efficiently with just PHP and the ZeroMQ libraries. Whether the problem is asynchronous communication, message distribution, process management or just about anything else, ZeroMQ can help you build an architecture that is more resilient, more scalable and more flexible, without introducing unnecessary overhead or requiring a heavyweight queue-manager node.
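The talk's examples use PHP, but the pattern is language-agnostic; here is the same brokerless work-distribution idea sketched with pyzmq in Python (the port number and message payloads are arbitrary).

    # Sketch of ZeroMQ work distribution: a PUSH socket fans tasks out
    # to connected PULL workers, with no broker node in between. Both
    # ends run in one process here only so the sketch is self-contained.
    import zmq

    ctx = zmq.Context()

    # Producer: bind a PUSH socket and send work items.
    sender = ctx.socket(zmq.PUSH)
    sender.bind('tcp://127.0.0.1:5557')

    # Worker (normally a separate process, possibly one of many):
    # ZeroMQ load-balances messages among all connected PULL sockets.
    receiver = ctx.socket(zmq.PULL)
    receiver.connect('tcp://127.0.0.1:5557')

    for task_id in range(10):
        sender.send_string('task-%d' % task_id)

    for _ in range(10):
        print('worker got:', receiver.recv_string())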
Integration testing for Salt states using AWS EC2 Container Service - SaltStack
A SaltConf16 use case talk by Steven Braverman of Dun & Bradstreet. Testing configuration changes for multiple server roles can be time-consuming when real instances or legacy container systems are used, and applying configuration changes to each role in parallel can be difficult. So what's the best way to test configuration changes efficiently, quickly, and securely prior to applying them? See how an integrated test setup using AWS EC2 Container Service (ECS), an AWS AutoScaling Group, and SaltStack simplifies applying configuration changes and lets you test them in parallel, reducing the time spent testing.
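A minimal sketch of the core idea using boto3; the cluster name 'salt-test', the role list and the per-role task definitions are hypothetical, and the real setup in the talk also involves an AutoScaling Group.

    # Sketch: launch one ECS task per server role so configuration
    # changes are tested in parallel. Assumes AWS credentials are
    # configured; cluster and task-definition names are placeholders.
    import boto3

    ecs = boto3.client('ecs')
    roles = ['web', 'db', 'cache']  # hypothetical server roles

    for role in roles:
        # Each task definition would run a container that applies the
        # Salt states for that role and exits non-zero on failure.
        ecs.run_task(
            cluster='salt-test',
            taskDefinition='salt-states-%s' % role,
            count=1,
        )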
DevoxxFR 2015 Talk http://cfp.devoxx.fr/2015/talk/WXY-1157/Scaling_Docker_with_Kubernetes
Kubernetes is an open source project for managing a cluster of Linux containers as a single system, running Docker containers across multiple Docker hosts and offering co-location of containers, service discovery and replication control. It was started by Google and is now supported by Microsoft, Red Hat, IBM and Docker Inc, amongst others.
Once you are using Docker containers, the next question is how to scale and start containers across multiple Docker hosts, balancing them across the cluster. Kubernetes also adds a higher-level API to define how containers are logically grouped, allowing you to define pools of containers, load balancing and affinity.
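The logical grouping is label-based; here is a sketch using the official Kubernetes Python client that lists the pods belonging to one such pool, assuming a reachable cluster and a hypothetical 'app=frontend' label.

    # Sketch: Kubernetes groups containers (pods) by labels rather than
    # by host. Listing everything labelled app=frontend shows one such
    # logical pool; the label value is a placeholder.
    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config
    v1 = client.CoreV1Api()

    pods = v1.list_namespaced_pod('default', label_selector='app=frontend')
    for pod in pods.items:
        print(pod.metadata.name, pod.status.pod_ip, pod.spec.node_name)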
WSO2Con US 2015 Kubernetes: a platform for automating deployment, scaling, an... - Brian Grant
Kubernetes can run application containers on clusters of physical or virtual machines.
It can also do much more than that.
Kubernetes satisfies a number of common needs of applications running in production, such as co-locating helper processes, mounting storage systems, distributing secrets, application health checking, replicating application instances, horizontal auto-scaling, load balancing, rolling updates, and resource monitoring.
However, even though Kubernetes provides a lot of functionality, there are always new scenarios that would benefit from new features. Ad hoc orchestration that is acceptable initially often requires robust automation at scale. Application-specific workflows can be streamlined to accelerate developer velocity.
This is why Kubernetes was also designed to serve as a platform for building an ecosystem of components and tools to make it easier to deploy, scale, and manage applications. The Kubernetes control plane is built upon the same APIs that are available to developers and users, implementing resilient control loops that continuously drive the current state towards the desired state. This design has enabled Apache Stratos and a number of other Platform as a Service and Continuous Integration and Deployment systems to build atop Kubernetes.
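The control-loop pattern itself is simple enough to sketch in a few lines of Python; this is a generic illustration of driving current state towards desired state, not actual Kubernetes code.

    # Generic reconciliation loop, the pattern behind the Kubernetes
    # control plane: observe current state, diff against desired state,
    # act, repeat. Purely illustrative; replica "actions" are simulated.
    import time

    desired = {'frontend': 3, 'backend': 2}   # desired replica counts
    current = {'frontend': 1}                 # observed state

    def reconcile(desired, current):
        for name, want in desired.items():
            have = current.get(name, 0)
            if have < want:
                current[name] = have + 1      # "start" one replica
            elif have > want:
                current[name] = have - 1      # "stop" one replica

    while current != desired:
        reconcile(desired, current)
        time.sleep(0.1)                       # real loops are event-driven
    print('converged:', current)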
This presentation introduces Kubernetes’s core primitives, shows how some of its better known features are built on them, and introduces some of the new capabilities that are being added.
Traditional virtualization technologies have been used by cloud infrastructure providers for many years to provide isolated environments for hosting applications. These technologies use full-blown operating system images to create virtual machines (VMs); each VM needs its own guest operating system to run application processes. More recently, with the introduction of the Docker project, Linux container (LXC) virtualization became popular and attracted attention. Unlike VMs, containers do not need a dedicated guest operating system to provide OS-level isolation; they provide the same level of isolation on top of a single operating system instance.
An enterprise application may need to run a server cluster to handle high request volumes, and running the entire cluster in Docker containers on a single Docker host introduces a single point of failure. Google started the Kubernetes project to solve this problem: Kubernetes manages Docker containers across a cluster of Docker hosts, providing an API on top of the Docker API with many more features.