My Gluecon presentation about hybrid infrastructure and deploying container orchestration. I talk about why composability matters and how AWS sets the standard.
OpenStack Preso: DevOps on Hybrid Infrastructure - rhirschfeld
Discusses an approach for making hybrid DevOps workable, including the obstacles that must be overcome. Includes a demo of multiple OpenStack clouds and Kubernetes deployments on AWS, Google, and OpenStack.
Hybrid is normal! Be more like AWS! Decompose DevOps tasks!
Short presentation addressing the challenges of building and operating hybrid infrastructure.
Narration! https://www.youtube.com/watch?v=uorHrvMgwc0
Advanced DevOps Governance with Terraform - James Counts
Jim Counts specializes in helping enterprises transition to cloud-native architectures. He focuses on making infrastructure management repeatable, reliable and sustainable through automation with Terraform. Large organizations face challenges of "DevOps project sprawl" as they have many teams with different responsibilities. This can lead to overuse of shared credentials and resources if not properly governed. Jim discusses how to establish "launch pads" and "landing zones" using Terraform to automate the management of environments, projects, credentials and other resources to bring order to this "sprawl" and make governance scalable.
Presentation on how developer roles change when they meet cloud infrastructure, and how a role-driven, template-based VM deployment model supports this separation.
Composable Architectures: The Lego of IT - Alessandro David (Codemotion)
Composable architectures allow for the flexible placement and management of workloads across hybrid cloud environments. This approach involves defining infrastructure through software and APIs rather than physical hardware, allowing resources to be allocated on demand. The key benefits are enabling DevOps practices through containers, treating all infrastructure as code, and providing a unified API and interface to flexibly compose and recompose resources as needed.
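The compose/recompose idea above can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's API: `ResourcePool` and `compose` are invented names, and a failed composition rolls back so no partial stack leaks resources.

```python
# Illustrative sketch of software-defined, composable infrastructure:
# resources are allocated on demand from pools and composed into a stack.

class ResourcePool:
    """Tracks a finite pool of a given resource type (e.g. vCPUs)."""
    def __init__(self, kind, capacity):
        self.kind = kind
        self.capacity = capacity
        self.allocated = 0

    def allocate(self, amount):
        if self.allocated + amount > self.capacity:
            raise RuntimeError(f"{self.kind}: insufficient capacity")
        self.allocated += amount

    def release(self, amount):
        self.allocated = max(0, self.allocated - amount)


def compose(pools, spec):
    """Allocate every resource named in `spec` from the matching pool,
    rolling back on failure so a partial stack is never left behind."""
    granted = []
    try:
        for kind, amount in spec.items():
            pools[kind].allocate(amount)
            granted.append((kind, amount))
    except RuntimeError:
        for kind, amount in granted:
            pools[kind].release(amount)
        raise
    return dict(granted)


pools = {"cpu": ResourcePool("cpu", 16), "ram_gb": ResourcePool("ram_gb", 64)}
stack = compose(pools, {"cpu": 4, "ram_gb": 8})
```

Recomposition here is just releasing a stack's resources and composing a new spec against the same pools, which is the "Lego" property the talk describes.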
Pedal to the metal: Red Hat CloudForms for workload & infrastructure management - Alex Baretto
Enterprise IT professionals have unique cloud resource challenges. To deploy and manage an enterprise application today, you need a solution that ensures compliance with corporate IT governance requirements and has predictable and repeatable performance and costs. Plus, business users want solutions that can be deployed quickly.
In this session, you’ll learn how to overcome these enterprise-class cloud deployment challenges. See how Red Hat CloudForms can automate OpenStack reference architecture design creation, deployment, and management for workloads and infrastructures.
Learn how to visually inventory deployed OpenStack reference architectures and monitor OpenStack usage, including how to budget for platform usage by project, department, or program, and track and allocate costs in a similar way.
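The cost-tracking idea above, sketched as a hypothetical showback calculation: aggregate metered usage records per project and price them with a rate card. CloudForms implements this concept, but the function, record shape, and rates here are invented for illustration.

```python
# Toy showback/chargeback sketch: sum metered usage by project and
# price it with a simple rate card (rates are made-up examples).

RATES = {"vcpu_hours": 0.25, "gb_storage": 0.5}

def allocate_costs(usage_records):
    """usage_records: iterable of (project, metric, quantity) tuples.
    Returns a dict of project -> total cost."""
    costs = {}
    for project, metric, qty in usage_records:
        costs[project] = costs.get(project, 0.0) + qty * RATES[metric]
    return costs

records = [
    ("billing", "vcpu_hours", 100),
    ("billing", "gb_storage", 50),
    ("web", "vcpu_hours", 40),
]
costs = allocate_costs(records)
```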
Getting Started with Infrastructure as Code (IaC) - Noor Basha
Are you looking to automate your infrastructure but not sure where to start? View this presentation on Getting started with Infrastructure as code to learn how to leverage IaC to deploy and manage resources on Azure. You will learn:
• Introduction to IaC
• Develop a simple IaC using Terraform
• Manage the deployed infrastructure using Terraform
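The core mechanic behind the Terraform steps above is declarative reconciliation: diff the desired state against the actual state and derive a plan of actions. A minimal Python sketch of that idea follows; the resource names are invented, and real Terraform does far more (state storage, providers, dependency graphs).

```python
# Sketch of the IaC plan step: compare declared desired state against
# observed actual state and emit create/update/delete actions.

def plan(desired, actual):
    """Both arguments are dicts of resource name -> configuration.
    Returns the list of actions needed to make actual match desired."""
    actions = []
    for name, config in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != config:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

desired = {"vm1": {"size": "B2s"}, "vnet": {"cidr": "10.0.0.0/16"}}
actual = {"vm1": {"size": "B1s"}, "old_disk": {"gb": 128}}
actions = plan(desired, actual)
```

Applying the plan and re-running it should then produce an empty action list, which is the idempotence property that makes IaC safe to run repeatedly.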
Making Sense of Containers in the Microsoft Cloud - Kangaroot
Everyone is talking about containers, but what are they really about, and what are the benefits of containers for your customers? You probably think you know, but there is more. And did you know you can run and manage containers in the Microsoft Cloud? This session goes into the benefits of containers for your customers and what Microsoft offers to meet all your needs. We will touch on technologies like Kubernetes and Docker, and elaborate on the strong partnerships Microsoft has built with true open source companies like Red Hat.
Building A Diverse Geo-Architecture For Cloud Native Applications In One Day - VMware Tanzu
Presenter: Ben Laplanche, Product Manager, Pivotal Cloud Foundry
Companies turn to PaaS and Cloud Native Applications to gain agility and speed. To provide customer value, a fault tolerant infrastructure is essential. But what happens if an entire data center, region, or even country should go offline? Cassandra holds the key to keeping application state in sync through replication, whilst Pivotal Cloud Foundry provides easy deployment to multiple IaaS providers. It also comes complete with a managed service offering for DataStax Enterprise. This talk will discuss how this setup can be deployed in one day, including demonstrations and a walkthrough of the key concepts, approaches, and considerations.
I Segreti per Modernizzare con Successo le Applicazioni (Pivotal Cloud-Native...) - VMware Tanzu
This document discusses strategies for modernizing applications to run successfully on cloud platforms like Pivotal Cloud Foundry. It outlines key principles like the Twelve Factor App methodology and establishing clear objectives and metrics. The document also presents a maturity model for applications and an incremental approach to migrating and optimizing existing applications over time. It analyzes which aspects of the Twelve Factors usually require more or less effort during modernization. Finally, it proposes starting the journey by identifying suitable applications and pushing some all the way to production to establish best practices.
Webinar: Cutting Time, Complexity and Cost from Data Science to Production - Iguazio
Imagine a system where one collects real-time data, develops a machine learning model… Runs analysis and training on powerful GPUs… Clicks on a magic button and then deploys code and ML models to production… All without any heavy lifting from data and DevOps engineers. Today, data scientists work on laptops with just a subset of data and time is wasted while waiting for data and compute.
It’s about efficient use of time! Join Iguazio and NVIDIA so that you can get home early today! Learn how to speed up data science from development to production:
- Access to large scale, real-time and operational data without waiting for ETL
- Run high performance analytics and ML on NVIDIA GPUs (Rapids)
- Work on a shared, pre-integrated Kubernetes cluster with Jupyter notebooks and leading data science tools
- One-click (really!) deployment to production
Speakers: Yaron Haviv, CTO at Iguazio, Or Zilberman, Data Scientist at Iguazio and Jacci Cenci, Sr. Technical Marketing Engineer at NVIDIA
Better Software is Better than Worse Software - Michael Coté (Cape Town 2019) - VMware Tanzu
This document discusses the benefits of a consistent platform and product process for building cloud native applications. It provides examples from various companies that illustrate how adopting these practices can increase developer productivity, reduce costs, speed up release cycles, and improve software quality. Maintaining a consistent platform with tools like Pivotal Application Service, Pivotal Container Service, and services from the Pivotal marketplace allows companies to focus on building applications rather than infrastructure.
Cloud-based Linked Data Management for Self-service Application Development - Peter Haase
Peter Haase and Michael Schmidt of fluid Operations AG presented on developing applications using linked open data. They discussed the increasing amount of linked open data available and challenges in building applications that integrate data from different sources and domains. Their Information Workbench platform aims to address these challenges by allowing users to discover, integrate, and customize applications using linked data in a no-code environment. Key components of the platform include virtualized integration of data sources and the vision of accessing linked data as a cloud-based data service.
Continium | DevOps Management for IT Executives - Berk Dülger
The document is an agenda for a presentation on DevOps management for IT executives. It includes sections on Waterfall and Agile development, DevOps terminology, engineering practices, culture transformation, return on investment, maturity levels, case studies, and popular DevOps tools like Git, Jenkins, Docker, Kubernetes, Puppet, Terraform, and Cypress. It also lists contact information for the company's London and Istanbul offices.
The document summarizes DockerCon 2017, including keynotes on secure orchestration, LinuxKit, Moby Project, improving the developer experience, and integrating Docker with desktops and clouds. It lists top sessions on topics like bare metal cloud services and provides links to view sessions on YouTube.
Azure Cosmos DB Kafka Connectors | Abinav Rameesh, Microsoft - HostedbyConfluent
The document discusses Kafka connectors for Cosmos DB that allow for seamless integration between the two services without requiring complex application code. It provides an overview of Kafka Connect and connectors, use cases for integrating Cosmos DB and Kafka, and the architecture of source and sink connectors that can read from and write to Cosmos DB and Kafka. It also previews a demo of the connectors and suggests ways to take integration further.
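The sink side of the pattern described above can be sketched in plain Python. This is not the real Cosmos DB connector, just the essence of what a Kafka Connect sink does: take a polled batch of records and upsert them into the target store keyed by id. The record shapes and the in-memory "container" are invented for illustration.

```python
# Toy sketch of Kafka Connect sink semantics: upsert a batch of
# records into a key-value store standing in for a Cosmos DB container.

def sink_flush(batch, container):
    """Upsert each record by its 'id'; later records for the same id
    win, matching upsert behavior. Returns the number of records seen."""
    for record in batch:
        container[record["id"]] = record
    return len(batch)

container = {}
batch = [
    {"id": "a", "qty": 1},
    {"id": "b", "qty": 2},
    {"id": "a", "qty": 3},  # same key: overwrites the first write
]
written = sink_flush(batch, container)
```

A real connector adds offset tracking, retries, and schema conversion on top of this loop, which is why the talk stresses that no complex application code is needed.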
This document discusses microservices architecture using Spring Cloud and related technologies. It provides an overview of microservices and cloud native applications. It then covers Spring Boot, Spring Cloud, and Netflix OSS projects that can be used to build microservices. Specific Spring Cloud features like service registration, circuit breakers, and API gateways are demonstrated. The role of Pivotal in contributing to open source projects and providing Spring Cloud services is also mentioned.
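Of the Spring Cloud features mentioned, the circuit breaker is the easiest to sketch. The toy class below illustrates the pattern (fail fast after repeated downstream failures, serve a fallback); it is not Hystrix or Resilience4j code, and the threshold logic is deliberately minimal.

```python
# Minimal circuit-breaker sketch: after `threshold` consecutive
# failures the breaker opens and calls go straight to the fallback.

class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def call(self, fn, fallback):
        if self.open:
            return fallback()          # fail fast while open
        try:
            result = fn()
            self.failures = 0          # success resets the counter
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True       # trip after repeated failures
            return fallback()

def flaky():
    raise ConnectionError("downstream unavailable")

cb = CircuitBreaker(threshold=2)
results = [cb.call(flaky, lambda: "cached") for _ in range(3)]
```

Production breakers also half-open after a timeout to probe recovery; that step is omitted here for brevity.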
Pivotal Container Service: il modo più semplice per gestire Kubernetes in azienda - VMware Tanzu
Pivotal Container Service: the easiest way to manage Kubernetes in the enterprise (Pivotal Cloud-Native Workshop: Milan)
Fabio Marinelli & Mattia Gandolfi
7 February 2018
Bringing DevOps to Routing with evolved XR: an overview - Cisco DevNet
A session in the DevNet Zone at Cisco Live, Berlin. This session is a fresh perspective on the routing world, focused on the growing influence of DevOps-style workflows in routing deployments across web-scale service providers. With the adoption of a 64-bit Linux OS, support for Linux containers (LXC/Docker), and an open architecture that enables automated configuration management out of the box, the evolution of IOS-XR has placed it right in the midst of DevOps and SDN. In this session we dive deep into the application-hosting infrastructure, modular software delivery techniques, and support for zero-touch provisioning and configuration-management tools that integrate seamlessly with the M2M interfaces exposed by IOS XR. We look at deployment techniques of web-scale service providers that are gradually influencing the rest of the market, and introduce a variety of use cases around automated NetOps, traffic engineering, telemetry, and data-center cluster schedulers that showcase the power of an open, automatable network operating system.
This document discusses treating infrastructure as code by defining all components needed to run services (e.g. servers, software, configurations, etc.) in text files that can be version controlled, shared, and programmatically deployed to ensure all infrastructure is consistently configured and deployed. Doing so allows for automation, removes inconsistencies between environments, and prevents human errors that can cause outages. Examples are given of firms that experienced major outages due to manual processes and inconsistencies between environments.
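The outages described above stem from environments drifting apart. A sketch of the consistency check that version-controlled infrastructure definitions enable: once environments are text, drift between them is a mechanical diff. The package names and versions below are made up for illustration.

```python
# Sketch of a drift check between two environment definitions kept as
# version-controlled data: report every key whose values disagree.

def drift(env_a, env_b):
    """Return {key: (value_in_a, value_in_b)} for every mismatch,
    including keys present in only one environment."""
    keys = set(env_a) | set(env_b)
    return {k: (env_a.get(k), env_b.get(k))
            for k in keys if env_a.get(k) != env_b.get(k)}

staging = {"nginx": "1.24", "openssl": "3.0", "workers": 4}
prod    = {"nginx": "1.22", "openssl": "3.0", "workers": 8}
report = drift(staging, prod)
```

Running a check like this in CI is one concrete way the "consistently configured" claim becomes enforceable rather than aspirational.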
Downtime is not an option - day 2 operations - Jörg Schad (Codemotion)
The document discusses container orchestration and microservices on Apache Mesos and DC/OS. It introduces concepts like microservices, containers, container orchestration and scheduling. It then summarizes key Mesos and DC/OS capabilities like running various workloads, multiplexing resources for higher utilization and running distributed applications. It also touches on day 2 operations for containerized workloads like monitoring, maintenance and troubleshooting.
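The "multiplexing resources for higher utilization" point can be illustrated with a first-fit placement sketch. This is a toy, not Mesos or DC/OS code: a scheduler packs mixed workloads onto shared nodes so capacity is not stranded per workload type.

```python
# Toy first-fit scheduler: place tasks onto the first node with enough
# free CPU, so different workloads share the same machines.

def first_fit(tasks, nodes):
    """tasks: list of (name, cpu); nodes: dict of node -> free cpu.
    Returns a task -> node placement, decrementing free capacity."""
    placement = {}
    for name, cpu in tasks:
        for node, free in nodes.items():
            if free >= cpu:
                nodes[node] = free - cpu
                placement[name] = node
                break
    return placement

nodes = {"n1": 4, "n2": 4}
tasks = [("web", 2), ("batch", 3), ("cron", 2)]
placement = first_fit(tasks, nodes)
```

Real schedulers consider memory, constraints, and fairness as well, but the utilization argument is already visible here: three heterogeneous tasks fit on two nodes.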
MongoDB-as-a-Service on Pivotal Cloud Foundry - VMware Tanzu
SpringOne Platform 2016
Speakers: Mallika Iyer; Principal Software Engineer, Pivotal & Sam Weaver; Product Manager, MongoDB
The ability to provide your organization with multiple data services on a platform like Pivotal Cloud Foundry is very powerful. It increases the agility of the organization as a whole when developers can provision data services on demand, completely transparently to the system operators. This session will give a very brief overview of Pivotal Cloud Foundry, then deep dive into running MongoDB as a managed service on this platform. The MongoDB service for Pivotal Cloud Foundry leverages the capabilities of Bosh 2.0 for on-demand, dynamic provisioning of services while maintaining an integration with MongoDB's Cloud Ops Manager, providing the best of both Pivotal Cloud Foundry and MongoDB.
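The on-demand flow in this abstract loosely follows the shape of the Open Service Broker API (provision an instance, then bind to get credentials). The sketch below is hypothetical: the class, the fake URI, and the in-memory instance store stand in for the real Bosh-backed broker.

```python
# Toy stand-in for a service broker: provision creates an instance,
# bind hands the application its credentials. Names are invented.
import uuid

class Broker:
    def __init__(self):
        self.instances = {}

    def provision(self, plan):
        instance_id = str(uuid.uuid4())
        # A real broker would kick off a Bosh deployment here.
        self.instances[instance_id] = {"plan": plan}
        return instance_id

    def bind(self, instance_id):
        # Return credentials for the app; the URI is a fake placeholder.
        plan = self.instances[instance_id]["plan"]
        return {"uri": f"mongodb://{instance_id}.local/db", "plan": plan}

broker = Broker()
iid = broker.provision("replica-set-small")
creds = broker.bind(iid)
```

The operator transparency the abstract emphasizes comes from this split: developers call provision/bind through the platform, while the broker owns the deployment machinery.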
Large Scale Cloud Infrastructure Using Shared Components - Eficode
This document discusses Unity's approach to building a large scale Kubernetes infrastructure by distributing ownership of shared components across development teams. Key aspects include:
1. Dividing over 200 microservices across 20 development teams, with each team owning deployment of their services through shared build pipelines, Terraform modules, and Helm charts.
2. Using shared infrastructure components like cloud resources, networking, monitoring tools, and databases that are developed and maintained through an "internal open source model" with clear ownership.
3. Standardizing on tools like Terraform, Kubernetes, Helm, Jenkins, and GitLab CI to ensure consistent environments and enable independent deployments by each team.
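The shared-components model above can be sketched as a template each team fills in rather than a pipeline each team writes from scratch. The template fields below are invented for illustration; the point is the mechanism, baseline plus selective overrides, which is how shared modules keep 20 teams consistent without blocking them.

```python
# Sketch of the "internal open source" baseline: teams inherit a shared
# deployment template and override only what they must.

SHARED_TEMPLATE = {
    "ci": "jenkins",
    "deploy_tool": "helm",
    "monitoring": "enabled",
}

def team_deployment(team, service, overrides=None):
    config = dict(SHARED_TEMPLATE)                  # shared baseline
    config.update({"team": team, "service": service})
    config.update(overrides or {})                  # selective overrides
    return config

cfg = team_deployment("ads", "bidder", {"deploy_tool": "terraform"})
```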
In this very concise talk, I touched on the basics of nanotechnology in a 20-minute session during #MakerFaireCairo. Of course nanotechnology needs a deeper explanation, but this presentation was a very basic introduction.
#WeAreAllMakers #Nanotechnology #ZewailCity
Heat and the City - David Hawkey, University of Edinburgh (http://www.heatandthecity.org.uk/) - JISC GECO
Presentation on the Heat and the City project given by David Hawkey, University of Edinburgh (http://www.heatandthecity.org.uk/) at the JISC GECO/STEEV Green Energy Tech Event (#e3vis) on Thursday 13th October 2011.
The document outlines the process and requirements for an international business plan project. Students will work in teams to choose a product/service and country, research the market opportunity, and develop a business plan. The plan will include an executive summary, introduction, competitor analysis, company structure selection, risk management, pricing strategy, financial projections, measures of success, and conclusion. Students must present their plan to the client and submit a written report following APA format. The project aims to develop research, planning, writing, and presentation skills for international business.
Containers, orchestration and security, oh my! - rhirschfeld
This document provides an overview of containers, orchestration, and security as it relates to deploying container applications in production using Kubernetes. It discusses what Kubernetes is and its key design elements. It then outlines the reference layers needed for Kubernetes cluster operations including prerequisites, control services, worker nodes, cluster add-ons, and user applications. Finally, it discusses some of the challenges of operating Kubernetes in production including networking complexity, ensuring high availability, and integrating security.
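The reference layers listed in the summary imply a bring-up order, from prerequisites through user applications. A toy check of that ordering follows; the layer names track the summary, while the checker itself is illustrative, not part of Kubernetes.

```python
# Sketch: validate that a cluster bring-up plan respects the layer
# dependency order described in the talk's reference architecture.

LAYERS = ["prerequisites", "control services", "worker nodes",
          "cluster add-ons", "user applications"]

def valid_bringup(plan):
    """True if the steps in `plan` appear in dependency order."""
    indices = [LAYERS.index(step) for step in plan]
    return indices == sorted(indices)

ok = valid_bringup(["prerequisites", "control services", "worker nodes"])
bad = valid_bringup(["user applications", "prerequisites"])
```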
Santo Tomás de Aquino is a school with classrooms, a playground, and other facilities to support students. It has spaces for learning as well as outdoor recreation areas and additional resources to aid the students' education. The document provides a brief overview of some of the key areas and amenities available at the Santo Tomás de Aquino school.
1) Change management provides a structured process and tools to lead people through change to achieve desired outcomes. It allows organizations to quickly implement changes to meet market needs.
2) There are various types of resistance to change, from those who are openly resistant to undercover resisters. Principles of change management include sponsorship, planning, measurement, and engagement.
3) Implementing change involves defining the process, receiving requests, planning changes, implementing and monitoring, evaluating results, and modifying the plan if needed. Responsibility for change falls on all levels from envisioning it to celebrating results.
Social Media & Its Effect On Advertising Agencies - Jason Inasi
Social Media and its effect on advertising agencies: The Factory Interactive's presentation for the 4AAF Space Coast. This presentation covers social media definitions, integrating social media into your campaigns, and five examples of successful social media campaigns.
Tips & Tools you can start using today.
The container revolution, and what it means to operators (Bay LISA - July 2016) - Robert Starmer
With containers becoming a key technology for developers and DevOps practitioners, it's important for operators to understand the basics of the technology and how it relates to datacenter operations.
Presentation given at the seminar "Biological nutrient removal, operation management, and troubleshooting at wastewater treatment plant" in Pietari, 13 December 2012.
Sterilization eliminates all forms of microscopic life through heat, radiation, or gases. It is achieved with dry or moist heat using ovens, autoclaves, or tyndallization, or with radiation such as ultraviolet or gamma rays. It can also be achieved with gases such as ethylene oxide, formaldehyde, or glutaraldehyde, the last being the only liquid used for sterilization.
This document discusses different types of scales of measurement used in environmental data analysis. It describes nominal, ordinal, interval, and ratio scales. Nominal scales involve non-numeric labels to identify attributes. Ordinal scales allow ranking in addition to nominal properties. Interval scales express intervals between observations quantitatively. Ratio scales have all interval properties and allow meaningful ratios between values. Examples for each scale involving water quality parameters like BOD are provided.
Future Sat Africa - Gondwana - Affordable MobilityMyles Freedman
This document discusses the affordability and viability of satellite connectivity in Africa. It summarizes that the company operates regional ISPs under two brands, has a support team in Nairobi, and acquired regional ISP assets in 2013. It states that satellite provides alternatives for mobile backhaul and last mile connectivity. The customer wants availability, accessibility, and affordability of services. Satellite connectivity costs have reduced over time and are now viable outside of cities. New high throughput satellites are expected to further lower costs and increase uptake of satellite services.
Day 2 C2C - Infrastructure sharing Mott MacdonaldMyles Freedman
This document discusses infrastructure sharing, which refers to the joint use or development of telecommunications infrastructure between operators to efficiently deliver services. It defines infrastructure sharing, outlines the strategic drivers like cost reduction, discusses benefits such as optimized resource use and reduced costs, and provides examples of successful infrastructure sharing initiatives. The document also covers considerations for infrastructure sharing and provides recommendations, emphasizing the need to carefully study technical, geographical and commercial fit based on a long term vision and partnerships built on trust.
Nanotechnology can be used to improve drug delivery through targeted and untargeted methods. Targeted drug delivery seeks to concentrate medication in tissues of interest while reducing it in other tissues to improve efficacy and reduce side effects. This can be done through passive or active targeting. Common drug delivery vehicles in nanotechnology include liposomes, dendrimers, and polymeric micelles. For example, liposomes are spherical vesicles made of lipids that can encapsulate aqueous core medications and reduce toxicity like Doxil, a liposomal formulation of doxorubicin used to treat cancer. Overall, nanotechnology provides efficient drug delivery by creating small, biodegradable drug carriers like lipids and polymers.
2016 - Open Mic - IGNITE - Open Infrastructure = ANY Infrastructuredevopsdaysaustin
The document discusses the need for hybrid infrastructure and hybrid DevOps to manage different cloud platforms and physical infrastructure in a consistent way. It notes that while no single API or platform can meet all needs, AWS dominance means its operational patterns have become the benchmark. The key is developing composable infrastructure modules that can be orchestrated together to provide portability across environments using a common operational process.
[Capitole du Libre] #serverless - mettez-le en oeuvre dans votre entreprise...Ludovic Piot
Tout comme le Cloud IaaS avant lui, le serverless promet de faciliter le succès de vos projets en accélérant le Time to Market et en fluidifiant les relations entre Devs et Ops.
Mais sa mise en œuvre au sein d’une entreprise reste complexe et coûteuse.
Après 2 ans à mettre en place des plateformes managées de ce type, nous partagons nos expériences de ce qu’il faut faire pour mettre en œuvre du serverless en entreprise, en évitant les douleurs et en limitant les contraintes au maximum.
Tout d’abord l’architecture technique, avec 2 implémentations très différentes : Kubernetes et Helm d’un côté, Clever Cloud on-premise de l’autre.
Ensuite, la mise en place et l’utilisation d’OpenFaaS. Comment tester et versionner du Function as a Service. Mais aussi les problématiques de blue/green deployment, de rolling update, d’A/B testing. Comment diagnostiquer rapidement les dépendances et les communications entre services.
Enfin, en abordant les sujets chers à la production : * vulnerability management et patch management, * hétérogénéïté du parc, * monitoring et alerting, * gestion des stacks obsolètes, etc.
SkyBase - a Devops Platform for Hybrid CloudVlad Kuusk
Skybase system is a DevOps platform designed to be used for deployment and maintenance of Services inside all locations of an organization including Dev, QA, Prod and different clouds and geographic regions and data centers.
Kubo (Cloud Foundry Container Platform): Your Gateway Drug to Cloud-nativecornelia davis
The document discusses how Kubo can be used as a gateway to running cloud-native workloads. It outlines different types of workloads like code developed internally which may change frequently or code from third parties. For internally developed code, Kubo allows maintaining existing processes while deploying container images instead of infrastructure. For external code and data-centric workloads, Kubo provides benefits like health management, multi-cloud support, and operating system/Kubernetes upgrades without affecting applications. The document calls developers to run workloads on Cloud Foundry Container Runtime and share experiences.
Kubo (Cloud Foundry Container Platform): Your Gateway Drug to Cloud-nativeVMware Tanzu
You’re at the Cloud Foundry Summit, which means you are by definition a cloud-native enthusiast. There’s no question that building apps in this architectural style will produce resilient, scalable software in an agile manner, and allow you to operate it far more efficiently than you’ve been able to in the past. But you’ve also got a whole lot of software in your company’s portfolio that isn’t there yet. Do you have to resign yourself to the pains of managing those applications the old way until you can finally refactor them to be cloud-native? Kubo to the rescue.
You can run legacy applications on Kubo without significant refactoring – pure and simple. As an added bonus, it allows you to satisfy the CIO mandate of running containers (check). But it’s far more than that – running those workloads on Kubo offers advantages over running them on traditional virtualized infrastructure. This session covers those advantages –resource consolidation, health management, multi-cloud and more. It will also present the abstractions in Kubernetes, things like pods and stateful sets, that support running legacy workloads in the cloud environments that are far more distributed and changing than they have been in the past. It’s a first step to cloud-native.
This presentation explains what serverless is all about, explaining the context from Devs & Ops points of view, and presenting the various ways to achieve serverless (Functions a as Service, BaaS....). It also presents the various competitors on the market and demo one of them, openfaas. Finally, it enlarges the pictures, positionning serverless, combined with Edge computing & IoT, as a valuable triptic cloud vendors are leveraging on top of, to create end-to-end offers.
Red Hat OpenShift Container Platform offers enterprises a fully supported enterprise-grade Kubernetes platform that provides capabilities beyond just Kubernetes. It includes developer tools, CI/CD pipelines, service meshes, and more. OpenShift can be deployed on-premises, on any public cloud, or in a managed service offering. It provides portability, security, automation, and a full-stack developer experience. Compared to building out Kubernetes capabilities individually, OpenShift reduces costs and complexity while accelerating application development.
How to build "AutoScale and AutoHeal" systems using DevOps practices by using modern technologies.
A complete build pipeline and the process of architecting a nearly unbreakable system were part of the presentation.
These slides were presented at 2018 DevOps conference in Singapore. http://claridenglobal.com/conference/devops-sg-2018/
This document summarizes a presentation about MuleSoft operational capabilities and deployment options. It includes:
1) An overview of MuleSoft and its history as an integration platform, including its acquisition by Salesforce.
2) Details on MuleSoft's operational capabilities when deployed on CloudHub, including auto-scaling, intelligent healing, and zero-downtime updates.
3) Five use cases that demonstrate different deployment architectures using MuleSoft, including CloudHub, hybrid implementations with on-premise and cloud components, and customer-hosted options.
The document discusses moving workloads to Kubernetes and the different levels of abstraction in platforms. It notes that moving to Kubernetes requires addressing toil problems rather than just migrating applications. It also discusses how platforms abstract away complexity and how the level of abstraction impacts developer efficiency vs operator flexibility. Pivotal provides solutions like Cloud Foundry and Kubernetes to address these issues at different levels.
The DevOps paradigm - the evolution of IT professionals and opensource toolkitMarco Ferrigno
This document discusses the DevOps paradigm and tools. It begins by defining DevOps as focusing on communication and cooperation between development and operations teams. It then discusses concepts like continuous integration, delivery and deployment. It provides examples of tools used in DevOps like Docker, Kubernetes, Ansible, and monitoring tools. It discusses how infrastructure has evolved to be defined through code. Finally, it discusses challenges of security in DevOps and how DevOps works aligns with open source principles like meritocracy, metrics, and continuous improvement.
This document summarizes the DevOps paradigm and tools. It discusses how DevOps aims to improve communication and cooperation between development and operations teams through practices like continuous integration, delivery, and deployment. It then provides an overview of common DevOps tools for containers, cluster management, automation, CI/CD, monitoring, and infrastructure as code. Specific tools mentioned include Docker, Kubernetes, Ansible, Jenkins, and AWS CloudFormation. The document argues that adopting open source principles and emphasizing leadership, culture change, and talent growth are important for successful DevOps implementation.
The Cloud Deployment Toolkit (CDTK) project is a proposed open source project under the Eclipse Technology Project.
This proposal is in the Project Proposal Phase (as defined in the Eclipse Development Process) and is written to declare its intent and scope.
We solicit additional participation and input from the Eclipse community. Please send all feedback to the CDTK forum.
APIdays Paris 2018 - Will Serverless kill Containers and Operations? Stéphane...apidays
Will Serverless kill Containers and Operations?
Stéphane Woillez, Technical Sales Lead South EMEA, Docker
Apply to be a speaker here - https://apidays.typeform.com/to/J1snsg
ServerLess technology analysis, state of the technology as of December 2018, what needs to be done to build a complete, operational serverless platform for production
What is Digital Rebar Provision (and how RackN extends)?rhirschfeld
Walks through how Digital Rebar Provision rethinks bare metal automation beyond simple O/S install into an integrated workflow system for building data center underlay.
INCLUDES VIDEO OF PRESO
Short presentation about how RackN is creating bare metal data center automation for enterprise and edge infrastructure at the most basic level.
Includes a video of Rob giving the presentation
This document discusses zero-configuration provisioning of Kubernetes clusters on unmanaged infrastructure. It describes using immutable bootstrapping to provision operating systems and install Docker and Kubernetes (using Kubeadm) across nodes without requiring centralized orchestration or SSH access. The document also discusses potential future directions for the Kubernetes community regarding node admission controls and dynamic Kubelet configuration to further reduce external configuration requirements during cluster provisioning.
Preview of my Immutable Infrastructure presentation. Talks about what it is and why immutable is important. Also covers options on creating immutable deployments.
Open Patterns for Day 2 Ops [Gluecon 2017]rhirschfeld
Short presentation talking about how to create shared open best practices for upgrades and ongoing operations. Includes a demo of four upgrade patterns.
This document provides a diagram of the Kubernetes architecture. It shows that Kubernetes is made up of a control plane consisting of components like etcd, API server, and scheduler that run on master nodes. It also shows that Kubernetes manages pods running on worker nodes through kubelets and container managers. The diagram further illustrates add-ons like DNS, monitoring, and networking that are commonly used with Kubernetes.
OpenStack on Kubernetes (BOS Summit / May 2017 update)rhirschfeld
This document discusses using Kubernetes as an underlay platform for OpenStack. Some key points:
1. Kubernetes is becoming more widely used and understood by operators compared to OpenStack. Using Kubernetes as an underlay could improve simplicity, stability, and upgrade processes for OpenStack.
2. There are still many technical challenges to address, such as networking, storage, tooling to manage OpenStack on Kubernetes, and ensuring containers meet Kubernetes' immutable infrastructure requirements.
3. Using Kubernetes as an underlay risks further confusing the messaging around OpenStack by implying Kubernetes is more stable or a replacement target. Clear communication will be important to avoid undermining OpenStack.
The developer rebellion against infrastructurerhirschfeld
My DevOpsDays 2017 Lightening Talk covering why developers don't want to do operations work and are making platforms so they don't have to do it anymore
IBM Interconnect: Think you can Out Innovate Open Sourcerhirschfeld
Joint Presentation with Rob Hirschfeld and Chris Ferris at IBM Interconnect. We cover what makes open source projects succeed and struggle based on our experience with numerous projects. [video pending]
CHECK OUT THE MARCH 17 UPDATE > https://www.slideshare.net/rhirschfeld/joint-openstack-kubernetes-environment-march-17-update/
Presented at the OpenStack summit, this presentation discusses the practical reality & timing of using Kubernetes as an underlay for OpenStack.
This document lists DefCore Co-Chairs Rob Hirschfeld and Egle Sigler. It then lists several OpenStack APIs under categories like Required, Advisory, Deprecated, Removed, and their status. Key APIs listed include Compute, Object, Auth-token, Compute-servers-metadata from projects like nova and keystone.
The document discusses how a "layer cake" approach to operations and architecture is flawed and advocates for a more functional programming approach. Some key points:
- A layer cake assumes clean boundaries and dependencies between services that don't reflect reality, where dependencies exist between layers and connectivity/proximity matter.
- Taking a functional programming approach treats operations as mathematical functions with defined inputs/outputs and no side effects, allowing operations to be decomposed, automated generically, and scaled dynamically.
- Examples given include defining database configuration and node setup as independent functions that can work for different database/vendor types rather than being tied to specific layers/abstractions.
- Embracing functional and orchestration approaches helps make
The document discusses the DefCore process, which defines the minimum standards required for products to be labeled "OpenStack". It aims to drive interoperability through common validation testing and implementation of core code sections. The process balances community and vendor input, with the community writing tests and vendors performing self-testing. It will approve guidelines every few months delineating required capabilities validated by tests as well as designated core code sections. It seeks community review of the 2015A guidelines before they are approved by the OpenStack Board to govern what functionality and code is required for OpenStack products and platforms.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
2. Open Ops: User Choice & App Portability
Goal: run a reference workload (Kubernetes) on any infrastructure using the same operational process.
Execution: a single command to run Kubernetes on OpenStack, Amazon, Google and Metal (via Packet.net) with SDN & O/S choices.
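The "single command, many targets" idea can be sketched as a thin dispatcher over per-provider drivers that all run the same operational process. The provider names, step names, and `deploy_kubernetes` function below are hypothetical illustrations, not the actual tooling used in the demo.

```python
# Sketch: one operational entry point applied to many infrastructures.
# Providers and steps here are illustrative placeholders, not real tooling.

PROVIDERS = {"openstack", "aws", "google", "metal"}

# The same ordered operational process runs regardless of the target.
COMMON_PROCESS = ["provision_nodes", "install_runtime",
                  "bootstrap_kubernetes", "join_workers"]

def deploy_kubernetes(provider: str) -> list:
    """Run the shared process against one infrastructure target."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    # A real system would invoke provider-specific drivers per step;
    # here we record the executed steps to show the process is identical.
    return [f"{provider}:{step}" for step in COMMON_PROCESS]

if __name__ == "__main__":
    for p in sorted(PROVIDERS):
        print(deploy_kubernetes(p))
```

The point of the sketch is that portability comes from the shared process list, not from any one provider's API.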
3. Demo: Multi-Kubernetes
To make things portable, we need a repeatable experience between multiple clouds.
We want to be able to run the same workload on multiple clouds from different vendors.
4. Bonus Demo: Docker Swarm
Because… we need alternatives to Docker Machine that actually build clusters in an open, composable way.
Setup speed should not be our primary measure of production readiness!
6. Hybrid is an overloaded term!
Multiple hybrid dimensions:
● Different Vendors
● Different Platforms
● Different APIs
● Different DevOps Tools
● Different Operating Systems
We’re talking about using infrastructure in a change-tolerant way. The only predictable thing about infrastructure is that it will change. Hybrid acknowledges that you will be using old, new, and new-new together.
7. And infrastructure choice is increasing
[Diagram: public clouds (AWS, GCE, Azure, RackSpace and many others), public and private platforms (VMware, OpenStack) and “Bare Metal”]
On average, large enterprises are using about two dozen cloud services from nine providers (Gartner).
8. Look, Ma! I Can Haz Hybrid!
Many silos ≠ hybrid. IT cannot afford infrastructure silos! We need to be able to mix on-premises AND cloud.
[Diagram: separate IT silos, each running its own cloud or physical platform (Cloud Foundry, Mesos, Kubernetes, OpenStack)]
9. Tools do not manage Hybrid IT - not just cloud, but ALL Infrastructure
Cross-Platform Orchestration (aka Hybrid DevOps) fills gaps left by current ops tools.
[Diagram: public clouds (AWS, GCE, Azure, RackSpace and many others), private clouds (VMware, OpenStack) and “Bare Metal”]
“Why is it so hard to scale up this infrastructure?”
“We need to claw back our apps from AWS.”
“Data locality means I need data centers all over the world.”
“I need to consolidate data centers. How do I simplify management too?”
12. Infrastructures have unique requirements
Application Workloads / Platforms / Cloud, Physical & Network Infrastructures
[Diagram: each infrastructure (“Bare Metal”, cloud, etc.) needs its own sequence of Steps 1–11; orchestration groups those platform-specific steps into reusable functional roles]
Ops need to create a system-wide control fabric by composing lots of individual actions in sequence.
Orchestration
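The "composing individual actions in sequence" idea on this slide can be shown with a tiny sketch. All function, role and state names below are illustrative assumptions, not the actual orchestration tooling demoed.

```python
# Illustrative sketch of orchestration as action chaining: each step is a
# small function over shared state; a role is an ordered chain of steps;
# the orchestrator runs the chains in sequence.

def configure_network(state):
    state["network"] = "up"
    return state

def install_os(state):
    state["os"] = "installed"
    return state

def start_platform(state):
    state["platform"] = "running"
    return state

ROLES = {
    "infrastructure": [configure_network, install_os],
    "platform": [start_platform],
}

def orchestrate(role_order, roles=ROLES):
    """Chain every step of every role, in order, over shared state."""
    state = {}
    for role in role_order:
        for step in roles[role]:
            state = step(state)
    return state

result = orchestrate(["infrastructure", "platform"])
```

Because each step is an independent, composable unit, swapping one platform's step implementation does not disturb the rest of the chain, which is the "control fabric" property the slide argues for.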
13. To Vendors:
AWS Drives Operational Patterns
[Diagram: AWS at the center, ringed by Azure, GCE, IBM, RAX and DO]
Amazon is so dominant in infrastructure that their patterns (API and implementation) must be factored into any operational discussion, even if it is a physical-only deployment.
Our hybrid DevOps objective is simple: we need multi-infrastructure Amazon equivalence for ops automation.
This trend will accelerate as AWS competitors work to reduce switching friction off AWS. It is easier to recruit cloud users from AWS than from IT Ops.
14. To Enterprise IT:
AWS is disruptive but not the only choice
While AWS dominates the market, individual companies have a much more mixed infrastructure. They are starting from existing workloads.
There are many factors for IT in infrastructure vendor choice, including relationships, control and cost.
Once mono-infrastructure is dead, portability becomes critical. AWS still sets the operations standard, and that ultimately influences back into internal IT.
[Diagram: AWS, alternate public vendor clouds, private clouds and internal IT]
16. It may not be pretty, but working Ops is not wrong
There are many ways to run infrastructure. Just because it’s different (or last generation) does not mean that it’s wrong.
Burning down your data center is not an effective option.
Most operators would happily migrate to new tools if it were less disruptive. The alternative is to create more operational silos.
17. Operations drives Infrastructure
[Diagram: Ops sitting between Software and Hardware]
When I worked for Dell, we thought we could sell scale cloud and big data platforms by just bundling them with some servers. Scale platforms have very high operational requirements and require automation.
This is especially true because the platforms have sub-six-month release cycles.
Selling hardware or software without an operational story will frustrate customers.
18. Hybrid DevOps
This is not just technology! Good hybrid design is about process, discipline and culture.
We cannot rely on Configuration Management to create portability. The current patterns create brittle towers of vertically wired automation.
Robust designs require a composable, modular design. Composable designs require orchestration for action chaining.
[Slide annotations: “Gets Most Focus”, “Biggest Gap”]
19. Data Center Ops
Hybrid Needs Composable Parts
Deployments are always composed of a lot of moving parts. They are integrated both vertically and horizontally (not shown), so incremental changes will disrupt the whole stack.
Everything is always changing. Robust deployments must be built with composable modules so that they can be fault tolerant and resilient to change.
It is very expensive to add composition afterwards!
[Diagram: an application stack (App, Mgmt Tools, Logical Net, Operating System, Infrastructure, Provisioning) shown twice: as fragile mono-integration versus interchangeable composition with Infrastructure as a Service]
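The "interchangeable composition" contrast on this slide can be sketched as layers behind a small interface, so one layer can be swapped without rewiring the stack. The class and method names here are hypothetical illustrations, not an actual product API.

```python
# Sketch of interchangeable composition: the upper layers of the stack
# depend only on an interface, so cloud and bare-metal infrastructure
# modules are drop-in replacements for each other.

from abc import ABC, abstractmethod

class Infrastructure(ABC):
    """Minimal interface the rest of the stack composes against."""

    @abstractmethod
    def provision(self) -> str: ...

class CloudInfra(Infrastructure):
    def provision(self) -> str:
        return "cloud nodes ready"

class MetalInfra(Infrastructure):
    def provision(self) -> str:
        return "bare metal nodes ready"

def bring_up_stack(infra: Infrastructure) -> list:
    """Upper layers see only the interface, never the implementation."""
    return [infra.provision(), "os installed", "logical net up", "app deployed"]
```

In the mono-integrated version, `bring_up_stack` would hard-wire one infrastructure's details into every layer; the interface is what makes the module interchangeable, and retrofitting it later is the expensive part the slide warns about.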
20. In Summary:
● Hybrid Infrastructure is new normal
● Operations can work Hybrid
● Amazon is the Ops benchmark
● Implementation choices matter
● Invest in making DevOps composable
My blog http://RobHirschfeld.com
@zehicle on Twitter