10. Self Managed Source Control
• Self-Installed EC2 Instance
• Use Community AMIs
• AWS Marketplace
11. • Relevant AWS Services
• Source Control
• Development Environments
• Test
• Agile Theory: Continuous Development, Integration & Deployment
12. Development Environment via CloudFormation
• Virtual Private Cloud (VPC)
• Template-Related Resources
• Integrate with Configuration Management Tools (Puppet & Chef)
• Provide CloudFormation Templates Internally to Developers
• RDS Example
• VPC Example
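As a sketch of how such a template might be handed to developers, here is a minimal Python helper that emits a CloudFormation template body declaring a VPC. The logical resource name `DevVPC`, the description and the CIDR default are illustrative assumptions, not values from the deck:

```python
import json

def dev_vpc_template(cidr="10.0.0.0/16"):
    """Build a minimal CloudFormation template (as a dict) declaring a VPC.

    The logical name "DevVPC" and the CIDR default are placeholders; a real
    internal template would add subnets, RDS resources, tags, etc.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Developer VPC environment",
        "Resources": {
            "DevVPC": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": cidr},
            }
        },
    }

# The JSON string is what would be passed to CloudFormation as TemplateBody.
template_body = json.dumps(dev_vpc_template(), indent=2)
```

Distributing a helper like this internally gives every developer the same reproducible environment definition.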
13. Replicating Production Environments in Development
Why
• Accurate Performance Testing
• Empower Developers to Experiment
• Production Debugging
• Improved Code Quality
How
• Adopt Infrastructure as Code Strategy
• Leverage AWS APIs
• Utilise Amazon Relational Database Service (RDS) and Point in Time Snapshots
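The "leverage AWS APIs" point above can be sketched in code. This is a minimal, hypothetical illustration of cloning a production database via a snapshot; the function assumes a boto3-style RDS client exposing `create_db_snapshot` and `restore_db_instance_from_db_snapshot` (both real boto3 calls), and the identifiers are placeholders. A small recording stub stands in for a live client so the sketch runs anywhere:

```python
def clone_production_db(rds, source_id, clone_id):
    """Snapshot a production RDS instance and restore it as a dev copy.

    `rds` is expected to expose boto3-style `create_db_snapshot` and
    `restore_db_instance_from_db_snapshot`; identifiers are illustrative.
    """
    snapshot_id = f"{source_id}-clone-snapshot"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=source_id,
    )
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=clone_id,
        DBSnapshotIdentifier=snapshot_id,
    )
    return snapshot_id

class _StubRDS:
    """Recording stand-in for a real boto3 RDS client (test double)."""
    def __init__(self):
        self.calls = []
    def create_db_snapshot(self, **kw):
        self.calls.append(("create_db_snapshot", kw))
    def restore_db_instance_from_db_snapshot(self, **kw):
        self.calls.append(("restore_db_instance_from_db_snapshot", kw))

stub = _StubRDS()
snap = clone_production_db(stub, "prod-db", "dev-db")
```

With a real client the same function would give each developer a point-in-time copy of production data to experiment against.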
14. • Relevant AWS Services
• Source Control
• Development Environments
• Test
• Agile Theory: Continuous Development, Integration & Deployment
15. Test Scenarios
Unit Tests
Smoke Test
User Acceptance Testing (UAT)
Integration Test
Load & Performance Test
Blue / Green Test (A/B)
17. Testing in the Cloud
Unit
Integration
Functional
Performance
18. Testing Approach
• Use either an AMI or CloudFormation Template matching Production
• Leverage Continuous Integration Server Pipeline (see next section)
• Automate and repeat the process using the AWS APIs
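Automating that approach with the AWS APIs might look something like the sketch below: launching a fleet of test instances from a production-matching AMI. The function mirrors the boto3 EC2 client's real `run_instances` call; the AMI id, instance type and the stub client are illustrative assumptions so the sketch runs without credentials:

```python
def launch_test_fleet(ec2, ami_id, count, instance_type="t3.micro"):
    """Start `count` instances of a production-matching AMI for a test run.

    `ec2` mimics the boto3 EC2 client's `run_instances`; the AMI id and
    instance type here are placeholders.
    """
    resp = ec2.run_instances(
        ImageId=ami_id,
        MinCount=count,
        MaxCount=count,
        InstanceType=instance_type,
    )
    return [inst["InstanceId"] for inst in resp["Instances"]]

class _StubEC2:
    """Fake EC2 client returning predictable instance ids."""
    def run_instances(self, ImageId, MinCount, MaxCount, InstanceType):
        return {"Instances": [{"InstanceId": f"i-{n}"} for n in range(MaxCount)]}

ids = launch_test_fleet(_StubEC2(), "ami-12345678", 3)
```

Because the whole run is a single repeatable call, the same script can spin the environment up before a test cycle and tear it down afterwards.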
42. Next Steps
• Talk to your Account Manager
• Access our Solution Architects
• Check out our Webinars and Podcasts
• Reference our SlideShare Presentations
47. amazon web services
http://aws.amazon.com
Joe Ziegler, Technical Evangelist
zieglerj@amazon.com
@jiyosub
Please Fill Out the Feedback Form
Editor's notes
S3cmd backup; Jungle Disk
CloudFormer template creation tool
MANY TYPES OF TESTING - NOT ALL AUTOMATEABLE. DIAGRAM FROM BRIAN MARICK. STATES WHAT SHOULD BE AUTOMATED.
---
Testing is a wide and open church, and there are many different types of testing which can be freely automated. In fact, some types of testing can only be done in an automated fashion. This diagram is from noted Agile tester Brian Marick, and it breaks the common types of testing that Agile teams do into four quadrants. I like this diagram because it does a good job of identifying lots of different types of testing that teams do, but it also gives an opinion on which types of tests are best suited to automation or manual testing.
SUBSET OF AUTOMATEABLE TESTS FOR AWS. ALL CAN - NOT ALL SHOULD. UNIT TESTS - NO: DEV LOCAL, TEST LOCAL. INTEGRATION TESTS - PERHAPS. FUNCTIONAL TESTS - YES, ESP. FOR PARALLELISATION. PERFORMANCE TESTS - DEFINITELY. PERF TESTING NOT 24/7 - SCALE DOWN AGAIN AS WELL.
---
But only a subset of those types of automated test are well suited to cloud platforms like AWS. Obviously, everything which can be automated can run happily on EC2 instances inside AWS, but if we consider the elastic, pay-as-you-go model that AWS supports, there are a couple of specific types of automated testing that are really in the sweet spot.

Unit tests are not a great choice to run solely within the cloud. Their principal consumer is the developer who writes them, who wants instant feedback that the changes they are working on are correct and haven't caused any regressions. Assuming the developer is coding locally, the unit tests should run locally as well, instead of in the cloud.

Integration tests can benefit from running in a cloud environment, especially if it gives you a better chance of replicating the production environment than running locally. Functional tests are even better suited to AWS because the elastic nature of the platform provides the capacity to scale up your testing infrastructure and quickly execute large numbers of tests in parallel. Functional tests for web applications tend to be slow in nature, so many teams turn to parallelisation to ensure their test suite continues to run quickly. This can be especially useful if you are testing across multiple browser types.

The broad category of performance tests is also an ideal candidate for cloud-based testing, again because of the ability to cheaply spin up large numbers of test clients to simulate traffic against your application.
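The parallelisation idea above can be sketched without any AWS machinery at all. This is a minimal, assumed-shape runner using Python's standard `concurrent.futures`: each test is modelled as a zero-argument callable returning True on pass, and the worker count is an illustrative default rather than anything from the talk:

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite_in_parallel(tests, workers=8):
    """Run independent functional tests concurrently and collect results.

    Each element of `tests` is a zero-argument callable returning True on
    pass; results come back in submission order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda t: t(), tests))
    # The suite passes only if every individual test passed.
    return all(results), results

passed, results = run_suite_in_parallel([lambda: True, lambda: 1 + 1 == 2])
```

On AWS, the same fan-out pattern is typically applied across many instances (or browser nodes) instead of threads, which is where the elastic capacity pays off.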
And because performance and functional testing are not 24/7 activities, AWS provides the ability to quickly scale the testing fleet back down when the test cycles are over. This reduces the need to keep capacity available to cover the maximum load, which is what would be required if you were doing this testing out of your own data centres.
CI -> ROBOT TESTER. NOT COVERED BY NAME... INTEGRATION = BIG RISK IN S/W DEV. USUALLY LATE, LONG, HARD TO ESTIMATE, MANY DEFECTS. AGILE -> "HURTS -> MORE OFTEN". CULTURAL CONVENTION + SOFTWARE. MANY TIMES/DAY. INC. FREQUENCY -> DEC. BATCH SIZE -> DEC. RISK.
---
At its simplest, you can think of Continuous Integration (or CI, as it is popularly abbreviated) as a robot tester whose sole job is to monitor changes to the codebase and run quality checks each time changes are detected.

That definition isn't really conveyed by the name "CI", though. The "integration" aspect comes from the fact that one of the biggest risks of software development at scale arises when streams of code developed in isolation are integrated. Traditionally, this integration happened late in the piece, especially if different teams were tasked with building different parts of a larger system. These delayed integration activities were usually lengthy, difficult to estimate and often produced many defects fundamental to the design of the software, requiring expensive and time-consuming fixes.

The Agile perspective on these sorts of things is "if it hurts, do it more often", hence the notion of Continuous Integration. By using a combination of cultural convention within the development team and software support, even large teams can integrate disparate code streams multiple times a day. By integrating daily, hourly or even more frequently, the size of each change to be integrated is much smaller than when you are integrating weeks or months of work at a time. Smaller change sets mean quicker and simpler integration, and faster feedback should any problems occur.
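The "robot tester" loop can be sketched in a few lines. This is a hypothetical, simplified tick of a polling CI server: `get_head` and `run_checks` stand in for real VCS and build hooks (Jenkins and friends do this with webhooks and richer state, so treat this purely as an illustration of the idea):

```python
def ci_poll_once(get_head, last_seen, run_checks):
    """One tick of a minimal CI loop: build only when the head revision moves.

    `get_head` returns the current revision id and `run_checks` runs the
    build for a revision, returning True/False; both are placeholders for
    real VCS/CI integrations.
    """
    head = get_head()
    if head == last_seen:
        return last_seen, None           # nothing new to integrate
    return head, run_checks(head)        # small change set -> fast feedback

# A new revision triggers a build; an unchanged head does nothing.
rev, result = ci_poll_once(lambda: "abc123", "old000", lambda r: True)
same, nothing = ci_poll_once(lambda: "abc123", "abc123", lambda r: True)
```

The point of the sketch is the frequency argument: the more often this tick fires, the smaller each integrated change set is.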
QUALITY: RUNS, TESTS, INTERNAL QUALITY. NEED BINARY PASS/FAIL STATUS.
---
So the basic elements of a Continuous Integration environment are quite simple. You need:
- A version control system which is the truth source of all changes to your codebase.
- Some form of continuous integration software. There are lots of options here: some free, some commercial, some suited to small-scale CI, some targeting large-scale environments. The one we'll see later in the demo section is called Jenkins, which is a well-known, mature, free CI server with an active community supporting it.
- Some notion of what "quality" means with respect to the codebase being integrated. At its most basic level, you can consider an application to have passed CI if the application runs in any sense. Most teams, though, will at least have a suite of automated tests to run against their application. Usually these tests will provide a combination of white box, grey box and black box testing. Beyond that, many teams also attempt to make statements about the internal quality of their code by using quality metrics tools to ensure the amount of complexity and duplication within the codebase is kept to a minimum. All of these tests for quality can be invoked within a CI environment and combined to provide a binary "pass" or "fail" report for each build of the software.
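Combining many quality checks into one binary verdict is the easy part, and a tiny sketch makes the rule explicit: any single failing check fails the whole build. The check names here are invented examples, not a prescribed set:

```python
def build_status(check_results):
    """Combine individual quality checks into one binary CI verdict.

    `check_results` maps a check name (unit tests, integration tests,
    quality metrics, ...) to a pass/fail boolean; one failure fails the
    whole build.
    """
    return "pass" if all(check_results.values()) else "fail"

status = build_status({"unit": True, "integration": True, "metrics": True})
bad = build_status({"unit": True, "integration": True, "metrics": False})
```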
LOTS OF OPTIONS FOR PROG PROV. AWS -> CLOUDFORMATION. CUSTOM LANGUAGES: CHEF, CFENGINE, PUPPET. FIGHTS CONFIGURATION DRIFT.
---
There are a variety of different ways to do programmatic provisioning. AWS has its own macro provisioning service called CloudFormation, which uses JSON templates to declare how a group of AWS components should be created and configured. At a lower level, there are a number of popular languages which specialise in environment specification. One of these is Puppet, which we'll look at in detail fairly shortly. At this point, let's just concentrate on the high-level operations of Puppet.

Here's the basic workflow of a Puppet-managed environment:
- Each machine managed by Puppet has its configuration codified using Puppet's declarative Ruby-based language. Typically, there will be groups of machines serving the same role (e.g., web server, application server, etc.).
- When each of these machines connects to Puppet, the Puppet instance determines whether the machine's current configuration is correct according to the declaration. If the configuration is lacking in any form, Puppet adjusts it accordingly, abstracting the details of exactly "how" the configuration is done.
- Over time there is the risk that the configuration will drift from the specification, usually because people have been manually updating the server. Puppet continues to monitor the configuration of each of its machines, and as soon as it notices an inconsistency, it automatically re-applies the specified configuration.

So let's have a look at how that happens in practice.
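The converge-against-drift behaviour described above can be illustrated with a toy model. This is not Puppet's language or API, just a sketch of the idea using plain dictionaries: compare desired state to actual state, and re-apply whatever has drifted (the `nginx`/`port` settings are invented examples):

```python
def detect_drift(desired, actual):
    """Return the settings where `actual` has drifted from `desired`."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

def converge(desired, actual):
    """Re-apply the desired configuration, Puppet-style (sketch only)."""
    drift = detect_drift(desired, actual)
    actual.update(drift)  # a real tool would install packages, edit files, etc.
    return drift

desired = {"nginx": "installed", "port": 80}
actual = {"nginx": "installed", "port": 8080}   # someone hand-edited the server
changed = converge(desired, actual)
```

Run repeatedly, this loop is what keeps a fleet's configuration from drifting away from its specification.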
DIFFERENT WAYS PROVISIONING HAS HAPPENED. MANUAL - LOTS OF HUMAN DECISIONS. SCRIPTED - PIECEMEAL, NOT SHARED. PROGRAMMATIC - S/W ENG DISCIPLINE: VERSION CONTROL, TESTED.
---
Because environments and services have always needed to be provisioned, historically there have been a number of ways this has happened. At the most basic level there is manual provisioning, which still uses computers but also involves a large amount of human decision-making and input, even if there are written instructions to follow.

In all but the most basic of environments, some form of scripting is applied to remove some of the human-error risk from the deployment process. Typically, these scripts will be patched together using a variety of languages and approaches, and often kept safe and sound by the person who wrote them.

Full infrastructure-as-code programmatic provisioning takes the discipline agile engineers apply to their source code and transfers it to the code used to specify infrastructure. The languages used for this coding are generally customised specifically for infrastructure. The scripts built with these languages are maintained in version control, and many of them can be the subject of automated testing, just like application code.

And as you move further along this path of maturity, the speed of your provisioning increases; likewise, the repeatability and reliability of the process increase as well.
And when we look at the lower-level activities that are typically part of the provisioning process, we can start to see where some of the benefits of running on a platform like AWS come from, in terms of the amount of provisioning work required, irrespective of whether you're doing this in a manual or programmatic fashion.

Typical bootstrapping activities that occur as part of provisioning an environment include work around the hardware components (racking and stacking servers), configuration of the network elements, storage and compute components, and the base operating system installation and configuration for each of the servers.

Running on AWS removes the need to do most of that work, certainly at a detailed level. Leveraging the existing compute, storage and network capabilities of AWS means your provisioning activities tend to start higher up the pyramid.