A talk by Martin J. Logan (@martinjlogan) on how to implement the technical and cultural changes required for continuous software delivery. The tools covered include version control, config management, continuous integration, packaging, and a few more.
1. WebOps, Into the Cloud, Agile,
DevOps, etc...
Modern Software Delivery
By
Martin J. Logan
@martinjlogan
martinjlogan@devops.com
2. Who is Martin J. Logan
Author. This is my book
Happily employed here
I founded this because I love DevOps
Created this because I love Erlang
Run this (meetup.com/devops)
Package Management built here
7. Lessons of Agile
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
12. The Tool Chain
• Version Control
• Artifact Repo
• Config Management
• Packages
• Monitoring
• Virtualization/Cloud
• CI
• Simple Deployment Scripts
13. Simple Happy CD
• Write code for a couple of hours
• Check-in. The process runs unit tests and flags the code for review
• Code is reviewed and merged with the main line
• Mainline commit triggers a full build and a full unit test run. Code is then
packaged and pushed to staging.
• A new instance (or instances) is provisioned in staging for the artifact to be installed on
• Tests are run (performance, UI, API) in staging and, if all pass, the package is
installed into production
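The steps above can be sketched as one pipeline script. This is a minimal illustration, not any particular CI product; every stage body here is a placeholder for whatever your CI server actually runs.

```shell
#!/bin/sh
# Hypothetical CD pipeline sketch -- each stage is a stand-in for a
# real command (make test, rpmbuild, a deploy script, etc.).
set -e  # any failing stage stops the pipeline

run_stage() {
    name=$1; shift
    echo "== stage: $name =="
    "$@" || { echo "stage '$name' failed; stopping pipeline"; exit 1; }
}

unit_tests()    { echo "running unit tests"; }            # e.g. make test
build_package() { echo "building versioned package"; }    # e.g. rpmbuild, dpkg-deb
push_staging()  { echo "pushing package to staging"; }
staging_tests() { echo "performance/UI/API tests in staging"; }
install_prod()  { echo "installing package into production"; }

run_stage "unit tests"    unit_tests
run_stage "build+package" build_package
run_stage "staging"       push_staging
run_stage "staging tests" staging_tests
run_stage "production"    install_prod
```

The point of the structure is that production install is unreachable unless every earlier stage passed.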
This talk is only for people that operate AND develop websites or online services.

Websites need to change quickly, not once every 6 months.
If you don't leverage virtualization/cloud, CI, config management, and a host of other advancements, your competitors will.

These techniques don't just apply to the big shop. They apply to small open source shops and projects as well.
The barrier to entry is larger for a large shop with legacy (but doable for sure).
Without knowing your history it is hard to create your future.
Some of you still in school may want to evaluate which companies you go to based on their process.
Software was treated just like other complex engineered things, like buildings or bridges.
This was doomed to be uncompetitive, because it ignored the efficiencies that the software medium could give us.
Metal is hard to iterate on.
But that type of engineering was what we knew, so we built our processes on top of it.
The process consisted of phases and gates:
start out and figure out what the customer wants, then produce detailed specs, then develop, then test, then bug fix, then release.

Anyone ever heard of RUP? (tell story of Ron the CFO/CTO)

- Turning around was like turning a ship around in a canal.
- Going back required a change request (pain).
- Everything had to be just right. And we all know "just right" takes a long time.
Imagine trying to run a business this way.
We will design it and then open it.
And we will not change no matter what the customer says.
The reaction/advancement over all this was the agile movement.
We recognize the qualities of our medium and we exploit them.
Complex systems require empirical process control.
Your competitors are changing quickly.
Again, you may want to interview companies on this stuff. If they are not agile, or have lots of shared, siloed teams, you may want to weigh that into your decision.
Break down the walls and put everyone on the same team.
Say that "done" means working software was tested, not that some document was created. We all own done.
But what is missing? (next slide)
During the agile period we could not figure out how to get ops worked into the equation. (next slide is why)
Agile closed the loop between design, development, and testing. The full circle, however, includes operations as well.
Why didn't we do this in the first place?
A technical bridge is needed, not just process.
There are major cultural hurdles,
and misaligned incentives and local optimization. (Ops is rewarded for keeping the site up, protecting it from change; dev seeks to change the site.)
Reduce work in progress (WIP). Push everything out.
When you are starting out, don't make this all too complex.
Next we will talk about what it takes to get each of these steps going and which open source tools can support you.
Version control should house all your code. This includes your infrastructure (server settings and so on).
Version control should not house binaries (build them once only).
How to use: you want to commit frequently, not work on branches for a long time.
Use a commit script, or a CI extension, to ensure your unit tests run before sub-mainline commits.
Explain Erlware methodology.
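The "run unit tests before committing" idea can be sketched as a Git pre-commit hook (a script dropped into `.git/hooks/pre-commit`). The test runner here is a stand-in; substitute your project's real command (e.g. `make test`).

```shell
#!/bin/sh
# Sketch of a pre-commit hook that blocks a commit when the unit
# tests fail. run_tests is a placeholder for your actual runner.

run_tests() {
    # stand-in for: make test
    echo "running unit tests"
}

if run_tests; then
    echo "tests passed; commit allowed"
else
    echo "tests failed; commit rejected" >&2
    exit 1   # a non-zero exit from the hook aborts the commit
fi
```

Git only consults the hook's exit status, so any test command that exits non-zero on failure works here.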
You want everything to install the same way. Consistency is key.
Jerry, a smart guy back at my first job, concluded that the best package manager for a given system is the "native" one, i.e. deb for Debian/Ubuntu, rpm for Red Hat, ports for BSD, etc...
You build a package and then store it in a binary package repo system, not version control.
For Erlware/Faxien this is just a filesystem fronted by a webserver. For example /ostype/glibcvsn/hardware/vmtype
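A filesystem-backed binary repo like the /ostype/glibcvsn/hardware/vmtype layout above can be sketched in a few lines of shell. The repo root and the coordinate values here are illustrative, not the real Erlware/Faxien paths.

```shell
#!/bin/sh
# Sketch: publish a built package into a filesystem binary repo
# laid out as <repo>/ostype/glibcvsn/hardware/vmtype.
set -e

REPO=${REPO:-/tmp/pkg-repo}                 # illustrative repo root
OS=linux GLIBC=2.17 ARCH=x86_64 VM=beam     # example coordinates

publish() {
    pkg=$1
    dest="$REPO/$OS/$GLIBC/$ARCH/$VM"
    mkdir -p "$dest"
    cp "$pkg" "$dest/"
    echo "published $(basename "$pkg") to $dest"
}

# demo: build a dummy package file and publish it once
echo "fake package contents" > /tmp/myapp-1.0.0.tar.gz
publish /tmp/myapp-1.0.0.tar.gz
```

Fronting that directory tree with any webserver gives every machine the same URL scheme for fetching a package built exactly once.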
A big batch commit with a bug, while others are committing, causes more bugs; eventually the whole thing must be frozen and the bug fixed. Bad juju.
So we deploy small batches often and we get better. We also keep our inventory of work low. Unintegrated code does nothing for us.
We need feature flags if everything is going to production. We do this at Orbitz on my ISM team.
Every time we find a bug we add a new test. No bugs twice.
Set up the server, and start with a single test. Don't use your old tests (they probably suck). Work up to TDD with new tests.
Whenever the build is broken, you don't allow folks to check into CI unless they use the override comment to indicate they are fixing the build (a broken build is everyone's problem). Leaving CI red all the time defeats the purpose.
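A feature flag, in its simplest form, is just a runtime switch checked by the code path. Here is a minimal sketch assuming a flags file of `name=on|off` lines; the file path, format, and flag names are all made up for illustration (this is not the Orbitz mechanism).

```shell
#!/bin/sh
# Sketch: code ships to production "dark" and is switched on via a
# flags file, one "name=on" or "name=off" entry per line.

FLAGS_FILE=${FLAGS_FILE:-/tmp/feature_flags.conf}

flag_enabled() {
    # true only if the named flag is explicitly set to "on"
    grep -q "^$1=on$" "$FLAGS_FILE" 2>/dev/null
}

# demo: new checkout flow is deployed but off; new search is live
printf 'new_checkout=off\nnew_search=on\n' > "$FLAGS_FILE"

if flag_enabled new_search; then
    echo "serving new search"
else
    echo "serving old search"
fi
```

Because the default is "off", unfinished work can merge to mainline and deploy continuously without being visible to users.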
Consistency first, then automation, then increase the power (same process for all people in all environments).
Make it work, make it beautiful, optimize where needed (if you have to).
Copy and install packages to version-specific dirs on target machines.
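Installing into version-specific dirs usually pairs with a "current" symlink, so switching (and rolling back) a release is just re-pointing the link. The paths below are illustrative, assuming a Linux-style `ln -sfn`.

```shell
#!/bin/sh
# Sketch: each release lives in its own versioned directory; a
# "current" symlink selects the live one. Rollback = re-point link.
set -e

APP_ROOT=${APP_ROOT:-/tmp/myapp}   # illustrative install root

deploy() {
    vsn=$1
    dir="$APP_ROOT/releases/$vsn"
    mkdir -p "$dir"
    echo "app build $vsn" > "$dir/app"   # stand-in for unpacking the package
    ln -sfn "$dir" "$APP_ROOT/current"   # switch the live version
    echo "now serving $vsn"
}

deploy 1.0.0
deploy 1.0.1   # upgrade: the 1.0.0 dir stays around for rollback
```

Every environment (dev, staging, production) runs this same script, which is the consistency point above.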
Capacity management and disaster recovery are two tough nuts to crack.
They can be handled by the cloud and by configuration management.
CM systems allow you to treat your infrastructure as code. Check in your OS configurations. If you lose everything you can provision new machines to just the right specs using CM.
Provisioning like this requires programmatic control over machines. This means virtualization and the cloud. Why stop there? If we can provision machines, then if we tie monitoring into the whole thing we can provision machines on the fly as we need them using the cloud and CM.
Amazon has free instances for you to play with. Signing up is easy.
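"Infrastructure as code" looks roughly like this in Puppet, one of the open source CM tools in this space (Chef and friends work along the same lines). This is a generic illustrative fragment, the kind of file you would check into version control alongside your application code:

```puppet
# Keep a service installed, configured, and running; Puppet
# converges the machine to this state on every run.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf',
  require => Package['ntp'],
  notify  => Service['ntpd'],   # config change restarts the service
}

service { 'ntpd':
  ensure => running,
  enable => true,
}
```

Because the desired state is declared rather than scripted imperatively, a freshly provisioned cloud instance converges to the same spec as every existing box, which is what makes disaster recovery and on-the-fly capacity practical.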
Start simple. Get one or two alerts configured for paging, email, or SMS.
Same as CI: when a page goes off, no check-ins allowed for that host (broken production is everyone's problem).
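A "start simple" monitor can literally be one probe and one alert. Both commands below are placeholders (the probe would really be an HTTP health check, the alert a mail/SMS gateway call); the file-based probe just makes the sketch self-contained.

```shell
#!/bin/sh
# Sketch of a minimal monitoring check: probe the service, fire one
# alert when it is down. Run it from cron or a monitoring daemon.

probe() {
    # stand-in for e.g.: curl -fsS http://localhost:8080/health
    [ -f /tmp/site_is_up ]
}

alert() {
    # stand-in for e.g.: mail -s "site down" oncall@example.com
    echo "ALERT: $1"
}

touch /tmp/site_is_up          # demo: service is healthy
if probe; then
    echo "site ok"
else
    alert "health probe failed"
fi
```

Once one alert like this is trustworthy, add more; noisy or ignored alerts defeat the purpose the same way a permanently red CI does.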