1. Reflections on an 18-Month Federal DevOps
Transformation
August 4, 2015
Dan Craig
Director, Agile Services
Agile2015
2. A Bit About Me…
• 2001 Started working on agile projects
– Exclusively commercial sector
– Held almost every role, but project management fit best
• 2009 First DevOps transformation work
– First work with Federal Government (DISA)
– Served as agile & process re-engineering SME
• 2013 Started my current engagement
– Large, Highly-Visible Office in Commerce Department
– Solidify agile practices; initiate release automation
3. Obligatory DevOps Definition
• DevOps is the natural extension of 15 years of agile thinking
• DevOps is primarily about:
– Communication/Collaboration
– Automation
• Advent of tools enables organizations to adopt DevOps:
– Cloud Technology
– Puppet / Ansible / Chef
4. A Great Picture
• Businesses Demand Change
– Fast time to market
– Innovation
– Differentiation
• Businesses Need Stability
– Perfect up-time
– High quality
– Bullet-proof security
These two demands are necessary, but cause conflict using
traditional delivery mechanisms!
6. The Organization
August 13, 2015 6
• Large Federal Office
• Forward-thinking CIO
– Commercial background
– History of innovation
• Leadership that already worked
well with each other
• 170 Internal/External Systems
7. The Situation
• Became visible at height of
Recession
• Congressional funding of
NextGen
• Delays in NextGen delivery
• Visible production outages
• Issues
– Manual CM
– No Automation
8. The Team
• Small Digital Services team
– 7 cross-functional team members
– 1 Government PM
• Strong Leadership Support
• Access to All Layers of Organization
• Flexible Contract Vehicle!
9. Initial Engagement Objectives
• Create CICM Platform
– Leverage 100% Open Source Software
– Enable external user access
• Migrate 3 NextGen Programs onto CICM
– Institute CM
– Automated builds
– Automated Deploys
– Nightly testing
• Support Agile Practices
– Training/Mentoring
– Documented best practices
• Prepare for Rollout to all NextGen
Our Plan
• Months 1-3
– Set up CICM
– Migrate systems
• Months 4-6
– Document learning
– “Harden” mechanisms
10. Our Planned Approach to DevOps Transformation
[Diagram: a maturity ladder built from the bottom up – Configuration Management → Agile Development → Application Release Automation (ARA) → DevOps Practices → Continuous Delivery – with the annotation “We want to get here, then solidify things!”]
11. The Initial Platform
Jenkins
Continuous Integration (CI) tool that automates and manages
building and testing of software
Subversion
Version control system that manages files and directories and tracks
changes made to them over time.
Nexus Sonatype
Repository Management tool stores binary software components
used during development, build and deployment
SonarQube
Reporting Dashboard that publishes Continuous Inspection metrics
gathered during build and analysis of software
12. How the Platform Worked
[Diagram: Delivery teams check NextGen code out of, and commit it to, the CICM platform. The platform publishes dependencies and artifacts, deploys to SIT via shell scripts, and tests with Selenium; the FQT, PVT, and PROD environments sit beyond SIT.]
13. Early Transformation
• Migrated 3 large programs
• Modernized CM practices
• Automated builds & SIT deployments
• Proof-of-concept Selenium testing
• Focused teams on Sonar metrics
• Passed several internal audits
• Established initial development standards
• Started to include more organizations in the effort
Showed the organization that this was actually possible.
Confidence was building!
14. Early Problems
• No two projects looked the same
• Operations not participating in automation
• Deployments not addressing middleware, system or DB
• Test organization in “analysis paralysis” on test tools
• Audit organization not convinced of robust traceability
• Procurement saying automation demands out-of-scope
15. The “Perfect Storm” Upsets our Path
• Increasing rumors of “silver
bullet” solution on CICM
project
• Two upgrade-related outages
on critical Legacy system
costing millions
• Demands from business for
more focus on Legacy vice
NextGen
• Pressure from Congress to move faster on NextGen
• CIO desire to eliminate license costs
16. A Critical Decision Point…
• Option #1 - Stay the Course
– Continue focus on NextGen
– Spend 3-6 months “hardening gains”
– Prepare for enterprise use
- OR –
• Option #2 - Full Steam Ahead
– Expand focus to include Legacy
– Defer “hardening” in favor of onboarding
– “Build the plane as we fly”
What do you think they chose to do?
17. Going into Battle!
New objectives included:
• Begin migration of all
projects onto new platform
• Assess their viability for
automation
• Ensure limited deployment-related issues
• Fix problems with testing
• Sunset IBM Rational Suite
• Support new operations
puppet initiative
Oh Yeah, and…
No additional staff!!
18. Enhance & Extend the Platform
[Diagram: Delivery teams – now both NextGen and Legacy – check out and commit against the CICM platform. The platform publishes dependencies and artifacts, deploys to SIT via Ansible (with FQT, PVT, and PROD beyond), and tests with Selenium, TestComplete, SoapUI, LoadRunner, and WebInspect across 50+ Jenkins slaves. An OSS ALM was added, along with admin features:]
• Automated Upgrades
• Jenkins Templates
• LDAP Groups
• Labels
19. Our Ansible Decision
Ansible best suited our need for application
deployments:
• Agentless
• Intended for non-Developers/System Admins
• Shallower learning curve
• Provides full Orchestration
• Clean Division of Interests:
– Engineering maintains Playbook (the “what”)
– PSB maintains Inventory (the “where”)
– CICM maintains Platform (the “how”)
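As a minimal sketch of that division of interests (every hostname, path, and variable name here is invented for illustration, not taken from the actual platform):

```yaml
# playbook.yml – maintained by Engineering (the "what")
- hosts: app_servers
  tasks:
    - name: Copy the application bundle onto the target hosts
      copy:
        src: "{{ bundle_path }}"   # variable supplied per release
        dest: /opt/app/releases/
    - name: Restart the application service
      service:
        name: myapp
        state: restarted

# inventory (a separate INI file) – maintained by PSB (the "where"):
#   [app_servers]
#   sit-app-01.example.gov
#   sit-app-02.example.gov
#
# CICM runs:  ansible-playbook -i inventory playbook.yml   (the "how")
```

Because the inventory is a plain hosts file, the operations side can retarget a deployment (SIT vs. FQT vs. PROD) without touching any playbook logic.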
20. Creating the Transformation “Playbook”
How to handle flood of projects:
• Formalize platform processes
• Shore up agile processes
• Align organization
• Get the word out
21. Focus on Process
• Migration to the platform
– Vetting Projects for “fit”
– On-boarding
• Administration of Platform
– Enhancement Requests
– Frequency of Deployments
• Knowledge transfer
– Audits to ensure continued
maturity
– Internal “certification”
22. Fine-Tune the Release Process
• Introduced Bundling
– “Build Once – Deploy
Anywhere” (BODA)
– All config defined up front
• Orchestration of non-automated
components
– DB
– Middleware
– System
• Defined metrics to gather & retention policy
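The BODA idea can be sketched with a toy shell example (all paths and names are invented; this is not the platform’s actual tooling): the artifact is produced exactly once and promoted unchanged, while only external per-environment config differs.

```shell
# Toy demonstration of "Build Once - Deploy Anywhere": the artifact is built
# once, its checksum never changes, and only the external per-environment
# config file differs. All paths are scratch/illustrative.
work=$(mktemp -d)
mkdir -p "$work/build" "$work/sit" "$work/prod"

# "Build" the artifact exactly once
echo "application v1.0" > "$work/build/app.bin"
sum=$(cksum "$work/build/app.bin" | awk '{print $1}')

# Per-environment config lives outside the bundle
echo "db=sit-db"  > "$work/sit/env.properties"
echo "db=prod-db" > "$work/prod/env.properties"

# "Deploy" the same bundle to both environments
cp "$work/build/app.bin" "$work/sit/"
cp "$work/build/app.bin" "$work/prod/"

# Verify the deployed artifact is bit-identical everywhere
sit=$(cksum "$work/sit/app.bin" | awk '{print $1}')
prod=$(cksum "$work/prod/app.bin" | awk '{print $1}')
[ "$sit" = "$sum" ] && [ "$prod" = "$sum" ] && \
  echo "same artifact in every environment"
```

Because the bundle’s checksum is identical in every environment, auditors can verify that what was tested is byte-for-byte what ran in production.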
23. Shore Up Agile Practices
• Mandate continuous
integration
• Report on & review Sonar
metrics
• Focus on Testing
– Unit
– Functional
• Tailored best practices
24. Align the Organization
• Vision alignment
– Quarterly leadership
– Monthly stakeholder
– Weekly platform team
• Metrics-based reviews to reinforce organizational
vision
• Project maturity scale
• Updating requirements for new procurements
25. Get the Word Out
• Marketing Campaigns
– Newsletter
– Platform Knowledge Center
– DevOps Days etc.
• Education
– Brown Bags
– Classroom & Online Training
– Targeted Audit Reviews
• Procurement
– New Demands on
Contractors
– Measurable Results
26. Starting to Make Serious Impact
• 70+ projects migrated to the Platform
– All portfolios represented
– Heterogeneous technologies
• Clear Transition from Pilot to
Enterprise
– 1500+ Automated Builds Daily
– 10 Projects with Ansible
Deployments
• No Reported Issues with CM or
Deployments from Platform
• Cutting costs by moving onto the Open Source Platform
But Operations Still Not Comfortable with Automated Deployments to Post-Development Environments!
27. A Second Perfect Storm??
• Legacy outage forces
Operations to recover using
Ansible
• First ever “organic” DevOps
deployment occurs
• Growing pressure from
Business and Engineering to
leverage new automation
• Gene Kim and other thought leaders come speak about taking
risks in DevOps
Permission granted to automate deployments to production!
28. It’s Been 1 Year Since That Breakthrough…
• All 170 Active Systems Under CM
• Most popular technologies supported
• Standards & best practices defined
• Application Release Automation (ARA) in place
Production deployments aren’t continuous – but they are
common!
• DevOps Culture taking root
• Production outages increasingly rare
• Organization nearly able to staff CICM roles as O&M rather than high-end consulting
29. What We Are Working on Now
• Platform
– Introducing tools to
mature platform
– Introduction of a
Pipeline View
– Central
Administration
– Self-provisioning of
cloud resources
– Audit reporting
– CMDB
30. What We Are Working on Now
• People
– Work with Procurement on Agile contracting
– Continued training; focus on Government employees
– Focused mentoring specific to functional test quality
& regression
• Process
– Reinventing release process
• Orchestrate hybrid Puppet & Ansible deployments
• Addressing database
– Institutionalize metrics-based reviews (“virtual ORR”)
– Satisfy audit & oversight requirements automatically
– Determine “how far” we want to go towards CD
31. What You Should Expect on Your DevOps Journey
• DevOps Is Hard!
– Particularly hard in Federal sector
– Takes vision, courage & endurance
– You’ll have to be a salesman
– Everybody is bought in…until it impacts them
• You will have problems with:
– Automated Testing
– Getting Operations to take that first leap
• Contracts & outside SMEs can help, but nothing works without strong,
consistent support of leadership
• You need the proper foundation before DevOps works
• What works today will NOT work tomorrow…Plan for it.
• People, Process & Tools must mature in Lockstep
• ARA with good DevOps may be the finish line for Federal Enterprises
Just so you know who you are listening to:
I’ve been working on commercial agile projects since the early 2000s.
- In fact, in 2000 I left AOL to join a small startup and live “firsthand” the new lightweight process I had heard about (XP)
- I had worked on many projects before I came across books like “extreme programming explained” by Kent Beck and “Agile Software Development” by Robert Martin
- Excited to see like-minded people in the industry; it began to put a name and framework around what we were doing.
- I saw the community grow, and I find it exciting that it is now well accepted nearly everywhere
In 2009 I got a call from a friend who had just started a company called Steel Thread. He wanted some help at DISA to do some cutting edge automated deployments
- It was here that I first became acquainted with tools such as Puppet and began working towards DevOps practices (though I didn’t know the term at the time)
- In 2010 the Continuous Delivery book by Jez Humble/David Farley came out and put a name to what we were trying to do. I felt the same feeling as I did 10 years earlier reading Beck and Martin. I knew I had found the next major shift in how IT should work.
In 2013 I accepted a role to transform a large, innovative office within Commerce Department
- I’m not a professional speaker you would see on these types of circuits, but I thought it worthwhile to speak about my learning because the engagement is 1) a large enterprise and 2) in the Federal sector
Before I jump in, Everyone loves to give their definition of DevOps these days and I’m no different!
In General, I believe DevOps is about COMMUNICATION & AUTOMATION.
DevOps is a concept that shines a light on communication issues much the same as agile did in the early 2000s. Back then, the effort was to get the business talking with the development/testing team. Today we are trying to bridge the communication gap on that “last mile” between Dev and Ops.
Just as with the agile movement, I think that the rise of certain tools and technologies has greatly aided this communication. Just as the appearance and maturation of tools such as Hudson and Subversion helped bridge the gap between business, development, and test and allowed a metrics-based conversation on agile teams, so too the advent of cloud technologies, Puppet, and Ansible is giving a common language to our DevOps discussion.
But what IS the communication problem?
This is a Great picture I saw in an article by Niek Bartholomeus.
Historically, Business owners push development teams for innovation, speed and agility. Revenue and upside depends upon this. Years of pressure to move fast and deliver is baked into the DNA of development teams
They also require stability and quality. 5 9’s, no security breaches, no risk or embarrassing hacks. Security and stability are mandatory. Years of pressure to move with caution and protect assets is baked into ops teams
If I were to whisper in one horse’s ear “deliver fast” and tell the other “move stable and safe”, what happens when I say GO?
This is what we do with our businesses, and it’s only through great effort and heroics that things stay afloat.
DevOps – to me – is an acknowledgement of these contrary demands for stability and speed. It’s understanding that the only way you can get both is to get teams heading in the same direction and at the same cadence. Things must be predictable and repeatable.
I’ve come to believe it is even harder in the federal world though….
Though they share the same name “DevOps”, the landscape of the Federal Government makes growing a DevOps culture a very different problem
They have all the problems of a commercial organization:
* Finding quality staff (unicorns)
* Breaking down silos
Years of finger pointing creating a “great divide” between dev and ops
Concerns over control, security & compliance
But the Federal Government – or at least where I have served -- has some unique issues that make it a very different problem:
Federal Procurement Cycle
Contracted Staff (stuck in roles, transient, self-serving)
Unions amplify resistance from disbelievers
layers of oversight
Shifting administrations and funding priorities
People
Contracted staff locked into long term contracts
Low labor rates = body shop vs. unicorns
Contractors can often act as a bottleneck or hamper devops initiatives as they often focus on profitability and “protect their turf” as aggressively as any employee
Labor Unions often support cries from employees who disagree with the changes and the new roles that they bring
Regular shifts in administration brings unpredictable funding for large, innovative initiatives.
Process
Procurement Law
Layers of Oversight (e.g., office, agency, DHS, OMB…)
Tools
Antiquated Systems (like a walk through technology museum)
Licensing vendor “lock”
Slow adoption of Cloud Technologies
So the main takeaway from this talk is that DevOps is hard anywhere, but particularly in the Federal Government, even under the best of cases.
My current engagement – by my estimation – IS the best possible case, and we’ve learned PLENTY of lessons.
I want to take you on a walk through our first 18 months at our Federal client, describe our approach, and highlight the bumps we had along the way.
I think there is value in conversations like this, so we in the IT world can gain the same “pattern recognition” we gained in the early days of agile. We can help evolve a DevOps concept that is currently an art form into something that has commonly accepted standards and best practices.
The engagement occurred at a very visible office in the Commerce Department
Particularly visible at the height of the Recession, when the Obama administration named it as a key to getting businesses moving again
Quite a bit of new funding (and Congressional oversight!) went that direction to create the next generation of customer facing and internal applications
The CIO was very forward thinking and had experience with modern tools and technologies in the commercial sector
170 systems in some form of active development
- 140 legacy applications with a very wide variety of technologies and platforms
- 30 NextGen applications fairly homogenized to Java/Maven/Jboss, but quite a few differences in the middleware utilized
So what was the situation before we engaged?
Initial small contract to do an assessment and to develop a 3 year road map for continuous delivery
When presented to the CIO, he said “this is nice, but how are you going to stop my hemorrhaging NOW?”
So here was the situation… not all that uncommon in the federal space –
Main driver – Highly Visible Legacy Blow-Ups (CM, Deployment). Took days to get back online. Couldn’t immediately reproduce it. No chaos monkey to test with… yet
Pressure to Deliver Suite of NextGen Applications – Legacy modernization initiatives
Manual infrastructure configuration
Various methodologies/technologies
No central source for CM, Dev, Test or Deployment of Systems
Siloed organization – conflicting goals and performance objectives – very important. I’ll talk more about this later
In terms of process maturity…
Some nextgen development teams were “closet using” Jenkins and SVN, but most had never used those tools
Some Scrum with waterfall testing and release processes – infrequent releases 1x to 2x per year.
Ops teams generally supportive – helped fast-track processes to get us the resources we needed
A lot of infrastructure variance due to manual processes
Established a small Digital Services team – CICM
7 Steel Thread SMEs of various talents – PM, Architect, Sys Admin, Test Automation, Agile Coach, 2 Automation Engineers
1 Govt PM and our Task Order Manager. These two roles proved absolutely critical to our success.
Must stress the role of a strong, seasoned PM in the early days. Ours was fantastic at navigating the existing organization and process and wasn’t afraid to “break some dishes,” as she puts it.
While our team has grown a bit since the initial pilot – both on the Steel Thread and Govy side – this initial small size really worked
Also must stress the need for High level of executive buy-in and attention.
In fact, in early days we had regular (semi-weekly) meetings with executives to discuss organizational, policy, process roadblocks.
During these meetings we gave them 1-2 “homework” assignments (clearing the path) and alerted them to which organizations were about to complain (based on what we were doing)
A quick note on the initial contract engagement:
We had very good collaboration going into the contract award. Not knowing exactly what the outcomes would be at the start, the contract was written in such a way that allowed us to pivot, but also provided objective results based on continuous planning with the Govt stakeholders. Even though this was first attempt at this type of contract, it worked quite well.
Contract was for 2 years.
Ended up completing it in a little over 1 year
Phase 1 – 1 year – pilot several projects
Phase 2 – 1 year – onboard all nextgen and some number of legacy
3 projects –
One mistake was the selection of 3 of the largest projects from the most visible portfolios
~15 projects/~50 components
One project drove the financial interaction with general public so particularly sensitive to audit and traceability
pilot mentality
agile development with nightly pushes to SIT for automated testing
platform to be fully open source
access to outside
CI – establish the automation and then increase build frequency and unit test coverage
CD – move testing to the left and increase functional test coverage, perform nightly regression, load and security testing
Team enablement – so the theme was: technology was needed, but it’s not a tool problem. We had to change the way developers, testers, and PMs for that matter worked.
Our approach to DevOps & CD is steeped in our background as software developers
configuration management
- common source management
- standardization around commit, branching and tagging practices
Agile Development
- SCRUM Management framework and XP developer practices
- Build Automation > Continuous Integration & Inspection
Application Release Automation (ARA)
- packaging/bundling for deployment
- Shift testing and deployment to the left
- release to post-development environment (system, middleware, application, db)
DevOps Practices
- it takes cycles to get good at DevOps
- With initial ARA in place, can really exercise teamwork and communication required.
Continuous Delivery
Requires extremely mature processes and tools
We do this in our in-house development on very small teams, but it’s very different at scale
True CD (e.g., release trains, feature toggles, and automation all the way to production) is rarely the goal within Government organizations due to policy and oversight restrictions. This may change, but it doesn’t appear to have changed yet.
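The “standardization around commit, branching and tagging practices” mentioned above typically starts with a common repository layout; this sketch is illustrative, not the client’s actual convention:

```text
trunk/              # mainline – CI builds and inspects every commit
branches/rel-1.4/   # short-lived stabilization branch for a release
tags/1.4.2/         # immutable tag per release candidate; deployable
                    # bundles are built only from tags (assumed practice)
```

Standardizing this layout across all projects is what lets one Jenkins job template build any repository.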
Subversion
considered Git, but too advanced for the organization at the time
wanted to standardize entire organization
Jenkins
Optimized for Java/Maven via plugin
To be leveraged for builds, deploys and tests
Nexus
Manage dependencies & store build artifacts
Follow Maven build process using Snapshot & Release Repos
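The snapshot/release split is configured in each project’s POM; this fragment is a generic illustration (the repository IDs and URLs are placeholders, not the platform’s actual ones):

```xml
<!-- `mvn deploy` sends x.y.z-SNAPSHOT builds to the snapshot repository
     and tagged x.y.z builds to the release repository. URLs are invented. -->
<distributionManagement>
  <repository>
    <id>nexus-releases</id>
    <url>https://nexus.example.gov/content/repositories/releases</url>
  </repository>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <url>https://nexus.example.gov/content/repositories/snapshots</url>
  </snapshotRepository>
</distributionManagement>
```

The release repository rejects re-deploys of the same version, which is what makes a released artifact immutable and auditable.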
Sonar
Custom dashboard highlighting a few important metrics (unit tests, static analysis)
Leverage out-of-box quality profile
All used LDAP for authentication/Access Control
All managed individually – would turn into a nightmare later!
Explain Process
Deployments were a simple SSH in, using a Jenkins system user with access to all SIT boxes
Deployments were application-only; no middleware, system, or DB
Tests were written by our team as example of how they should work
No attention yet to performance, security, api or other forms of testing
Ops Disallowing deployments to post-SIT environments (even though we had access to do it)
There was a lot of good news and our stock was high in the organization.
no two projects looked the same
Each with nuanced differences in their POM structure, causing different builds
Two using custom shell deploys; one using a jenkins plugin approach
One respecting the “BODA” concept, but the other two requiring distinct builds per environment
Branching and Tagging using different convention
Operations
Shell scripts not transferable between projects
…also not transferable across environments
Didn’t want liability of automation
Test
Govies not deep in testing; esp tools
Over 130 contracted testers; but little automation experience
Stuck assessing various licensed tools not well suited for CD automation (looking for the easiest one)
Audit didn’t like “Jenkins System User” concept
Procurement: some teams complaining about burden; out of scope of existing contracts
Though there were LOTS of issues to work through, we had months to get things rolling before more projects migrated to the platform.
But…
Silver bullet
Lots of myth in halls
spent ½ time selling; ½ time talking down euphoria
CIO on outage
can CICM handle it?
At the end of the pilot initiative, we had learned a lot and had many opportunities to close gaps
At that point, however, there was a convergence of several events that shifted our direction
For better or worse, we needed to take on Legacy Project
Not exactly true, they allowed us to claim victory on our 1 year goal and pull some funding in from year 2.
We also got help from legacy CM team who moved projects across for us.
Introduced & Integrated ALM Capabilities
Rally for NextGen teams
Open Source ALM for Legacy Teams
Integrated with Subversion/GIT to provide traceability of commits to stories/requirements
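Commit-to-story traceability of this kind is usually driven by a commit-message convention that the ALM integration parses; the ID format below is invented for illustration, not the office’s actual scheme:

```text
svn commit -m "STORY-1234: externalize datasource URL for BODA bundling"
# The ALM integration scans commit messages for "STORY-1234" and links the
# commit (and the resulting build) back to that story/requirement.
```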
Expanded SCM Capabilities
Added GIT to support Puppet Based Dev activities of Operations teams
Added SCM Manager to provide convenient RBAC and Administrative control
Extended & Improved Jenkins & Sonar
Support additional technologies (.Net, NodeJS, C#, Javascript, PHP) via plugin
Added concept of Portfolios + Projects to improve user experience and maintainability
Added over 50 Jenkins slaves to serve traffic and execute TestComplete/LoadRunner etc., which are not well suited for CI automation
Retrofit existing deploys to leverage Ansible
Shell not scaling and no easy standard
Puppet required coding aptitude and was not easily understood by the Ops team
Ansible:
Agentless
Orchestration right out of the box
Recognized division of labor (ops creates the inventory file, engineering creates the playbook)
Still leveraging Puppet for system deployments
Added Custom Administrative features
Monitoring
Labeling & automatic deployment of slaves using swarm plugin
Move towards LDAP groups for scale
scans for standards on job definition
Still not allowed to deploy past SIT due to discomfort with the concept of automation!!!
With the decision made to go full steam ahead: we had spent time with mature teams and focused primarily on tools. It was time to get serious about the people and processes involved in this major culture shift
When we started, about 10 total people knew what CICM was, so we had a huge sales and marketing effort to do – lots of meetings at all levels, from project teams to executives. This was an iterative process because some of these folks liked the idea of automation but had no idea what we were talking about.
Next was the onboarding process. With a focus just on NextGen teams, the first couple of teams were relatively easy. We still had to work around project schedules, but they were already closet-building with Jenkins, so it was just a matter of moving code, getting them up and running, and tweaking processes. Others obviously needed some work. Then we started to migrate legacy systems. These took a lot of work to get their code into a state that’s CI-friendly – “buildable” by Jenkins. So there was some refactoring that had to be done first.
If you’ve ever worked in this type of environment, telling a team they have to refactor their code is like calling their baby ugly. Refactoring was a bad word. We got some bumps and bruises until we got our way. We have since moved past that, and it’s beginning to be part of the process, even though technical debt isn’t generally managed well yet. It’s funny – we didn’t think anything of the sensitivity. We refactor all the time, both technology and processes, probably to the point where to the untrained eye it looks like we don’t know what we’re doing. So we assumed all agile teams did.
Co-hosted Migration Kickoff meetings (eng, ops, cicm)
All negotiated with Process & Policy teams to ensure alignment with other parts of the organization
Initially we met with the Nextgen teams to understand the architecture and tech stacks. From here we began to develop the platform backlog of required plugins and other capabilities.
From here we developed rough objectives to get alignment with stakeholders and other teams. This included both platform and project priorities.
Then we do weekly planning where we task out the objectives. Additionally, we hold semi-annual “summits” for the CIO and his direct reports. The goal of these meetings was to provide an update on progress and to elevate organizational blockers. This venue has actually been quite effective for engaging the CIO.
The next part of getting alignment is communication. First, obviously, is our kanban wall in the team room. This displays every project in flight on a 4x6 card, what stage each is in, and some metadata. It has been quite useful when engaging with new teams and management. Kanban walls have since been adopted in people’s offices as well.
The other big communication mechanism is the bi-weekly newsletter. This turned out to be used as a status report and a stick for the teams onboarding. In it, we show highlights and new platform capabilities. We also include the status of each project that has onboarded or is in the process of onboarding – essentially a visual of our kanban wall. It became the bible for how executives evaluated how teams were adopting the new ways of doing things.
You’d think that if real-time data is provided in a dashboard automatically, people would strive to use it. It was actually very difficult to get management and teams to look at Sonar! Dev teams didn’t necessarily like it because it objectively showed the health of their application, especially unit test coverage. Ultimately, I think it’s just a little too far in the weeds for management, outside of unit test coverage. That said, it’s becoming a measure of performance for the teams, and some teams do embrace it for the fast feedback value it provides. Good stuff.
We have had a continual stream of projects onboarding for nearly 18 months, across all types of technologies. If you’re familiar with the open source tools used, it’s very Maven-centric, so getting applications to adopt the platform standards is very demanding and time-consuming. However, we now have better enforcement of the technologies required of the teams. The benefit is that auditing is simplified because we know exactly how applications are built and deployed. We now have nearly all applications on the platform and can begin to retire some of the legacy ALM tools.
Even the rogue team using Jenkins really did not practice CI.
The level of agile maturity was all over the place. So we needed to get a baseline of at least what were considered the most mature teams. So we did some agile assessments and provided reports and recommendations.
We created a formal training curriculum and put over 400 students through it. This also helped with the PR campaign at the developer and tech lead level, and gave them a chance to ask questions.
With that, we introduced XP practices, of which CI is a part. It was a foreign concept for most to commit and build frequently – and then to have the dev quality results in Sonar.
Enforcement of standards. So obviously automation provides a high level of enforcement. However, giving teams the ability to manage and create their own Jenkins jobs has proved interesting.
So, quick story. I ran into a Portfolio Manager one day in the cafeteria who asked why all the unit test results in Sonar were grey. “I don’t know,” I said, “I’ll check.” Come to find out, the unit tests were disabled – the team didn’t want them exposed in Sonar as red. So we enabled them, and the team turned them off. A few times. So it’s a balance of keeping the teams moving forward and enforcing the best practices.
------------
Refactoring
Initially a four-letter word
Dependency management
Externalizing environment variables for BODA
Deployable units – release dependencies
Accounting for technical debt
No time for technical debt, just new features
Testing
Paradigm shift for traditional test organizations
Must make scripts CI friendly
If code changes, it must be in SVN and built by Jenkins. No direct uploading to Nexus of locally modified code.
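One common way to externalize environment variables for BODA (a sketch with invented names and values, not the client’s actual layout) is per-environment Ansible variable files; the bundle itself carries no environment-specific values:

```yaml
# group_vars/sit.yml – values the playbook injects at deploy time (illustrative)
db_url: jdbc:oracle:thin:@sit-db.example.gov:1521/APP
mq_host: sit-mq.example.gov

# group_vars/prod.yml would define the same keys with production values,
# so the identical bundle deploys unchanged to every environment.
```

Keeping the keys identical across environments is what allows a single, immutable bundle to be promoted from SIT through PROD.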
There was a lot of good news, but I don’t really want to focus on that. We had:
- Implemented a platform that gave everyone access (internal and external)
- Moved 3 largest projects onto a common CM environment
- Automated builds
- Proven automated deployments were possible
- Given POC around Selenium and other testing tools
Brought a sense of the possible to many parts of the organization. Business as usual for other parts
Major issues at this point:
Operations wouldn’t let us do deployments to post development environments
Testing still not nearly at the level required for full automated releases
Platform
Now that we are clearly out of pilot, the customer is willing to spend a bit on license fees. Tools like CloudBees and Nexus Pro will allow us to…
DevOps
You cannot buy it (services or products)
You are never done
You’ll need a thick skin