The document discusses how to unlock faster product development cycles without sacrificing quality. It outlines three keys: optimizing the development pipeline by identifying and reducing lag, intelligently managing change by understanding impact and controlling changes, and boosting visibility of project data through a single source of truth and improved access and accuracy. The presenter advocates for establishing repeatable processes, keeping people informed of activities, understanding change, and controlling change. Adopting an ALM platform like Helix can help by modeling workflows, notifying teams, automatically linking artifacts, and requiring association of code changes to other items to optimize engineering processes.
Visit us on Facebook and LinkedIn, and follow @perforce on Twitter.
Catch up on our latest blog posts.
Editor's notes
Thank you, Mellissa.
Are you happy with the status quo? Probably not or you wouldn’t be watching this webinar. Today I’m going to talk about tools you can use to improve the efficiency of product development without negatively impacting quality. In fact, you should see an improvement in quality.
There are three key areas I’ve found where companies can improve to be more efficient.
Optimizing the development pipeline
Intelligently managing change
Boosting visibility of key project data
Now, these may sound simple or obvious, but trust me, they are often overlooked, and they have significant time and cost implications.
I like to start every discussion regarding application development efficiency with an overview of the problem. It helps because my audience typically represents a range of roles, departments, and levels in an organization. We all have our own work to do, and the further one gets from our specific set of responsibilities, the blurrier (and maybe a tad less important) the other roles and the information they rely on become.
So let’s frame the problem first. On any large project we have people, information, and process. I’m going to assume you’ve vetted your people and they are awesome. With awesome people, the delivery time and quality problems lie in creating and managing so much information efficiently. And that is done with a process.
Many people are surprised by the amount of information that is generated and managed during product development. Some of this has a short half-life, while other information lives on past the useful life of the product—sometimes for legal reasons.
Here we see 36 different data sets. Some are large documents, others are created by external stakeholders, and there may be thousands of data points. In short, there is a lot of information created and managed during R&D.
This data has attributes beyond the information it contains. Two important attributes are relationships and stakeholders. How is this piece of information related to others? For example, requirements result in designs, test cases, etc. So those are some important relationships that need to be managed. Requirements are created by business analysts, product managers, and others. And they are used by those people, plus engineers, testers, and still others. So we can see that just one artifact, a requirement, has a lot of key relationships and interested stakeholders.
Now consider the number of artifacts and instances generated during R&D, plus the relationships involved. Pretty overwhelming, but not impossible to manage with good people and process.
I mentioned people, and they are the awesome kind, and we looked at information, and there is a lot of it. That leaves us with process. If you are going to be efficient, process is most important.
#1 (and this is most important) - do you have a process? If not, get one and document it.
#2 - Is it being used, and is it repeatable?
#3 - Do you have a way of enforcing, measuring, encouraging, and verifying it?
#4 - Are you evolving it based on what you measure?
I’m not here to sell the Capability Maturity Model Integration (CMMI), but if you are not operating at level 2 or 3 at a minimum, then you are going to see big improvements in predictability, cost control, quality, and efficiency just by getting your process defined, documented, and used.
It should be clear that product development is more than jotting down some requirements, writing code, and shipping product. But I’ve seen a lot of that, or what is effectively that kind of situation. I’m much less likely to walk into a situation where the company is following repeatable processes, communicating efficiently, and otherwise optimized.
That is good for you though. Optimizing and being more efficient is a competitive advantage.
So now that we have an appreciation for the scale of information we are managing, let’s talk about the first key to efficiency: optimizing the development pipeline.
When I talk about optimizing the development pipeline, there are two key attributes I focus on:
Lag – efficiency
Concurrency – collaboration
What is lag? It’s the gap of time between one task completing and the next dependent task beginning. A key word being “dependent”. If tasks aren’t dependent, then they have the potential to be worked on concurrently. One way of visualizing lag is to think about your driving experience. Consider being 8th in line at a red light. The light turns green and it seems to take forever for the car in front of you to move. If I did a show of hands here, almost 100% of you would say you have experienced this. That lag is the time it takes for the next car in line to recognize the motion of the preceding car, stop texting (hopefully that’s not the case), and begin moving at a safe speed and distance. Consider how much faster it would be if all cars started moving at one time. The chance of an accident is no greater, but the lag is significantly reduced down the line. There’s an interesting study of this by a Chinese team published in IEEE (“Modeling the lag of heading vehicle's startup at intersections with mixed traffic”, Liang, Mao, Chen).
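As a rough illustration of how you might put a number on lag in your own pipeline, you can sum the idle gaps between a task finishing and its dependent task starting. Everything here (the task names and timestamps) is hypothetical, not data from any tool mentioned in this webinar:

```python
from datetime import datetime

# Hypothetical handoffs: (handoff, predecessor finished, dependent task started).
# Lag is the idle gap between the finish and the dependent start.
handoffs = [
    ("spec review -> design",   "2024-03-01 10:00", "2024-03-01 15:30"),
    ("feature done -> testing", "2024-03-02 09:00", "2024-03-04 09:00"),
]

def lag_hours(finished, started):
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(started, fmt) - datetime.strptime(finished, fmt)
    return delta.total_seconds() / 3600

total = sum(lag_hours(f, s) for _, f, s in handoffs)
for name, f, s in handoffs:
    print(f"{name}: {lag_hours(f, s):.1f} h of lag")
print(f"total lag: {total:.1f} h")
```

Even a crude tally like this makes the "feature done -> testing" gap visible, which is exactly the kind of handoff the notifications discussed below target.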
Now that’s a simple concept for sure, but also a tangible one. Consider the DevOps pipeline. There are numerous opportunities for lag to creep in. Most companies today are focused on automation to keep PARTS of the pipeline flowing:
Build – build automation and continuous integration
Testing – test automation
Deployment – roll out automation (controlled)
But automation is typically focused on the late stage parts of product development – build-test-deploy. I find a lot less automation used in the people-heavy part of the pipeline, the R&D part, which is the front end. So let’s focus on the front part of the pipeline since our goal is to get to deployment faster with better quality.
Consider R&D at your company. How often are you running into the following situations?
Features are complete, but tests are not running against them because the test team is unaware?
Issues are found, but no one is working on them because the dev team is unaware?
Requirements or designs are in review, but someone is holding up the show?
These are all collaboration problems. You can automate builds, tests, deployments, and other software-driven activities, but you also need to automate your team.
People don’t set out to fail or fall behind. Most often the day gets away or the amount of data is so large that what’s next is lost in the crush. That’s where notifications and escalations bring order to the chaos.
Remember the example of the cars at the stop light not moving? The first step to removing that lag is to tell all cars at once that the light has changed. This is notifications.
What is your process to retest an issue that has been fixed? In a lag-less environment the software would be tested automatically during the nightly build and QA would be notified to verify all fixes. If you don’t have a mechanism in place to notify someone that work has shifted to their task list, then you have a great opportunity to reduce lag.
Sometimes the person waiting for the work to be completed has also lost track of what he or she is waiting on. To close the gap, notify that person after some elapsed amount of time so they can follow up with the task owner. We’ve used that here to great effect. The key to keeping the team happy is using that notification only when necessary.
Notifications are great and a big step forward, but what if the recipient is busy on other work, on vacation, or just ignoring them? That’s where notifications’ big brother, escalations, comes in handy. Consider how many of you issue requirement or design documents for review and then wait for all comments or approvals to arrive. There are always one or two people who don’t respond for various reasons, and sometimes their response is necessary. I’ve been guilty of this. Maybe you have too.
Automated escalation is a great tool for keeping a project on track by reminding people of their overdue tasks. You can also remind the person waiting on the feedback. The key takeaways are:
Don’t wait to fall behind
Determine why the lag was introduced and put a mechanism in place to keep it from happening again.
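The notify-then-escalate pattern above can be sketched in a few lines. Everything here (the task fields, the two-day grace period, the `send` callback) is a hypothetical illustration of the idea, not any product's API:

```python
from datetime import datetime, timedelta

# Hypothetical grace period: remind the owner first; once a task is overdue
# by more than this, escalate to the person waiting on it.
GRACE = timedelta(days=2)

def escalation_pass(tasks, now, send):
    for t in tasks:
        overdue = now - t["due"]
        if overdue <= timedelta(0):
            continue  # not overdue yet, nothing to do
        if overdue <= GRACE:
            send(t["owner"], f"Reminder: '{t['name']}' is overdue")
        else:
            send(t["waiting_on"], f"Escalation: '{t['name']}' is stuck with {t['owner']}")

messages = []
now = datetime(2024, 3, 10)
tasks = [
    {"name": "review spec", "owner": "dana", "waiting_on": "lee",
     "due": datetime(2024, 3, 9)},   # 1 day late -> gentle reminder
    {"name": "verify fix", "owner": "sam", "waiting_on": "pat",
     "due": datetime(2024, 3, 5)},   # 5 days late -> escalate to the waiter
]
escalation_pass(tasks, now, lambda who, msg: messages.append((who, msg)))
```

The design choice worth copying is the tiered response: routine reminders stay private to the owner, and only persistent lag gets surfaced more widely, which keeps the team happy with the mechanism.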
Let’s look at the second key to development efficiency – managing change. What I’ve learned in my over 30 years of creating software, managing teams, and running and selling a company is that change is constant and desirable.
Change takes many forms in product development: requirements, design docs, code, tests, people, even the process can all change. Change is necessary to make something better—from the status quo to better than the status quo.
When we look at some of the changes that impact efficiency on projects, a few big hitters come to mind. How many of these have you experienced?
You changed a requirement, but that generated a bunch of unanticipated downstream work?
Source code changed, but you don’t have a good handle on what needs to be tested?
A requirement changed, but the test cases didn’t change with it?
Some features were added to the product that weren’t in the requirements?
As a manager, I really love the last one. It added cost, complexity, and time, without approval.
Most companies doing application development use a version control solution. 100% of Perforce customers do. So there is a basic understanding or appreciation that managing change to source code is a good thing. Or conversely, if you don’t manage source code changes, bad things might happen. The need for change management applies to the other development artifacts—requirements, test assets, issues, and so on.
Make sure your tools support the following:
Understanding the impact of change—what is the cost of this change upstream (if I change code) or downstream (if I change this requirement)?
Change control—changes can only be made if they are tied to a requirement, issue, feature, etc. Something agreed upon. And measure this. Don’t let changes creep into the product. They bring bugs and delays.
Change notifications—make sure all appropriate stakeholders are aware of changes to any artifact, not just code, and not well after the fact. Change notifications are related to reducing lag and improving visibility, and the lack of them creates a lot of friction across the product development team.
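Impact analysis over traceability links is, at its core, a downstream reachability walk. Here is a minimal sketch, assuming a hypothetical artifact graph with made-up IDs (REQ for requirements, DES for designs, SRC for source files, TC for test cases):

```python
# Hypothetical traceability links: artifact -> artifacts directly derived from it.
links = {
    "REQ-1": ["DES-1", "TC-1", "TC-2"],
    "DES-1": ["SRC-1"],
    "SRC-1": ["TC-3"],
}

def impacted(artifact):
    """Return every artifact downstream of the one being changed."""
    seen, stack = set(), [artifact]
    while stack:
        for child in links.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return sorted(seen)

print(impacted("REQ-1"))  # every design, source file, and test touched by REQ-1
```

The same walk run in the other direction (tests back to requirements) answers the upstream question of what a code change costs; either way, the value comes from the links being kept current as the team works.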
You know you need to reduce lag and take charge of change. The third key to improving efficiency is boosting visibility of key project data.
There are so many symptoms of poor data visibility that we could spend a longer webinar on them alone. A short list would include:
Project prioritization – are you working on the right things in the right order?
How close are you to delivering?
What have you tested and more importantly, what have you not tested?
What’s the most current version of the requirements? Is that what the team is building? Is that what the team is testing?
What’s the impact of this code change on the schedule? Testing? Other code?
I think you get the idea.
Consider how popular Agile methodologies have become. Smaller teams, more integration between stakeholders, including a blurring of roles, and shorter meetings—all yield better information sharing across the team. A big benefit is better visibility of data—one brain, people stay in sync.
Two key factors that affect efficiency of product development are access and accuracy of data. Remember a few slides back when we looked at the number of artifacts we manage and the key stakeholders? That’s a lot of information. On large projects, it can overwhelm you and the team.
If information is power and data is information, then not having access to data makes a team member powerless.
Do your testers have access to the requirements they are writing test cases for?
Do the engineers working on designs have access to the requirements?
Do the test engineers or maintenance engineers have access to the designs?
Enabling ready access to key project data unlocks the ability of team members to be self-sufficient, work in parallel more efficiently, and feel better about their role on the team—which has its own positive effects.
Where is the source of truth for requirements, designs, code, tests, results…the project status? When we look at how accurate project artifact information is, we need to assess two attributes: the data itself and the relationships between the data. Consider requirements. If you are managing your requirements in Microsoft Office documents, then you can end up with different versions of the document under review at the same time. So that’s one problem. The second attribute is relationships. How up-to-date are the relationships between artifacts? How strongly are they linked? If I have a requirement and I produce a test case for it, how do I know I’ve done that? How do I know that if I change a requirement, I change the appropriate test cases?
If you have multiple systems in play to track artifacts—one for requirements, one for testing, another for designs, etc.—you have multiple silos, and how or whether you link these silos together has a major impact on the accuracy of project data.
Good visibility and accuracy of project data can be the difference between building the right thing, testing the right thing, and delivering efficiently.
Now the three keys I just talked about:
Optimizing the development pipeline
Intelligently managing change
Boosting visibility of key project data
Boil down to having the following in place: good repeatable processes to manage accurate information, and making that information readily available to your team. The question then becomes, “How do I give my team intelligent notifications, escalations, the ability to measure the impact of changes, and visibility across the lifecycle and up and down the artifact tree?”
Fortunately, this is the problem Modern ALM, and specifically Helix ALM, exists to solve.
Let’s take a look.
First, it’s important to know that Modern ALM is about NOT having a lot of silos. The more application silos your team is using, the less likely the artifacts are to be linked together intelligently AND the more likely the status of items will not be accurate. So Helix ALM is the single source of truth for product development at speed and scale. Key development artifacts from requirements to tests and downstream issues are all managed in one place.
Second, I’ve talked about how critical repeatable processes are. Even if you start with the wrong process, the fact that it is repeatable and measurable gets you one step closer to making it the right process. Within Helix ALM, you can model your processes and control who can move artifacts through the states and when.
So now your team is following your process using Helix ALM. Now you can also notify them of important activities. Remember, removing lag is about notifying people when something needs their attention and escalating the notifications when something is not getting the attention it needs. These are core capabilities in Helix ALM that are easy to set up and manage.
We talked about a few of the various kinds of changes that can occur on a project. For example, what is the impact of changing a requirement or code on related artifacts, such as test cases? Helix ALM automatically links artifacts for the team as they work. We call this transparent traceability. So before you change a requirement, you can perform impact analysis to see what other requirements and test cases may also need to change. Perhaps this requirement change is too big for the time we’ve allotted in this release cycle. Take a moment to consider how you would do that today with your current tools.
Another time waster is changes coming into the product that aren’t tied to a task or requirement. Helix ALM, paired with Helix VCS or Surround SCM, can require engineers to associate code changes with the requirements, defects, tasks, or other artifacts that necessitated the change. No task, no change, and fewer surprises.
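Helix ALM and the version control integration handle this enforcement in their own configuration; purely as an illustration of the "no task, no change" idea, a hypothetical gate could scan a change description for an artifact ID before accepting the change. The ID scheme (REQ/BUG/TASK plus a number) is made up for this sketch:

```python
import re

# Hypothetical artifact ID scheme: REQ-123, BUG-7, TASK-42, etc.
ARTIFACT_ID = re.compile(r"\b(?:REQ|BUG|TASK)-\d+\b")

def allow_change(description):
    """Accept a change only if its description references an artifact ID."""
    return bool(ARTIFACT_ID.search(description))

print(allow_change("Fix null check, closes BUG-42"))  # accepted: tied to BUG-42
print(allow_change("misc cleanup"))                   # rejected: no task, no change
```

A real server-side trigger would also verify that the referenced artifact exists and is in an open state, but even this simple pattern check is enough to stop untracked work from slipping into the product.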
Over the years, we’ve had thousands of customers realize these development efficiency gains with Helix ALM, but one example hits closer to home than the others: us. We experienced firsthand life before and after Helix ALM, and it occurred while we were developing the requirements management capabilities (called Helix RM today). The first part of this may sound familiar. Pre-Helix RM, we managed our requirements documents in Microsoft Word—sadly, like much of the world today. It really slowed down the review process and getting buy-in. It also resulted in multiple versions of the requirements documents floating around, and someone had to merge all the comments and changes together and redistribute them. And we had no idea what tests were written for the requirements. In fact, we didn’t start on them until the document was pretty close to approved. Those were kind of our dark ages of requirements capture and management.
Once we completed and started using Helix RM, we had a central location to create and manage requirements documents. Since these documents are comprised of individual requirements that can be worked on by multiple people simultaneously, we now have concurrency in the requirements capture, review, and approval process. QA can start writing test cases much earlier as individual sets of requirements are approved. And bonus. If a requirement changes, QA is notified so they can make test case adjustments before they have the chance to run an incorrect test.
In summary, Helix ALM helps you bring products to market more efficiently by removing lag, enabling better change management, and improving visibility.
Specifically, it enables
Workflow automation for managing the dev process from requirements through testing and deployment
Quality improvements by focusing development on fulfilling requirements
Efficiency and quality improvements by knowingly focusing testing on new code
Dev team efficiency through 360-degree visibility of requirements, history and test cases
Early visibility into risks through end-to-end workflow tracking
Instant audit evidence