19. #4 Visibility
• It’s far easier to measure real progress. If a ticket is large, the only update you get at standup is “yes, I’m still working on it, I’ll be finished in [insert random guess of time remaining]”. With small tickets, it’s more like, “I started it yesterday and I expect to be finished by lunchtime”.
• With small batch sizes you can see work move through the lifecycle with certainty, spot problems early, and make ongoing adjustments to optimize the flow of delivery. Inefficiencies are clearly visible, and you get more chances to improve.
20. #5 Estimation
[Chart: “Delivery Cycle Times vs Story Points (no UAT)” – delivery cycle time in hours (0–300) plotted against story points (0–9), with a linear trend line.]
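The chart itself is image-only, but the kind of linear trend line it shows can be sketched with an ordinary least-squares fit. The sample (story points, hours) pairs below are invented for illustration; the talk’s real data is not in this deck.

```python
# Hypothetical (story_points, cycle_time_hours) samples -- made up, not the
# talk's real measurements.
samples = [(1, 8), (1, 12), (2, 25), (3, 40), (3, 55), (5, 120), (8, 260)]

def linear_fit(pairs):
    """Ordinary least-squares fit y = a*x + b over (x, y) pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

slope, intercept = linear_fit(samples)
print(f"~{slope:.0f} extra hours of cycle time per story point")
```

Even a crude fit like this makes the slide’s point: bigger stories don’t just take proportionally longer, they drag cycle time up steeply.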
21. #6 Risk
• Build the right thing - product risk
• Deployment risk
• Build it right risk
25. Why small batches?
• Accelerate Feedback
• Easier to Debug
• Visualise Progress
• De-risk
• Estimation improves
• Allows more pivoting
• Gives more chances to inspect and adapt
• Efficiency – delivers value faster
Editor’s notes
#3 Efficiency
Learnings – faster release of value
Agility – more easily able to pivot onto higher value items and have delivered some value
Likelihood of interruption is lower because the amount of time we spend completing things is smaller, so we are more likely to be able to finish
#1 Statistics
#2 Fast Feedback – Accelerate feedback: the larger the batch, the longer it takes to find out if it’s the right thing or if you did the thing right. It’s easier to make technical decisions and easier to recover from a mistake.
An example: our UI team prepping mock-ups for their development team. Should they spend a month doing an in-depth set of specifications and then hand them off? I don't think so. Give the dev team your very first sketches and let them get started. Immediately they'll have questions about what you meant, and you'll have to answer them. You may surface assumptions you had about how the project was going to go that are way off. If so, you can immediately evolve the design to take the new facts into account. Every day, give them the updated drawings, always with the proviso that everything is subject to change.
This means the larger the batch the longer you wait to find out if you did it right. It’s easier to make business and technical decisions and easier to recover from a mistake if you are working on shorter time horizons.
The costs in product development are exponential: the later we find a problem, the exponentially more expensive it is to fix.
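To make the exponential-cost point concrete, here is a toy sketch. The per-stage multiplier `k` is an assumed rule-of-thumb value, not a figure from this talk.

```python
# Illustrative only: assumes fixing a defect costs ~k times more at each
# later stage (a commonly cited rule of thumb, not data from this talk).
def fix_cost_multiplier(stage_index: int, k: int = 10) -> int:
    """Relative cost of fixing a defect first found at the given stage."""
    return k ** stage_index

stages = ["requirements", "development", "testing", "production"]
for i, stage in enumerate(stages):
    print(f"{stage}: {fix_cost_multiplier(i)}x the cost of an immediate fix")
```

Whatever the true multiplier is for a given team, the shape of the curve is what matters: small batches shorten the gap between introducing a problem and discovering it, keeping you at the cheap end.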
There are many different types of feedback
Unit tests – feedback on the code
Dev – code review
Test – acceptance tests: does it work?
PO – is it right for the users?
Users – all of the above; you find out about product and market fit
Product development is like walking a landscape of paths: there are loads of paths we can go down, and each path has many forks off it. There is exponential growth in the number of decisions. The sooner we can close off some of those paths, or get feedback that the path we are on is the best one (or at least works), the better. Think about teams, or even individual devs, working within the same codebase.
We have high levels of complexity, and converging too quickly on a solution is the enemy when work is complex. Having small batches allows us to test hypotheses and fail as fast as possible. We do this all the time: just take the auth handover ticket the retail team were working on last week; there were numerous hypotheses and options the team tried.
Feedback motivates: getting good feedback energizes us and pushes us on. It’s also demotivating to spend a large amount of effort only to find we have gone down the wrong path.
#3 Efficiency
It’s more time efficient. The amount of time needed to code review and reliably test large changes is non-linearly longer than for small changes, and the chance of detecting a fault is lower due to the increased cognitive burden. You are much less likely to understand the context of why the change is being made.
The larger the scope of the batch, the more complexity the individual has to deal with.
Size of batch vs number of comments
Lower complexity means easier debugging, which means more easily found problems, which means we get things across the board and done faster.
Mozo bank CTO: smaller changes have a smaller “surface area”.
Small batches mean problems are instantly localized. This is easiest to see in deployment. When something goes wrong with production software, it's almost always because of an unintended side-effect of some piece of code. Think about the last time you were called upon to debug a problem like that. How much of the time you spent debugging was actually dedicated to fixing the problem, compared to the time it took to track down where the bug originated?
For a large batch of changes, especially those made to an even larger system, the handoff to the next step in the process is going to be highly inefficient for the receiving party to deal with (think: the Development-to-Operations “toss it over the wall” handoff of a major release). And if something goes wrong, the time between when the error was introduced and when it is discovered is so long that it is no longer fresh in the mind of the person who introduced it. Small batches have also been proven to actually reduce transaction costs because of a curious fact of human nature… people get better at, and find ways to increasingly improve, the things they are forced to do more often.
Rocks in a jar – large rocks with small rocks and sand filling the gaps around them
The internet is a packet-switching network: we have many small packets. If we had large packets, we would have large queues, much like the queues on our Kanban boards. One of the advantages of these boards is that we can visualise the queues in our systems (the team’s process). Because we have smaller queues, we get better utilisation and a reduction in cycle time: the time to get things across the board, done, and ready to release. My request is that we start to track and use our cycle time more.
As teams we work within a queuing system; our team process has multiple queues. This is why on Kanban boards you see “in” and “done” columns: visualising those queues is something taken from lean manufacturing. We have “in” and “done” columns to show where work is waiting. The bigger the batch, generally the bigger the queue.
Example – going to Phil Potts or Juici Sushi: if we all go at once there is a large queue and slow delivery times.
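The queueing intuition in these notes can be made concrete with Little’s Law (average WIP = throughput × average cycle time), a standard queueing result. The team numbers below are invented for illustration.

```python
# Little's Law (steady state): WIP = throughput * cycle_time,
# so cycle_time = WIP / throughput. Numbers are illustrative only.
def avg_cycle_time(wip: float, throughput_per_day: float) -> float:
    """Average cycle time in days, given steady-state WIP and throughput."""
    return wip / throughput_per_day

# A team finishing 2 tickets/day with 16 tickets in flight:
print(avg_cycle_time(16, 2))  # 8.0 days
# Shrink batches so only 6 are in flight at once, same throughput:
print(avg_cycle_time(6, 2))   # 3.0 days
```

Nothing about the team got faster in the second case; cycle time dropped purely because less work was queued up in flight, which is exactly the small-batch argument.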
Transaction cost is the cost of moving things from work station to work station – the hand-offs between people and process.
As an industry we seek to lower the transaction cost as much as possible. This is why continuous delivery and deployment are so good: they force us to drive our transaction cost as low as possible and enable us to put things into production with as little effort as possible, meaning lower cycle times, quicker feedback, and quicker release of value.
We need to relentlessly look to reduce our transaction cost: the amount of process and time it takes to get things into production, or as far down the delivery pipeline as possible.
Holding cost is the cost of holding inventory
More instrumentation points by which to inspect and adapt
Large batches mean we have little or no warning that items may not be done in time or have problems
Little opportunity to optimize, triage, help
Improves management visibility and control – Reducing batch sizes gives you a greater number of instrumentation points by which you can visualize and measure the flow of work through your organization. It’s notoriously difficult to accurately determine progress of in-flight work. You are largely going to be limited to the subjective analysis of project managers and the biased opinion of the person doing the work. The only points where you can have certainty is either when the work has just started or when the work has just completed (and accepted by the next step in the process). With large batch sizes you have to wait long periods of time between those start and completion points, making it difficult to see how things are flowing, providing little guarantee that you will have adequate warning if things are going wrong, and allowing for few opportunities to make adjustments to optimize or triage. With small batch sizes you can see work move through the lifecycle with certainty, spot problems early, and make ongoing adjustments to optimize the flow of delivery.
Waiting long periods of time to see if things are going wrong with little guarantee that you have time to adjust
Estimation is easier and less risky
Large batch sizes also often lead to compounding schedule delays and cost overruns. The larger the batch, the more likely it is that a mistake was made in estimating or during the work itself. The chance and potential impact of these mistakes compound as the batch size grows… increasing the delay in being able to get that all-important feedback from the users and increasing your product risk.
Agile is the best risk mitigation framework we have for product development
Deployment Risk
Rolling back a small change is much easier than rolling back several months’ worth of stuff. On the technical front, the number of components affected is much smaller; on the business front, it’s usually a much easier conversation to persuade the team to roll back one small feature than twenty big features the marketing team is relying on as part of a launch.
Product Risk
This builds on the idea of faster feedback. The sooner you can put an individual feature in front of your target audience, the sooner you will know if you’ve achieved the right product and market fit. The larger the batch size, the greater the product risk when you finally release that batch. Statistics shows us that it’s beneficial to decompose a large risk into a series of smaller risks.
Continuous delivery – multiple releases per day
For example, bet all of your money on a single coin flip and you have a 50% chance of losing all of your money. Break that bet into 4 smaller bets and it would take 4 sequential bets to result in financial ruin (1 in 16 or 6.25% chance of losing all of your money).
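The coin-flip arithmetic above can be checked directly; this sketch just verifies the 1-in-16 (6.25%) figure using exact fractions.

```python
from fractions import Fraction

# One all-in bet: lose everything with probability 1/2.
single_bet_ruin = Fraction(1, 2)

# Split the stake into 4 equal sequential bets: total ruin now
# requires losing all 4 in a row.
split_ruin = Fraction(1, 2) ** 4

print(single_bet_ruin)                 # 1/2
print(split_ruin, float(split_ruin))   # 1/16 0.0625
```

The splitting also caps the downside of any single loss at a quarter of the stake, which mirrors how a small failed batch costs far less than a failed big-bang release.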
Large projects allow tolerance for large monoliths
Small batches naturally limit WIP on a particular segment of code or infrastructure
Naturally what we will end up doing is looking for ways to increasingly isolate and decouple
Encourages decoupled architectures with less dependency issues – Smaller batch sizes can also have a positive impact on architecture. Most IT systems are built from within the context of large projects. Large projects create them and then large projects are undertaken to change them. The result is a built-in tolerance for monolithic architectures with complex dependencies. As you move to small batch sizes you are naturally limiting the work in progress on a particular segment of your code/infrastructure. While initially this might seem like it will slow the organization down, the principles of flow show that this will actually give you greater throughput over time. But in order to speed things up even further, you will end up looking for ways to increasingly decouple and isolate (including making fault tolerant) your architecture to allow for greater parallelization of work.