Simulation results for different pull policies over a 1,300-workday period, showing increasing financial value over time. — Stefan Willuda

The magic of ‘just’ a policy change

Predictably Speed up Your Product Delivery

Changing the timing for starting new work significantly increases speed, predictability, and (financial) throughput. How? Find out!

Stefan Willuda
26 min read · Mar 17, 2021

In product development, there is always more on your plate than you can chew. And although this is sadly true, it doesn’t mean that you can’t do something about your chewing speed. Interestingly, many teams struggle mightily to increase their delivery speed and reliability. I write “interestingly” since the associated body of knowledge is vast and has been well-proven for decades. In this blog post, I will make the implications of this body of knowledge super-practical. Right after reading this post, you will be able to alter policies under your control that have been unknowingly holding your team back. I promise that no one on your team needs to work harder to speed up product delivery.

If you don’t want to dwell on the reasons why so many attempts at speeding up fail in practice and want to jump right to action, scroll down to “Make it actionable — policies and boundaries”. However, I recommend thinking it through a bit ;).

In product development, there are always more things on your plate than you can chew down.

The tricky thing about bodies of knowledge is that they only bring you so far in a complex social system that we call an organization. Even the best knowledge is useless if teams don’t transform it into proficiency. Thus I can’t take away from you the necessity of transferring this knowledge into your working context and practicing it. However, I assure you that the results will be astonishing if you dare to act.

Building on simulation data, I’m going to unfold the vast differences between three teams in terms of reliability, speed, and (financial) throughput. We look at the differences in working policies that lead to those differences in performance.

Let’s aim for reliable speed

Assume we have three separate teams doing comparable work. We call them Team A, Team B, and Team C. Looking at the following histograms, which team seems to be reliably fastest and which slowest?

Simulation results of the Flow Time distribution of three different teams throughout 300 workdays.

Obvious, right? Now contemplate for a minute: what makes Team C so fast and reliable, and what might cause Team A to be so slow and unreliable when it comes to delivery speed per work item?

🤔 🧠 💭

Usually, when I ask Product Owners or Agile Coaches the same question while looking at the same graphs, they come up with explanations like “Team C is way more experienced”, “Obviously, Team A is way smaller”, or “The work items that Teams A and B work on must be way bigger!”. Maybe those thoughts crossed your mind as well. Keep those hypotheses in your head; we’ll check them right away based on a simulation. But before we do, let’s briefly turn our attention to the question of why speed matters more than anything else.

Speed, flow, and cadence matter

Teams always need to act fast, deliver fast, and learn fast. That’s why teams need to constantly increase their delivery speed. (I’m aware that this is only beneficial if the team’s increased performance directly adds to the organization’s bottom line. But in this post, I assume that the team can independently deliver value to the customer, which adds value to the bottom line.) Being fast leads to fast feedback, which enables fast learning. This is powerful. On top of that, only instantaneous feedback allows you and your team to feel the flow, a sense of accomplishment, and the feeling that what you do is valuable.

An animated image showing blue dots that go round in circles having the same speed but different circle diameters indicating that while speed is the same delivery cadence is not — Stefan Willuda
Borrowed from John Cutler

Stating the obvious, a fast team getting things done (speed and delivery cadence) gets more things done. Comparing Team A, Team B, and Team C, we recognize that Team C gets almost four times as many work items done as Team A.

Simulation results of the number of work items done for three different teams throughout 300 workdays.

Four times the quantity of work items may sound impressive. However, considering that every work item potentially adds value to the company’s bottom line, getting more things done adds more value. In the case of those three teams, the numbers look like this.

Simulation results of the created value for three different teams throughout 300 workdays.

Wait, although Team C delivers ‘only’ four times the number of work items, it generated more than 12 times the value? How is that possible? Team C clearly must work on the more valuable work items, right? Yes and no. A work item’s value may sound like a static number, but it’s not, since an item’s value changes over time. Usually, the value of a product feature, idea, or change decreases over time. The opposite is true for bug fixes, refactorings, or fixed-date regulatory product change requests: the longer those issues are not resolved, the more value may get destroyed. The concept of delay costs tries to capture those positive or negative financial consequences of slow product delivery. The tricky thing about delay costs is that they can rarely be known until the work item is done (or the feature is in the market, respectively). But although they can’t be known upfront, they exist, and Team C clearly has an advantage here. Due to Team C’s high pace, it can react to market changes fastest, and as you can see, that pays off massively.
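
If you like to think in code, here is a deliberately simplified sketch of the idea of delay costs. The numbers and the linear decay are my own illustrative assumptions — real delay cost curves are rarely linear and, as said, rarely knowable upfront:

```python
# Hypothetical illustration of delay costs: a work item's value eroding
# linearly while delivery takes its time. All numbers are invented.

def realized_value(initial_value: float, decay_per_day: float,
                   flow_time_days: int) -> float:
    """Value actually captured when delivery takes `flow_time_days`."""
    return max(0.0, initial_value - decay_per_day * flow_time_days)

# The same work item, delivered by a fast team vs. a slow one:
fast = realized_value(10_000, 150, 10)   # done in 10 days -> 8500.0
slow = realized_value(10_000, 150, 60)   # done in 60 days -> 1000.0
print(fast, slow)
```

Two items of equal nominal value end up worth very different amounts, which hints at why a difference in pace can translate into a much larger difference in created value.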

What’s Team C’s secret? Big plot twist!

Coming back to your hypotheses from a few minutes ago, let’s examine the differences between Team A, Team B, and Team C. It might surprise you to hear that all three teams are actually the same team — but at different times. What a plot twist!

Since all three teams do the same kind of work within the same team setup, with the same capacity and capabilities, something other than what we suspected above must be doing the trick.

Compare those three time-lapse animations of all three teams and see if you can spot the not-so-subtle difference.

From left to right: Team A, Team B, and Team C. Do you see the difference?

Believe it or not, the only difference between those team snapshots over time — we call them Snapshot A (formerly known as Team A), Snapshot B (formerly Team B), and Snapshot C (Team C) — is the policy on how to start new work.

Team A (Snapshot A)

What do you think is the policy of Team A looking at their Kanban Board and their Scatterplot?

Simulation results of Team A. Left: Kanban Board with work in process (red stickies are waiting). Right: Scatterplot for 300 workdays. Time-lapse Video here.

You might have guessed it: “Start incoming work as soon as it appears.” The decision to start new work is made by the people handling the first process step — and by everybody handling the subsequent process steps for ‘their’ respective step. Implicitly, starting new work only stops when the first process step is completely overloaded. Although we now know that this way of working takes a massive toll on the team’s performance, this policy is very much the standard mode of operation in knowledge work environments.

Starting work too soon is very much the standard mode of operation in knowledge work environments.

Team B (Snapshot B)

Looking at Team B, what do you think is their policy on starting new work?

Simulation results of Team B. Left: Kanban Board with work in process (red stickies are waiting). Right: Scatterplot for 300 workdays. Time-lapse Video here.

You might have come up with something like: “Only start new work if your process step does not violate the Work in Progress Limit (WIP Limit) of that particular process step.” This decision is also made by the people handling the first process step, and again for every subsequent process step. This sounds straightforward if you are familiar with (column-based) WIP Limits. This pull mode of operation is relatively standard for product development teams with some Kanban experience. The simulation results indicate that this mode of operation has a clear advantage in speed, reliability, and throughput over mode A above. However, there is much more to gain by changing the policy on starting new work another bit.

Team C (Snapshot C)

Team C follows a different policy. Maybe you can guess it by looking at the Kanban Board and the Scatterplot.

Simulation results of Team C. Left: Kanban Board with work in process (red stickies are waiting, the star indicates the constraint process step). Right: Scatterplot for 300 workdays. Time-lapse Video here.

The policy on starting new work is: “Only start new work if the buffer in front of the constraint process step has capacity available.” This decision is made by the people handling the first process step; for every subsequent process step, work may be started immediately as long as no more than two work items are in process simultaneously. Technically speaking, this pull policy is called ‘Drum-Buffer-Rope’. This mode of operation is common in production environments but less established in knowledge work environments.
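
For the technically inclined, the ‘rope’ part of Drum-Buffer-Rope boils down to a one-line decision rule. This is a hypothetical sketch — the function name, the list-based buffer, and the capacity of three are my own illustrative choices, not taken from the simulation:

```python
# A minimal sketch of the 'Drum-Buffer-Rope' release decision described
# above. The function name, list-based buffer, and capacity of three are
# illustrative assumptions.

def may_release_new_work(constraint_buffer: list, buffer_capacity: int) -> bool:
    """The 'rope': only release work from the backlog when the buffer in
    front of the constraint (the 'drum') has a free slot."""
    return len(constraint_buffer) < buffer_capacity

buffer = ["item-1", "item-2"]
print(may_release_new_work(buffer, buffer_capacity=3))  # True: one free slot
buffer.append("item-3")
print(may_release_new_work(buffer, buffer_capacity=3))  # False: buffer is full
```

Note that the decision looks only at the buffer in front of the constraint — not at whether any individual person or process step happens to be idle.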

Let’s compare the Scatterplots of the three modes of operation.

Simulation results of Team A, B, and C. Scatterplot for 300 workdays indicating the 50th, the 85th, and the 95th Flow Time percentile.

Astonishingly, it’s only this simple policy change that significantly increases Team C’s speed, reliability, and value creation.

You can see this being true by looking at this short video (3 min). It shows a time-lapse simulation of a team’s Kanban Board over 1,300 workdays. ‘Team A’ changes its policy on day 250 to column-based WIP Limits, becoming ‘Team B’. On workday 750, the team changes its pull policy again to ‘Drum-Buffer-Rope’ and becomes ‘Team C’.

This is how the team’s Scatterplot looks throughout the 1,300 workdays. You can clearly see how delivery speed and throughput increase after each policy change.

Simulation results of ‘Team A’ becoming ‘Team B’, becoming ‘Team C’ over a 1,300-workday period. Flow Times first increase steeply and then decrease after each policy change. The lower a green dot’s position, the faster the work has been done for that single work item. — Stefan Willuda

Make your policies explicit

Maybe this little team comparison intrigued you; perhaps you are in dire need of becoming faster, more reliable, and delivering more value. Whatever it may be, if you’ve come to this passage, you might want to know what you can do to speed up your team or even your whole organization.

As I’ve mentioned above, it all boils down to changing the policy on when to start new work. In practice, this doesn’t mean introducing a completely new policy but exchanging existing policies in your team (or organization). Unfortunately, you are likely unaware of the policies that guide your daily decisions on releasing new work into your delivery system. This is quite common, so don’t you worry. Making those policies explicit is the right starting point. However, this can be more challenging than it seems. Think about it for a minute. It’s also helpful to think it through together with a colleague or your whole team.

Quite often, those policies — whether they are explicitly stated or not — sound something like this:

  • “Everyone in the team has to work on something from the backlog. So as soon as anyone has nothing to do, pull something from the backlog.”
  • “Since we all have different expertise, we work functionally separated. Every expertise is super valuable and thus should be utilized to full capacity. E.g., as soon as our requirements engineer (developer, tester, …) has the capacity, she shall start working on something new.”
  • “Waiting time is wasted time. As soon as someone on the team is waiting due to some dependencies, she shall start new work from the backlog.”

You get the idea. What policies have you identified for your team?

All those policies above — and maybe yours as well — are, to some extent, rooted in the idea that it is crucial to utilize the capacities of the team members or of every single process step. Why? Because of the underlying assumption that the people doing the work in each process step create ‘costs’ if they are idle. Increasing the Operating Expenses — a better term than costs — is something we try to avoid. And don’t get me wrong: it’s not only wise to keep an eye on your Operating Expenses, it’s crucial for the company’s survival. As soon as the Operating Expenses exceed the company’s financial throughput (sales revenue minus total variable expenses), it fights an uphill battle that puts innovation at risk, and ‘tight management’ might kick in, initiating a downward spiral that could kill the company. So to run a profitable company, you have to keep Operating Expenses low. Under the assumption that underutilized ‘resources’ or process steps increase the Operating Expenses, it seems wise to utilize every process step.

The first part of a Conflict Resolution Diagram. It says: “If we want to run a profitable company, we have to keep the Operating Expenses low. In order to keep the Operating Expenses low, we have to utilize every process step.” — Stefan Willuda

Moreover, engraved in the policies above is the idea that you can and should manage people’s time linearly. If a workday consists of eight working hours, a common assumption is that it can be split into eight equally valuable working hours, or 16 equally valuable chunks of 30 minutes of working time. This time-chunking makes it mathematically feasible to work on several tasks in parallel.

The combination of these often implicit assumptions concerning ‘costs’ and time leads to the premature starting of new work, as we have seen in Snapshot A. In other words: the people handling the first process step tend to look only at themselves, their particular process step, and their contribution to the whole. Since they only have partial knowledge about the actual capacity of the whole product delivery system, they have no choice but to decide, based on the information at hand, whether to start new work. With the efficiency paradigm in mind, this leads to the ultimate policy: “If I have capacity, I start new work.”

Under these circumstances, this is a super reasonable policy; no blame here. No one would accept the opposite policy: “If I have capacity, I don’t start new work.” And it is by far more reasonable than a random decision: “If I have capacity, I flip a coin and only start new work if it shows heads.” But even if those policies are the best at hand when it comes to personal (local) optimization under the efficiency paradigm, they lead to a team — or even a whole organization — becoming like Team A. Which, as we have witnessed, is the worst of all the scenarios presented.

Local optimization under the efficiency paradigm kills your company.

To run a profitable company, you not only have to keep your Operating Expenses low, but you also have to increase financial throughput. Since there are always fluctuations in the rate at which a team or a process step can process incoming work (pardon my rather technical language), throughput may only be increased if the work is not blocked in the delivery system.

However, work will inevitably get stuck in the delivery process if we utilize every step to its full capacity. Whenever there is a fluctuation in the processing rate — let’s assume work gets processed slower than expected — a backlog of waiting work immediately emerges in front of that process step. A local hiccup directly impacts the global delivery process. Consequently, to increase financial throughput, we cannot fully utilize all the process steps in the product delivery process. But isn’t that the exact opposite of what we discovered just a minute ago?
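
If you want to convince yourself of this effect, a toy simulation of a single fully utilized process step is enough. The numbers below are invented for illustration: arrivals exactly match the step’s average capacity, yet the waiting line in front of it keeps growing, because idle capacity on a fast day is lost while work on a slow day piles up:

```python
import random

# Toy model of a single, fully utilized process step. Work arrives at
# exactly the step's *average* capacity; only the daily completion rate
# fluctuates. All numbers are invented for illustration.

def final_queue(days: int, seed: int) -> int:
    rng = random.Random(seed)
    queue = 0
    for _ in range(days):
        queue += 2                           # 2 items arrive per day
        done = rng.choice([0, 1, 2, 3, 4])   # capacity fluctuates, average 2
        queue = max(0, queue - done)         # unused capacity is simply lost
    return queue

# Averaged over many runs, the waiting line keeps growing with the horizon,
# even though arrival and average completion rates are identical.
short_avg = sum(final_queue(100, s) for s in range(200)) / 200
long_avg = sum(final_queue(2_000, s) for s in range(200)) / 200
print(short_avg, long_avg)
```

The asymmetry is the whole point: a slow day leaves waiting work behind, while a fast day cannot ‘bank’ its unused capacity.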

Oh boy, this sounds like a dilemma, which can be visualized like this.

The dilemma visualized as a Conflict Resolution Diagram. It states: “To run a profitable company, we need to keep the Operating Expenses low, and we need to increase the financial throughput. In order to keep the Operating Expenses low, we need to utilize every process step. In order to increase financial throughput, we can’t utilize every process step. Fully utilizing and not fully utilizing is a conflict.” — Stefan Willuda

If we want to alter the policies that drive our decisions on whether or not to start new work, we first have to resolve this conflict. But bear with me: if the team we are looking at in this post can solve it — evolving from Snapshot A to Snapshot C — your team can do it, too.

The easiest way to resolve this conflict is to check whether the underlying assumptions that lead to it are really true. The conflict will fall apart if we can invalidate the assumptions that created it in the first place.

Let’s examine the assumptions.

  • A process step or a team generates more Operating Expenses if they are sitting idle.
  • Product development time can be chopped into equally valuable chunks and distributed lossless onto different tasks.

Think about those assumptions. Your gut feeling might already tell you that, although engraved in everyday practice, they don’t seem to match your perception. For good reasons! It turns out that Operating Expenses in product development do not correlate with the busyness or idleness of a team. An idle team gets the same salaries, runs the same computers, and uses the same office space as a super busy team. Only occasionally does being busy increase Operating Expenses — for example, through computation costs — which, to some extent, is even a reverse correlation in which idleness would decrease Operating Expenses. Of course, one could argue that not having this team at all would reduce the Operating Expenses — no doubt about that. However, most teams and most process steps are there for a reason. If you sack a team or skip a process step, you lose all the benefits that they add to value creation, making it wiser to have that team or process step than not to have it. So the first assumption doesn’t stand the test in the realm of product development work.

An idle product development team does not increase Operating Expenses.

Considering the second assumption, you might have already recognized that it also doesn’t survive the test of reality. In product development, four hours of uninterrupted time is absolutely not the same as eight times 30 minutes on changing tasks. I’m not going to examine the obvious any further.

  • Debunked: A process step or a team generates more Operating Expenses if they are sitting idle.
  • Debunked: Product development time can be chopped into equally valuable chunks and distributed lossless onto different tasks.

Debunking these implicit assumptions on product development work guides you on a path towards alternative policies that allow you to start the right amount of work at the right time. Assuming that the Operating Expenses stay relatively stable as long as we don’t significantly change the process or enlarge the product development team, we can solely focus on increasing the product delivery process’s financial throughput. This is a huge relief!

A tiny piece of practical advice

Allow me to give you a piece of practical advice. If you find the step-by-step thinking process in this article helpful, run through it with your whole team as well. If you and your team have not spoken openly about the implicit assumptions that let you take on more work than is healthy (personally and financially), the behavior you are aiming for will likely not kick in, or will deteriorate after a short time. Take your time to surface everything that implicitly drives current behavior. This can even be something like, “If I’m not seen as busy all the time, I will not receive a pay raise in the upcoming planning round.” or “Everyone here is super busy all the time. If I’m the only one having slack, I will be looked down on.” It’s important to call a spade a spade.

We already know that, to increase flow and financial throughput, we must avoid clogs caused by too much parallel work. But it might already dawn on you that you cannot indefinitely decrease the amount of work in the system — value is only created if actual work gets done. In the very practical Theory of Constraints (TOC), the right amount of work that needs to flow through the system — avoiding clogs on the one hand and harmful under-utilization on the other — is determined by the buffer in front of the constraint process step. What might sound academic is pretty straightforward, as you can see from Snapshot C’s operations.

A straightforward application of Drum-Buffer-Rope on a generic Kanban Board. This is Team C’s Kanban Board (red stickies are temporarily waiting).

The policy on releasing new work into the team’s delivery process looks at the buffer in front of the constraint process step (indicated by the star). As soon as the buffer gets ‘a hole’, new work is released into the delivery process.

This, of course, makes it necessary to identify your process’s constraint step. But don’t you worry. In product development, there is usually some sort of ticket tracking system like ‘Planview LeanKit’ or ‘Atlassian Jira’ in place to make work in process visible. As a first approximation, take your historical data on the team’s Flow Times (the time from starting the work until it’s done) and look at each process step’s average duration. Consider the process step with the longest average work item cycle time (the time for this particular process step from start to finish) as your constraint process step. Voilà!
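
As a sketch of this first approximation in code — the ticket data and step names below are made up, and the export format of your tracking tool will certainly differ:

```python
from collections import defaultdict
from statistics import mean

# Made-up export: (process step, cycle time in days) per finished work item.
tickets = [
    ("analysis", 1.5), ("development", 4.0), ("review", 2.0),
    ("analysis", 2.5), ("development", 6.0), ("review", 1.0),
]

# Group the cycle times per process step ...
by_step = defaultdict(list)
for step, days in tickets:
    by_step[step].append(days)

# ... and treat the step with the longest average as the constraint.
constraint = max(by_step, key=lambda s: mean(by_step[s]))
print(constraint)  # 'development' (average 5.0 days)
```

Remember this is only a first approximation; observing where work actually piles up on the board is at least as informative.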

Having identified the constraint process step makes it easy to put a buffer in front of it. A buffer is like a mini backlog that stores just enough work items to keep the constraint process step working. The optimal size depends on your process and needs some tuning. If the constraint process step suddenly runs out of work items and there is no ‘upstream’ blocker in your workflow, the buffer might need to be increased. If your work items get old while waiting in the buffer, you can reduce the buffer’s size.

Practically, this means adding a ‘buffer’ column to your team’s Kanban board with a minimum and a maximum number of tickets (WIP limit) for this column. To avoid artificially impeding the flow of work, remove all the other column-based WIP limits that you might already have in place. This Kanban board gives everyone on the team visibility into what’s going on in the whole process, which also allows everyone to decide whether or not to start new work from the backlog.

Now you have all the ingredients you need to increase the speed and throughput of your team reliably.

Make it actionable — policies and boundaries

After the thoughtful part of this post, it’s time for some action. Grab a pen and a sheet of paper and write down the new policies for your team on when to release new work into your product delivery process. Find your own words based on your insights from the passages above. Remember that those policies will explicitly replace the current policies you and your team have surfaced in the steps mentioned above.

Take those bullet points as a source of inspiration.

  • We aim for fast flow and high value creation in very short delivery cycles. By this, we achieve good predictability when it comes to Flow Time forecasts.
  • To achieve this, we only start new work from the backlog if the buffer has the capacity and the total number of tickets in front of the constraint process step is below X items.
  • We do not mind if process steps that are not the constraint are idle. On the contrary, we embrace and value slack time as a sign of our responsiveness and of our capability to improve the team’s fitness.
  • We want to get as close as possible to one-piece flow, delivering a work item from start to finish in one fell swoop.
  • We acknowledge that starting work too soon is the root of all evil.

Now write down what boundaries you will keep intact to preserve the flow of work. Again the following bullet points are just a source of inspiration.

  • Our work process starts with the process step [start] and ends after the process step [end] is completed.
  • All the work waiting to be worked on sits in a waiting step outside our product delivery process. We call this waiting step [backlog].
  • We, the team, are the only ones with authority to start new work (Pull). Work may never be started for us (Push).
  • Since planning and preparation should also be considered work on the work item, we defer that work until the downstream process steps can take on new work. Planning activities are visible on our Kanban Board as well.

Ensure that your boundaries explicitly state that your team’s management does not interfere with your team’s way of working. In other words, don’t let management with formal authority interfere with the value creation process. Try to talk about results, speed, and reliability instead. As you’ve seen by now, it’s pretty hard for outsiders to grasp what your team is doing if they have not gone through the process of making policies explicit. Don’t be furious if you hear sentences like: “I’ve seen that Susan on your team seems to have spare capacity since she was watching Java tutorials on YouTube. She’d better start working on something of value instead of wasting precious time!” If you like, you can try to make the underlying assumptions of such a statement explicit together and figure out whether they need to be invalidated to fit the context of product development. If you don’t, try to shield your team to avoid work being pushed into your product delivery process prematurely.

Don’t let formal management interfere with the value creation process.

I consider it wise to have those agreements visibly written down for the team and its working partners (I deliberately try to avoid the word Stakeholder here). This helps to stick to them, and it can nudge fruitful conversations about how to organize product delivery work.

Also, take those principles and policies into your retrospectives from time to time. Check with your team how well you are doing in terms of complying with those self-imposed policies and in terms of team performance. It can be helpful to follow the structure of the Flow-Centered Retrospective.

It might take some trial and error to find the right timing for replenishment — releasing new work into your product delivery process. Theoretically, all the information needed to make a sound pull decision is available to everyone on the team as long as the team keeps the Kanban Board up to date. This should make pulling new work possible continuously. However, in practice, it’s quite common to do some micro-planning for upcoming backlog items to understand who needs to collaborate closely and whether everything is available to deliver an item in one fell swoop after it’s pulled into the delivery process. This micro-planning makes synchronous replenishment necessary. Doing this in the daily standup routine is fine. Some teams out there also hold impromptu replenishment sessions when the buffer runs out of work items. I’m sure you’ll figure out what works for you.

Measure your flow and throughput

It’s super rewarding to see structural ‘interventions’ like the ones discussed in this post making a difference in your team’s performance. But how will you know? The simulation comes up with impressive numbers because it obviously measures those metrics. Discuss those measurements with your team as well. Usually, ticket management systems and digital Kanban boards support teams with basic metrics concerning speed, reliability, and throughput. The financial throughput usually cannot be tracked with the ticketing system; however, companies measure it anyway — maybe just not on a team level. Make clear that those metrics are the team’s property. You and your team measure your performance to become better. It’s like an athlete who wants to know if there is progress and if the training makes a difference. Since numbers on speed, reliability, and (financial) throughput are not based on estimates, they can be compared between teams. However, in the spirit of this blog post, those metrics shall guide your way to becoming better — not put arbitrary pressure on teams, as has happened frequently with useless metrics like velocity. Don’t be surprised if you achieve almost the same results as the team in this simulation, coming from Snapshot A and moving to Snapshot C.
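
As a small example of such a metric, the Flow Time percentiles shown in the Scatterplots earlier can be computed from a plain list of Flow Times. The sample data here is invented:

```python
from statistics import quantiles

# Invented Flow Times (in days) for finished work items of one team.
flow_times = [3, 4, 4, 5, 6, 7, 8, 9, 12, 15, 18, 25, 30, 42, 60]

# quantiles(..., n=100) yields the 1st..99th percentile cut points.
pct = quantiles(flow_times, n=100)
print(f"50th: {pct[49]:.0f}, 85th: {pct[84]:.1f}, 95th: {pct[94]:.1f} days")
```

An 85th-percentile line like the ones in the Scatterplots reads as: “85 percent of our work items were done within this many days” — a forecast that needs no estimation meeting.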

Keep an eye on your process step cycle times and try to bring them down over time, of course focusing on your constraint process step, since this is what will ultimately increase your team’s overall throughput. By performing the ‘Five Focusing Steps’, your team can continuously improve its performance (until the constraint is no longer within your team’s delivery process). There is plenty of literature out there on how to perform this ongoing improvement, and I’m not going to cover it in this blog post. Take a look at the deep dive sources at the end of this post to find some inspiration.

A word on change

The change described in this post is profound and very simple at the same time. It is profound because it deals with implicit assumptions and thus has a lot of transformative power. And simple since it’s just an exchange of policies and principles, and it’s basically just a decision. I’ve worked with teams executing this change-flip in a single retrospective.

Making changes last is on another level. A very effective way to make change stick is to reframe it: change is a means to a positive end, and the policies above can help you achieve this positive end. It’s also beneficial not to change alone. That’s why I invite you to alter those policies and practices together with your team as a joint effort. And engrave the changed policies and principles into your daily team routines to repeat their application. Life gets easier if you don’t have to rely solely on your willpower but can trust the process. That’s why I favor team rituals that encompass the policies practically.

Logical Thinking Processes

In this post, I’ve tried to mimic some of the Logical Thinking Processes that have emerged within the Theory of Constraints school of thought. Some years ago, I was inspired by the famous example from the audio guide ‘Beyond the Goal’ by Eliyahu Goldratt. Talking about new technology (like MRP and ERP systems back in the good old days), Eli Goldratt states:

“Technology can bring benefits if and only if it diminishes a limitation.”

So Goldratt suggests the following:

If you want to introduce new technology or new software, ask yourself four questions:

  1. What are the powers of that technology?
  2. What limitation is this going to diminish?
  3. What rules helped us accommodate the limitation before we had that new technology?
    Suppose you don’t find and articulate these rules. In that case, the chances are high that you will perpetuate these rules when using the new technology, which prevents you from diminishing the targeted limitation and getting significant benefits from your disruptive technology investment.
  4. What rules should we use now?

Goldratt concludes:

“It’s the hardest thing to find the new rules”.

With this in mind, I’ve tried to establish this line of thought within this blog post. I find it very helpful for almost every change initiative.

Let me know whether your team changed its policies and what the results have been.

Some of you might be interested in the simulation itself, how it is done, what the programmed assumptions are, and so on. I might write a separate blog post on that and link it here. In the meantime, feel free to contact me if you want to ask some questions. You can find the assumptions built into the simulation at the end of this post.

On my account

Feel free to follow me on Twitter or LinkedIn to receive excerpts of related content regularly.

Deep dive sources

Assumptions of the simulation

The simulation that generated all the data, screenshots, and videos above is based on assumptions that closely mimic the reality of a product delivery team. This is the nerdy stuff, so only continue reading if you want to know how the simulation operates.

Every workday, two new work items enter the team’s backlog, so at this rate there is always something for the team to work on. Moreover, each work item ages over time: if a work item is not started within ten workdays, it is removed from the backlog. This might not reflect your perception of reality, but it was designed to keep the flow of the simulation smooth.
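
As a sketch, these arrival and aging rules could look like the following Python snippet. The names and structure are my own assumptions; the actual simulation code is not published in this post.

```python
# Illustrative sketch of the arrival and aging rules described above.
# ARRIVALS_PER_DAY and MAX_BACKLOG_AGE come from the post; everything
# else is an assumed implementation.

ARRIVALS_PER_DAY = 2   # two new work items enter the backlog every workday
MAX_BACKLOG_AGE = 10   # items not started within ten workdays are removed


def simulate_backlog(days):
    """Track backlog items by their creation day; never-started items expire."""
    backlog = []  # creation day of each waiting item
    for today in range(days):
        # new items arrive every workday
        backlog.extend([today] * ARRIVALS_PER_DAY)
        # items that aged past the limit without being started are dropped
        backlog = [created for created in backlog
                   if today - created <= MAX_BACKLOG_AGE]
    return backlog
```

In this simplified form no items are ever pulled into work, so the backlog settles at a constant size: only the items of the last eleven workdays survive.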

To finish a work item, it has to run through an interdependent product delivery process. For the sake of clarity, the work items may only run ‘downstream’ once they have been started. The process itself consists of value-adding and non-value-adding (waiting) columns. This makes waiting time more visible in the simulation.

Each work item needs a random amount of effort per value-adding process step; for each step, the effort is drawn from a random distribution within a specific range.

Since effort in product development is never distributed in a Gaussian ‘normal’ way, the simulation adds an effort distribution factor that, to some extent, reflects reality: some things go pretty fast, many things take their time, and you also have black swans that take disproportionally longer than other work items. A robust product delivery approach can cope with this uncertainty, so the simulation takes it into account.

Of course, a work item may only leave a particular process step after all the effort of this process step has been applied.

A certain amount of effort can be applied to the work items per workday. We assume a team of six people who are rather T-shaped and can support each other without too much friction. Most of the team’s available capacity is spent on actual product development.

You may notice that the team does not focus much on the review process step. I’ve seen this quite commonly: sometimes it’s due to external dependencies, sometimes because the review may only be done by a Product Owner, and sometimes because ‘development’ is considered the team’s primary work. The simulation mimics this typical pattern, admittedly quite drastically.

Of course, no team member in the world can apply eight straight hours of uninterrupted time to their work. That’s why the simulation throws random disturbances into the mix: some days there are almost no interruptions, from time to time a whole working day is lost, and anything in between. We might argue about the specific numbers I’ve applied, but the overall pattern should be pretty realistic.

Since multi-tasking and frequent context switches pay a high toll, the simulation drains applicable effort per day as soon as multi-tasking kicks in. The simulation reduces the effort applied to the work item itself by a multi-tasking factor. So the higher the (mental) load on the team, the less effort is applied to do the work at hand, and the more effort is spent on the context switch itself.
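
The multi-tasking drain can be sketched as a simple capacity penalty. The 15% cost per additional parallel item is an assumed factor for illustration, not a figure taken from the simulation.

```python
def effective_effort(daily_capacity, items_in_progress, switch_cost=0.15):
    """Effort that actually reaches each work item per day.

    Every item beyond the first burns a share of the capacity on the
    context switch itself, so the higher the (mental) load, the less
    effort goes into the actual work at hand. switch_cost is an
    assumed penalty factor.
    """
    if items_in_progress <= 0:
        return 0.0
    penalty = switch_cost * (items_in_progress - 1)
    usable = max(0.0, 1.0 - penalty) * daily_capacity
    return usable / items_in_progress
```

With a daily capacity of 8 hours, a single item in progress receives the full 8 hours, while five parallel items receive only about 0.64 hours each, because 60% of the capacity is lost to context switching before the rest is split five ways.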

Although this is no exact science, I think these attention penalties can be considered realistic.

Each work item also has a specific value that would be generated if it entered the market quickly. This value drops over time; we assume that a short time to market is beneficial for the team and the company. The simulation does not include work items that incur a penalty if not delivered on time: work items either add value or lose all their value due to a long time to market.

Visual by Karl Bredemeyer

Whether a specific work item loses value, and how much, is calculated from a randomly selected cost-of-delay pattern (chosen from four basic patterns) and the time since that work item entered the team’s backlog. As you can see, some work items lose value quickly, some steadily, and some slowly.
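
The value decay might look roughly like this. The post only states that there are four basic patterns, so the four shapes and their parameters below are assumptions for illustration.

```python
def remaining_value(initial_value, days_in_system, pattern):
    """Market value left after a work item has spent some days in the system.

    The four pattern shapes are assumed stand-ins for the simulation's
    four basic cost-of-delay patterns.
    """
    if pattern == "fast":        # loses value quickly
        return initial_value * 0.90 ** days_in_system
    if pattern == "steady":      # linear decay over roughly 100 workdays
        return max(0.0, initial_value * (1 - days_in_system / 100))
    if pattern == "slow":        # barely decays at first
        return initial_value * 0.99 ** days_in_system
    if pattern == "deadline":    # full value until a cut-off, then nothing
        return initial_value if days_in_system <= 30 else 0.0
    raise ValueError(f"unknown pattern: {pattern}")
```

In a full simulation, each new work item would be assigned one of these patterns at random when it enters the backlog, and its remaining value would be read off at the moment it is delivered.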

Of course, the simulation tracks the cycle times for each process step and the overall flow time for the whole product delivery process.

The simulation aims to emulate the uncertainties of real-world product delivery that stem from the work item itself and the delivery ‘process’ with its interdependent events.

That’s it. Feel free to drop me a line if you have questions or remarks. You can find the simulation here to play around with the different policies to start new work.


idealo Tech Blog

BetaCodex Consultant, Former Scrum, Kanban and Management Consultant | Agile Coach | TOC Enthusiast | I believe that a humane global economy is possible.