Simulating the Negative Consequences of Multitasking on Flow, Throughput, and Value Generation

Using a simple simulation to make the multitasking madness observable at a glance and what to do about it.

Quite recently I had the chance to give a talk about the disastrous consequences of starting work too soon in product development. I said that starting work too soon is the root of all evil. And although this might have been an exaggeration, I wanted to make the point that starting work too soon leads to multitasking, multitasking leads to a loss of responsiveness and throughput, and this ultimately leads to reduced value generation. (If you want to read the transcript of the talk, you can find it here.)

In that talk, I presented simulation-generated charts that illustrate the massively negative effects of starting work too soon. Those charts intrigued parts of the audience, which made me confident enough to present them to you as well and to give some context on how they were derived and how you can use simulation to shed light on the effects described above.

Figure: The negative effects of multitasking on predictability and value generation. Comparison of two simulation results.

The multitasking effects

As you’ve seen in the charts above, multitasking has severe consequences for the overall throughput of a delivery system (like product development or service desk work). Let’s dive a bit deeper to reveal the full impact of multitasking based on a simple simulation.

If you want to better understand the simulation itself, you will find a detailed description, a video, and a chance to play around with the simulation in the last part of this post.

First, let’s take a look at one reason why it’s so tempting to multitask. The simulation reveals that teams that single-task, which means saying “no” to opportunities, miss opportunities quickly. The simulation presented here takes into consideration that ideas have an expiration date. As soon as the window of opportunity closes, the idea loses its relevance and the work item is terminated (maybe a stakeholder no longer asks for the work item or the season for that feature is over). The simulation counts this as a “drop” of a work item. However, this termination only happens if work on that item has not already started.

Take a look at the following simulation result.

Figure: The percentage of lost opportunities over a year. While multitasking seems superior in the first 20% of the simulation year, it is not in the last 80%.

You quickly notice that the teams that multitask can start several work items in a very short time. In other words, they don’t have to say “no” to the opportunities. I’m pretty sure that in the real world this makes a lot of people happy. That’s why no opportunity is missed for 74 days. The teams that single-task miss their first opportunities after 14 days.

From the perspective of “starting responsiveness”, the teams that multitask seem to have an advantage here. However, the tide turns on day 75. In the simulation, we assume that the teams at some point can no longer handle additional work items. This means that even the multitasking teams have to stop starting new work items at some point, since they are completely overloaded by then. And that is the point where the multitasking teams lose almost every opportunity: they are fully loaded and busy all the time trying to finish the work they have already started. The single-tasking teams also lose opportunities (the simulation is realistically configured so that there are more opportunities than the teams can handle). Nevertheless, these teams remain responsive and have a fair chance of seizing opportunities as they emerge.

Although starting soon seems to be more responsive at first glance, it ultimately isn’t when it comes to finishing work.

Figure: The cumulative amount of finished work items over time. While single-tasking creates a steady stream of finished items, multitasking doesn’t.

While the teams that single-task create a constant stream of finished work items, the teams that multitask hardly finish anything over long periods. When these teams do finish work, they do it in larger batches than the single-tasking teams. I don’t want to discuss the consequences of large delivery batches in this post; however, you might want to keep in mind that they bear a cost.

From the perspective of delivery responsiveness, the single-tasking teams perform better. Interestingly, the responsiveness of starting new work is usually valued highly in organizations, which makes it even harder to resist starting new work before the work already started gets finished.

So although multitasking seems superior from the perspective of starting work, which is usually highly rewarded in organizations, multitasking teams quickly lose their ability to react to changing user demands since they are completely busy with all the work that they have so promisingly started already. Single-tasking teams have to strictly say “no” to starting new work whenever there is already work in process. Although this may not be highly rewarded by the people putting demand onto those teams, in the long run those single-tasking teams are highly responsive when it comes to new demands.

“Well, yes, that was one run of the simulation, but if the needed process time per work item is randomly distributed, how does it look next time?” you may ask. To answer that, I ran 25 simulations each for single-tasking and multitasking to compare the two approaches.

Figure: 25 simulation runs comparing the output of single-tasking teams and multitasking teams. Look at the different scaling of the axes.

You might recognize that the results of those 25 runs produce interesting patterns for the output of those teams. Although there is variation in the output due to the randomness of the process times, the single-tasking teams reliably deliver more, and they do so with seemingly high reliability.

You are also free to play around with the simulation by yourself.

If we further assume that every work item has a particular, randomly (lognormally) distributed value that it can create per day (e.g. revenue per day), and that we pick randomly which work item we start working on, we can simulate the effect of multitasking and single-tasking on the value generation from the organization’s perspective.
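To make that assumption concrete, here is a minimal Python sketch (not the Insight Maker model itself; the distribution parameters and day ranges are illustrative assumptions) of how such a value-per-day could be drawn and how finishing earlier translates into more accumulated value:

```python
import numpy as np

rng = np.random.default_rng(42)
N_ITEMS = 50

# Illustrative assumption: each finished work item generates value every remaining
# day of the year, with a lognormally distributed value per day (e.g. revenue/day).
value_per_day = rng.lognormal(mean=np.log(100), sigma=0.8, size=N_ITEMS)

# Two hypothetical outcomes: the same items finished early vs. finished late.
early_finish_days = rng.integers(low=20, high=180, size=N_ITEMS)
late_finish_days = rng.integers(low=180, high=360, size=N_ITEMS)

value_early = np.sum(value_per_day * (365 - early_finish_days))
value_late = np.sum(value_per_day * (365 - late_finish_days))

print(f"Value when finishing early: {value_early:,.0f}")
print(f"Value when finishing late:  {value_late:,.0f}")
```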

Figure: The cumulated value generated over time. Single-tasking delivers value sooner and accumulates more value over time.

The effect is astonishing. Although single-tasking doesn’t seem to be as responsive as multitasking at first glance, it generates value much faster. In this idealized simulation, the single-tasking teams generate 21 times the value of the multitasking teams. Looking at real organizations with real people, this might be overly optimistic; nevertheless, single-tasking has a positive impact on the overall value generation.

Predictability is often desperately missed in modern product development work. “When will it be done?” is still one of the most frequently asked questions when talking to teams. Many teams and product managers have surrendered to the so-called VUCA world and simply assume that the world of product development is completely unpredictable. And while this might be true to some extent when it comes to the complexity of social or economic systems, it certainly is not true for the delivery part of product development. However, if we measure the delivery times (Lead Times or Flow Times) of our teams, we could easily assume that it is a completely unpredictable process.

Very often we don’t know when started work will be done. Contemporary management approaches try to tackle that assumed unpredictability with excessive planning and tighter monitoring.

But what if the uncertainty in product delivery is not the cause of the unpredictable delivery times but the symptom of something else? If you take a look at the charts below, you might recognize that the Flow Times (time from starting a work item to finishing it) of the individual work items are much more predictable for the single-tasking teams, while they look almost arbitrary for the multitasking teams.

Figure: The Flow Time (time elapsed from starting work to finishing the work) varies dramatically for each single work item for the multitasking teams.

The simulation is built in a way that the actual time it takes to finish each work item varies randomly (lognormal distribution). This means some work items need only a little time to be done and some need a lot of effort. And although this actual effort is randomly distributed for both the single-tasking and the multitasking teams, we recognize that for the single-tasking teams the Flow Time is pretty stable at around 10 to 20 days for all work items, while the Flow Time for the multitasking teams varies drastically from 20 days to more than 300 days.

“Your process is unpredictable. What you may not realize, though, is that you are the one responsible for making it that way.” Daniel Vacanti *

So while contemporary management approaches try to plan the flow of work itself, it seems to be more effective to stop that planning altogether and instead alter the constraints under which the product delivery work is performed. Reducing the number of parallel work items and allowing Pull mechanics (starting work as soon as capacity is available) instead of Push mechanics (starting work as soon as the work is waiting) are two very effective ways to bring predictability back to your delivery process. Once the preconditions for product development change, it is more effective to simply measure the Flow Times (from starting a work item to finishing it) to answer the omnipresent question of “When will it be done?”.

The effects described above — the reduced throughput, the reduced value generation, and the unpredictability — result from the excessive introduction of Waiting Time into the value generation process if you start work too early.

Figure: Flow Efficiency explained by Steve Tendon and Daniel Doiron. Graphic from https://leanpub.com/workflow

The lean and agile folks express the ratio of value-adding and non-value-adding time in terms of Flow Efficiency: it is the ratio of the total value-adding Touch Time to the Flow Time, which includes both the Touch Time and the non-value-adding Waiting Time.
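Written as a formula, using the terms from the paragraph above:

```latex
\text{Flow Efficiency} = \frac{\text{Touch Time}}{\text{Flow Time}} = \frac{\text{Touch Time}}{\text{Touch Time} + \text{Waiting Time}}
```

For example, a work item with 5 days of Touch Time and 45 days of Waiting Time has a Flow Time of 50 days and thus a Flow Efficiency of 5 / 50 = 10%.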

Without going into any details, let’s assume that a high Flow Efficiency is preferable to a low one, since it means less waste of the available team time.

If you take a look at the simulation results below you can immediately spot that single-tasking yields a higher Flow Efficiency in the long run. This leads to more efficient use of available team time and thus much higher throughput.

Figure: Flow Efficiency of the single-tasking teams compared to the multitasking teams over the simulated year.

Although highly simplified, the simulation reveals the negative effects of starting work too early. Starting work too early leads to multitasking, which has disastrous consequences on throughput, value generation, and predictability.

Minimizing the amount of work in process and using Pull instead of Push are effective countermeasures to multitasking and have a huge impact on the productivity of product development teams.

Those countermeasures are usually under the team’s control, although it may not seem that way at first glance. If a team feels pushed to start work prematurely because of “outside pressure”, this simulation may help to make the dire consequences explicit and reduce the pressure that is put on product development teams.

Feel free to play around with the simulation and get a feeling for the effects of the boundary conditions (Push vs. Pull, WIP limits).

Why simulations?

Delivering great value to customers while simultaneously achieving joy, satisfaction, and flow at work is something that I want to support in my profession. Unfortunately, the prerequisites for this kind of work environment are quite demanding, and more often than not I’ve struggled to generate the understanding and the willingness to create the necessary preconditions for it. I often thought that people don’t understand the full magnitude of their decisions, for instance when releasing work too early into the value delivery system.

Feedback cycles in real-life situations are often too long to effectively learn from, and even if you have fast feedback cycles, organizations by their nature are not a great place for well-designed experiments in which you can immediately see and compare the effects of your decisions.

Simulations, although often rather primitive, may help us to overcome those obstacles and support fast-paced learning.

The following video displays the simulation in action: 5 single-tasking teams and 5 multitasking teams try to burn through 100 work items. I find it interesting to see the real-world patterns of dealing with work revealed all too clearly in a simple simulation.

If you want to play around with the simulation by yourself, feel free to check out the simulation here.

On my account

Feel free to follow me on Twitter or LinkedIn to regularly receive excerpts of related content.

The simulation setting

If you want to dive deeper into the simulation itself you might want to read on. I am going to describe the setup of the simulation and its underlying assumptions.

Let’s begin super simple. Imagine five teams that share a pool of work items.

Each work item can be done by any of those teams; there is no specialization considered in this simulation. To finish a work item, it only needs to be worked on by a single team. To keep the simulation easy to use, I’ve left out dependencies between teams for this blog post, although I know how common they are in scaled product development.
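To give a feel for the structure, here is a minimal Python sketch of such a setup. The original simulation is built in Insight Maker, so everything below (class names, fields, numbers) is my own illustrative approximation, not the author’s model:

```python
from dataclasses import dataclass, field

import numpy as np

rng = np.random.default_rng(7)

@dataclass
class WorkItem:
    effort_left: float       # remaining work in team-days (drawn from a lognormal)
    expiry_left: float       # remaining days until the opportunity is gone
    state: str = "waiting"   # waiting | in progress | blocked | done | dropped

@dataclass
class Team:
    items: list = field(default_factory=list)   # work items associated with this team

    @property
    def busy(self) -> bool:
        return any(i.state in ("in progress", "blocked") for i in self.items)

# Five identical teams share one pool of waiting work items. There is no
# specialization: any team can finish any work item on its own.
teams = [Team() for _ in range(5)]
pool = [WorkItem(effort_left=rng.lognormal(np.log(10), 0.5),
                 expiry_left=rng.lognormal(np.log(30), 0.5))
        for _ in range(100)]

print(f"{len(teams)} teams, {len(pool)} waiting work items")
```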

You may want to play around with the number of teams. However, consider that the simulation software used here is browser-based, which means your browser has to run all the calculations for the simulation in real-time on your machine. If you have many teams and loads of work items in the simulation, your machine may slow down.

In this simulation, each team can either be busy or be calling for a work item.

Each work item can be either waiting, in progress, blocked (meaning started but currently not worked on), done, or dropped.

The work items need a certain amount of effort to be finished. This effort is randomly calculated (lognormal distribution around the mean effort per work item) for each work item. The simulation emulates the 365 days of a year (yes, I know that we usually don’t work on weekends). Every day a work item is in progress, one day is subtracted from the remaining effort of that work item. As soon as enough progress time has been spent on a work item to cover the needed amount of work, the work item is marked as done.
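A minimal sketch of that daily bookkeeping for a single work item (an approximation in Python; the mean effort of 10 days and the sigma are assumptions, not the simulation’s actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
MEAN_EFFORT = 10   # assumed mean effort per work item, in team-days

# Draw the effort for one work item from a lognormal distribution around the mean.
effort_left = rng.lognormal(mean=np.log(MEAN_EFFORT), sigma=0.6)
state = "in progress"

for day in range(1, 366):            # the simulation covers 365 days
    if state == "in progress":
        effort_left -= 1             # one day of progress consumes one day of effort
        if effort_left <= 0:
            state = "done"
            print(f"Work item finished on day {day}")
            break
```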

Every work item has a randomly assigned expiry time (lognormal distribution around the mean expiry time). Every day a work item is waiting to be worked on, the expiry time is reduced by one day. After the whole expiry time is consumed, the work item “drops”. When a work item is dropped, it can no longer be worked on. Think of it as a missed opportunity.
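The drop mechanic could be sketched the same way (again illustrative; the mean expiry of 30 days is my assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
MEAN_EXPIRY = 30   # assumed mean expiry time, in days

expiry_left = rng.lognormal(mean=np.log(MEAN_EXPIRY), sigma=0.6)
state = "waiting"

for day in range(1, 366):
    if state == "waiting":
        expiry_left -= 1             # every waiting day consumes the window of opportunity
        if expiry_left <= 0:
            state = "dropped"        # missed opportunity: the item can never be worked on
            print(f"Work item dropped on day {day}")
            break
```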

As soon as a work item gets worked on, it is associated with a particular team. This is true whether the work item is pulled by the team or pushed onto the team. To make this relation explicit, the work item steadily moves toward its associated team. However, the work item only moves while it is in progress.

From time to time, new work items join the pool of already waiting work items. Initially, the spawn rate is configured in a way that there should always be a work item to work on, even if work items drop from time to time.
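In a hand-rolled version this could look like the following sketch. The arrival process of the real simulation is not specified here, so the Poisson arrivals and the rate of two items per day are pure assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
SPAWN_PER_DAY = 2    # assumed arrival rate, tuned so there is always work available

pool = []            # waiting work items, represented here only by their arrival day
for day in range(1, 366):
    arrivals = rng.poisson(SPAWN_PER_DAY)   # illustrative: random number of new items per day
    pool.extend([day] * arrivals)

print(f"{len(pool)} work items arrived over the simulated year")
```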

The heart of the simulation is the possibility to make visible the effects of pushing work into a system faster than the system can process it.

“The principle remains the same: any time you try to shove items into a system at a faster rate than items can exit the system, you are met with disastrous consequences. This principle seems immediately obvious and intuitive. Yet, for whatever reason, we constantly ignore this rule when we manage knowledge work.” Daniel Vacanti *

You can adjust the probability with which a work item that is waiting to be processed is pushed onto one of the teams.

The single-tasking teams obviously don’t accept any Push, which is why their probability is zero percent. It is fun to play around with the simulation and see the effects of a high degree of Push on predictability and throughput emerge.

Even if the workflow for the teams is based on Push instead of Pull, the simulation acknowledges that you cannot push work onto teams indefinitely. At some point, a team can no longer start new work. Although this is not a work-in-progress limit in its original sense, it somehow limits the maximum amount of work in progress for each team.
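Put together, the Push mechanic and the hard cap could be sketched like this (the probability of 50% and the cap of 10 items per team are placeholder values, not the simulation’s actual settings):

```python
import random

random.seed(3)

PUSH_PROBABILITY = 0.5    # 0.0 for the single-tasking teams, > 0 for multitasking teams
MAX_ITEMS_PER_TEAM = 10   # assumed hard cap: at some point a team cannot take on more work

def maybe_push(team_items: list, waiting_item) -> bool:
    """Push a waiting item onto a team with some probability, unless the team is full."""
    if len(team_items) >= MAX_ITEMS_PER_TEAM:
        return False                        # the team is completely overloaded
    if random.random() < PUSH_PROBABILITY:
        team_items.append(waiting_item)     # work is started although the team is busy
        return True
    return False

team_items = []
pushed = sum(maybe_push(team_items, item) for item in range(50))
print(f"{pushed} of 50 waiting items were pushed onto the team")
```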

Even if a team works in multitasking mode, it can actually only work on one work item at a time, since we assume that the whole knowledge of the team is needed to make progress on that work item. (We may argue about this restriction in the simulation.) If a work item gets started but then another work item gets the team’s attention, the former work item becomes “blocked”. You could also call this Waiting Time again, but I wanted to make clear that the work item really is blocked from making progress, and I didn’t want to confuse the different Waiting Times within this simulation.
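A sketch of that rule: only one started item makes progress on any given day, and every other started item sits in the “blocked” state (the dictionary representation is just for brevity):

```python
def work_one_day(started_items: list) -> None:
    """One simulated day: the team works on one item, every other started item is blocked."""
    if not started_items:
        return
    active = started_items[0]                 # illustrative choice of the item worked on today
    active["state"] = "in progress"
    active["effort_left"] -= 1
    for item in started_items[1:]:
        item["state"] = "blocked"             # started, but not worked on today

items = [{"state": "waiting", "effort_left": 5} for _ in range(3)]
work_one_day(items)
print([item["state"] for item in items])      # ['in progress', 'blocked', 'blocked']
```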

Multitasking bears the often hidden cost of Context Switching Time. This is non-productive time spent switching from one unfinished work item to another. To be more realistic, the simulation takes this Context Switching Time into consideration when a team switches from one work item to another. The initial configuration assumes that it drains one day of the team's capacity if the team shuffles work items.

However, I know that not every work item switch costs a full day to adjust to the new work item. That’s why this Context Switching Time only kicks in with a certain probability. You may play around with the different probabilities and see their effects.
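As a sketch, the probabilistic Context Switching Time could look like this (the 50% probability is only an example; the cost of one full day matches the initial configuration described above):

```python
import random

random.seed(4)

CONTEXT_SWITCH_COST = 1           # one lost team-day per switch (initial configuration)
CONTEXT_SWITCH_PROBABILITY = 0.5  # assumed: the cost only kicks in half of the time

def productive_capacity(switched_today: bool) -> int:
    """Team-days available for real work on a given day (at most 1 in this sketch)."""
    if switched_today and random.random() < CONTEXT_SWITCH_PROBABILITY:
        return 1 - CONTEXT_SWITCH_COST    # the whole day is lost to re-orientation
    return 1

productive_days = sum(productive_capacity(switched_today=True) for _ in range(100))
print(f"{productive_days} productive days out of 100 days with a context switch every day")
```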

I think it’s fun to see the simulation in action and watch all the work items flying around, but the real benefit of this simulation comes from the graphs plotted while the simulation is running.

Since I’ve talked about the most relevant charts at the beginning of this post, I am not going deeper into them here. Let the simulation run and watch the different graphs being plotted in real-time.

Insightmaker.com offers some documentation on how to run a simulation. If you want to see how results change when you run many rounds (comparable to a Monte Carlo simulation), you may want to use the sensitivity testing feature in Insight Maker.
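If you prefer to script this yourself instead of using Insight Maker’s sensitivity testing, the idea is simply to repeat a stochastic run many times and summarize the spread. A minimal stand-in (the toy run below is not the actual model, just a placeholder for one simulation run):

```python
import numpy as np

rng = np.random.default_rng(5)

def one_run() -> int:
    """Stand-in for one simulation run: how many items fit into 365 team-days of work."""
    efforts = rng.lognormal(np.log(10), 0.6, size=200)   # candidate work items
    budget, finished = 365.0, 0
    for effort in efforts:
        if budget < effort:
            break
        budget -= effort
        finished += 1
    return finished

results = np.array([one_run() for _ in range(25)])       # 25 rounds, as in the charts above
print(f"median={np.median(results):.0f}, "
      f"p10={np.percentile(results, 10):.0f}, p90={np.percentile(results, 90):.0f}")
```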

I highly recommend the “compare results” feature of insightmaker.com.

If you run more than one simulation you can compare certain results with each other.

If you want to take a look at the raw simulation results data I got you covered. Download the compressed file here.

Final words

I hope that this simulation makes the effects of starting work too soon more accessible and understandable. I hope you have as much fun as I do playing around with the simulation. Maybe it helps you to communicate the necessity of saying “no” to your colleagues.

Hopefully, this simulation encourages you to give work-in-progress limits and single-tasking a try. This may ultimately lead to more productive and more joyful work.

If you have questions regarding this simulation feel free to leave a reply or get in touch on twitter.

