There is a myth that a product must be broken down among smaller and smaller teams, each addressing a specific problem. A lot of managers talk about authority and ownership. But does ownership really exist when a team becomes a mere screwdriver for a product?
Imagine a tactical organization that lets you change a team's size and the capacity dedicated to each product in your portfolio simply by reprioritizing your backlog. Now imagine that this is possible while also reducing coordination costs and improving management in environments with high variability. Nice, huh?
It was to solve this problem that the Unified Flow was born. In this model, a single team serves multiple projects through a single process. But, first, I'll explain a little better how this works.
Does your company have an extensive portfolio of products to manage, or perhaps many fronts served by many teams? How much does the size of your teams (or squads) vary? Maybe you follow the Scrum teachings to the letter, with 7 to 10 people on a team, but often that's not feasible: some initiatives have two or three people, and others end up with more than twenty.
Does the amount of knowledge required vary? Some work takes little skill to implement. Other work demands deep knowledge of performance and optimization, complex algorithms, or systems orchestration.
And when multiple external suppliers start interacting with your team, the situation gets even worse.
It is common to think a process should avoid variability as much as possible. In manufacturing, that is true: if one item is not identical to the next, we call it a defect. Avoiding variability is good in that context.
Product development is different. We do not create the product itself. Instead, we make a recipe to create that product, whether physical or digital, through a series of experiments.
For example, in a candy factory, a recipe for a lemon pie is created by a cook and replicated thousands of times. Making the recipe involves a lot of experimentation, experience, and creativity.
If the product is created and sales are not going well, something needs to be done. To improve sales, the cook adds a little sugar to the recipe and a little less flour. This variability can increase or decrease sales, so there is a risk, but the product does not evolve without these changes.
You cannot generate new value for your product without adding variability. In the case of digital products, what varies is the plan. New features will come in, go out, or change. Some vendors will delay deliveries and force a change in scope, bugs will appear, and that beautiful Gantt chart that several hands approved is gone.
Variability will always show up, and with it, built-in risk. Being exposed to risk is a centerpiece of value creation. We need a process to accommodate it because what matters is not the variability. It's the economic impact of that variability.
By knowing and preparing for the worst risks, letting the smallest ones happen, and continuously measuring and analyzing, we become creators of value out of the unexpected.
When projects start to go wrong, managers first think: "let's allocate more people." Did the project get delayed? More people. Changed scope? More people. Putting more people in seems to be the universal answer everyone is looking for.
In the classic book The Mythical Man-Month, Fred Brooks coined the famous phrase that came to be known as "Brooks's Law."
"Adding manpower to a late software project makes it later" Fred Brooks.
It establishes that managers cannot partition projects neatly into small, discrete pieces that developers can implement independently. Furthermore, whenever we add new people to a project, productivity drops because of the extra communication needed to bring them into the game. In short: the people who are producing have to stop to help and teach the newcomers about the project.
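A back-of-the-envelope illustration of why this communication cost climbs so fast, using the pairwise-channel count from Brooks's own analysis:

```latex
\text{communication channels} = \frac{n(n-1)}{2}
```

A team of 5 has 10 channels to keep in sync; grow it to 10 people and there are 45. Doubling the team more than quadruples the coordination overhead.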
In a project that is already behind schedule, pulling productive people away to teach newcomers, who will take days or weeks to start producing, may not be a good idea.
The same is true of short assignments, such as moving a team to another project for a brief period. The productivity lost to the context switch may not be worth the change.
This also applies in other cases. Each project, product, or feature has its own variations in difficulty and size, resulting in variations in team size: smaller projects with fewer people, and larger projects with many people and/or multiple teams.
Usually, an average manager uses this approach to hit a deadline he has promised someone, and everything is done to get the delivery out. The result is a poorly made product no more stable than a house of cards. After months of bug fixes and performance issues, users start to run away, not to mention the team, which usually runs off to the next job right after a project like this.
In my own experience, this is pure chaos. To this day, I haven't seen any success with this approach, but some managers keep trying.
With all this variability, it is tough to achieve predictability, since each case is different. So what do I base my estimates and forecasts on?
Queuing theory is a branch of probability that studies queues through mathematical analysis, providing models that make it possible to scale systems where demand grows randomly.
A queue forms when demand for items exceeds the system's capacity; it is the system's overflow. We are used to queues in our daily lives: at the supermarket, at the bank, in traffic. Inventory can also be considered a queue, because if you sold everything as soon as you produced it, inventory would never accumulate.
In all these cases, the queues are easy to identify: I can see the people in front of me, the cars stopped ahead, and the inventory in the store's warehouse. But there is a type of queue that is harder to see: the knowledge work queue.
Knowledge work takes place in the world of ideas. When I have a large stock of ideas, they usually live in emails, notes, chats, or the numerous tools we use to manage knowledge, and they are visible to only a few people. Every time I create a new idea for a product, that idea doesn't exist physically; it silently enters the inventory, whether it is ever executed or not. Our queues are the work-in-progress inventory.
Even with visual management methods, managing the queue is not so simple. First, it is necessary to understand and classify the types of work and the associated risk. They are not parts of a machine. I might have an entire project hidden inside a card, while the next is a request to change the button's color.
When a system runs at more than 80% of its capacity, its queues grow potentially without bound. Think about cars: if 100% of the road is filled with cars, every vehicle stops whenever one car stops.
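A minimal sketch of that effect, assuming the textbook M/M/1 model (random arrivals, a single server), where the average number of items in the system is ρ/(1−ρ) for utilization ρ:

```python
# Average number of items stuck in an M/M/1 system as utilization rises:
# L = rho / (1 - rho), a standard queuing-theory result.
for rho in (0.50, 0.70, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:.0%} -> {rho / (1 - rho):5.1f} items in the system")
```

At 50% utilization there is on average 1 item in the system; at 80% there are 4; at 95%, 19. The curve explodes as you approach full capacity, which is why "everyone 100% busy" guarantees queues.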
Often a job's queue time is longer than its touch time (the time someone is actually working on it), unnecessarily increasing delivery time.
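One way to put a number on this, using the common lean/Kanban notion of flow efficiency:

```latex
\text{flow efficiency} = \frac{\text{touch time}}{\text{touch time} + \text{queue time}}
```

A card that takes 2 days of actual work but waits 8 days in queues has a flow efficiency of only 20%: most of its delivery time is pure waiting.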
All this waiting dramatically increases risk, because it delays customer feedback. Delayed feedback creates even more variability, letting us go further down the wrong path. That overloads the system, because we still have to deliver results. Finally, it reduces quality, because with deadlines that tight, who has time for quality?
The intuitive thinking is that everyone should always be working and giving their best: work hard, then work harder. However, if that effort goes into things that do not generate results, it is not only useless but actively harms the entire system.
This difficulty makes queue management the main job of the flow manager.
"Managing queues is key to improving economics in product development." Don Reinertsen
The Unified Flow is a way to reduce the economic impact of all this variability. It is a workflow model based on Toyota's pull systems and queuing theory, built on Don Reinertsen's work on flow in product development.
Having multiple projects spread across multiple teams has risk built in. If each team has its own queue, any problem with that team can block its entire queue. If the ERP vendor or a partner couldn't deliver the integration API on time, and I can't pull anything from the project without it, what do I do with this team? Or what if that same team needs to deliver on time, but unexpected technological changes demand a much greater effort?
There are many queuing models. Two of them matter most in our context.
The first, the one-queue-per-server model, is prevalent and can be seen at supermarket checkouts. In our world, it is the model of one project per team: each project has its own queue of items to be done, and only one team serves each queue. If a queue has problems, we get an idle server; if a server has issues, we get a stopped queue.
The second is the shared (or virtual) queue, usually a single queue served by multiple servers. It can also be seen at the supermarket's express checkout or in a bank's waiting room.
If a server has a problem in a shared queue, everyone in the queue feels only a small delay. Mathematically, this means the variance in handling time is smaller.
"In the one-queue-per-server model, a single problem job can block the entire queue."
Shared queues lead to less variance in processing time, while the one-queue-per-server model leads to larger, unnecessary queues.
The performance of the two queue structures is different, even when both have the same demand and the same capacity.
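A rough simulation of that difference, under simplifying assumptions (random arrivals, random job sizes, nine teams at 85% utilization); the numbers are illustrative, not from any book:

```python
import heapq
import random
from statistics import mean

def dedicated_queues(n_servers, lam, mu, n_jobs, rng):
    """n independent M/M/1 queues: one project per team, one queue per server."""
    waits = []
    for _ in range(n_servers):
        w = 0.0
        for _ in range(n_jobs):
            waits.append(w)                     # this job waits w before service
            service = rng.expovariate(mu)       # how long the job takes
            gap = rng.expovariate(lam)          # time until the next job arrives
            w = max(0.0, w + service - gap)     # Lindley's recursion for the wait
    return waits

def shared_queue(n_servers, lam, mu, n_jobs, rng):
    """One shared queue feeding n servers: the unified-flow shape."""
    free_at = [0.0] * n_servers                 # min-heap: when each server frees up
    t, waits = 0.0, []
    for _ in range(n_servers * n_jobs):
        t += rng.expovariate(n_servers * lam)   # pooled arrival stream
        earliest = heapq.heappop(free_at)
        start = max(t, earliest)                # wait only if every server is busy
        waits.append(start - t)
        heapq.heappush(free_at, start + rng.expovariate(mu))
    return waits

rng = random.Random(42)
args = (9, 0.85, 1.0, 20_000)                   # 9 teams, 85% utilization
print(f"one queue per server: mean wait {mean(dedicated_queues(*args, rng)):.2f}")
print(f"single shared queue:  mean wait {mean(shared_queue(*args, rng)):.2f}")
```

Running this, the dedicated queues wait several times longer on average than the shared queue, even though demand and capacity are identical in both setups.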
Don Reinertsen is one of the most well-known and influential authors in product management. He has influenced people like David Anderson on the Kanban Method and Eric Ries on Lean Startup. So it's no small thing, right?
His latest book, "The Principles of Product Development Flow," deals with the economic forces in product development, such as queues, variability, batch size, and decentralization.
"We can serve random demands with fewer queues if we aggregate that demand and server with a shared resource." Don Reinertsen
There is a myth that resource decentralization is always good. The idea of a team 100% allocated to a single project came with the rise of agile methods, especially Scrum, which believes that focused teams result in better response times. But this is not always true.
With proper management, environments with high variability can benefit significantly from the centralization of resources. Furthermore, this centralization is essential in preventing unexpected queues from forming.
When several projects each have varying demand, combining those demands yields an overall relative variability that is lower than the variability of each component.
Bringing it into our world: say you have 9 senior developers and 9 projects in your portfolio. You could put one developer on each project, which is a good option if the variability of demand is low. But if variability is high, as we've seen, the better option is to centralize those nine senior developers into a single team. This pools each project's variability and reduces its economic impact on our system.
From Don Reinertsen's research, this reduces the variability by roughly a factor of 3.
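That factor follows from basic statistics, assuming the nine demand streams vary independently with similar means (μ) and standard deviations (σ). Summing n independent streams multiplies the standard deviation by √n but the mean by n, so the relative variability (coefficient of variation) drops:

```latex
\mathrm{CV}_{\text{pooled}}
  = \frac{\sqrt{n}\,\sigma}{n\,\mu}
  = \frac{1}{\sqrt{n}}\,\mathrm{CV}_{\text{single}}
```

With n = 9 projects, the pooled demand's relative variability is 1/√9 = 1/3 of each project's own: the factor of 3 above.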
In practice, if you have 2 projects, and both suffer from variability, unifying your Flow can reduce the overall economic impact.
Suppose several teams serve the same queue, or a single larger group serves all of them. In that case, I can reduce total variability while maintaining delivery, even if at a slower pace.
If a large variability event occurs, such as Project Y having to stop for legal reasons or to wait for another supplier's implementation, the queue is reprioritized, and the grouped team simply continues working on the demands of Project X.
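A hypothetical sketch of those pull mechanics (the UnifiedBacklog class and its names are illustrative, not a prescribed implementation): a single prioritized backlog tagged by project, where blocking a project simply makes the team pull the next unblocked item.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class WorkItem:
    priority: int                      # lower number = pulled first
    project: str = field(compare=False)
    title: str = field(compare=False)

class UnifiedBacklog:
    """One prioritized queue serving every project in the portfolio."""
    def __init__(self):
        self._heap: list[WorkItem] = []
        self._blocked: set[str] = set()

    def add(self, item: WorkItem) -> None:
        heapq.heappush(self._heap, item)

    def block(self, project: str) -> None:
        self._blocked.add(project)     # e.g., waiting on a supplier's API

    def pull(self) -> WorkItem | None:
        """Pop the highest-priority item whose project is not blocked."""
        skipped, found = [], None
        while self._heap:
            item = heapq.heappop(self._heap)
            if item.project in self._blocked:
                skipped.append(item)   # keep blocked items for later
            else:
                found = item
                break
        for item in skipped:
            heapq.heappush(self._heap, item)
        return found

backlog = UnifiedBacklog()
backlog.add(WorkItem(1, "Y", "Integrate payment API"))
backlog.add(WorkItem(2, "X", "Improve search results"))
backlog.block("Y")                     # Project Y stops for legal reasons
print(backlog.pull().title)            # -> "Improve search results"
```

The team never goes idle: the highest-priority unblocked work flows to whoever is free, and unblocking Project Y later puts its items back in contention automatically.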
This type of situation is common in the daily life of companies. The market moves quickly, and agility is needed to change the plan and serve customers better.
Each customer has a different pressure at a given moment. Therefore, projects can gain or lose preference at any time, depending on various factors.
A fixed team is inefficient in this kind of high-variability scenario. Of course, some products can have a team focused on a single project for a long time; in that case, the problem does not exist.
But we gain a considerable advantage in scenarios where the same company or department needs to create and maintain several products, or handle small pieces of work that pop up at any time.
Unified Flow is a tactical solution that helps reduce a company's global coordination cost, increases efficiency in team allocation and sizing, and offers significant advantages by removing the need to manage people directly, opening the door to self-management.
In the following articles, I'll talk a little more about how we apply the ideas of the Unified Flow: the tools, the metrics, and the problems we encounter in practice.