Using the Assembly Line Approach for Value Stream Optimization

This article presents the Assembly Line Model as a practical tool for the systematic optimization of value streams. It explores how to extend the model to measure performance, identify bottlenecks, and establish a structured measurement system that supports continuous, data-driven improvement.

“Systematic improvement is not a goal – it’s a habit built on metrics, feedback, and intent.”1

Setting the Stage

The Assembly Line Model has already been introduced in the article Visualization of Value Streams and applied to Value Stream Identification in Using the Assembly Line Approach for Value Stream Identification. This article builds on that foundation and focuses on how the model supports systematic Value Stream Optimization.2

The core of Value Stream Thinking can be understood through four key steps3:

  1. Value Stream Identification – What is my Value Stream, and why does it exist? → Value Stream Canvas
  2. Current State Description – How can I describe my Value Stream in a way that enables improvement? → Visualize/model the Value Stream with all necessary information.
  3. Initial Optimization – How can I begin organizing around value to establish a solid foundation? → Option building and Team Topologies
  4. Continuous Optimization – How can I systematically improve the Value Stream over time? → Measure performance and systematically improve.

While the steps follow a logical sequence, they are not strictly linear. Throughout the lifecycle of the Value Stream, information is continuously added or refined to improve how the stream is managed and optimized. Later steps can influence earlier ones—for example, Value Stream Mapping may reveal that the actual scope begins or ends differently than initially assumed. Similarly, early measurement can help establish a baseline or highlight areas where deeper analysis is needed.

How to Optimize a Product Development Value Stream

There are several factors that determine the speed and quality of a Value Stream. Since our paradigm is iterative and incremental development, the development process can be viewed as a series of learning cycles. The shorter and faster these cycles are, the faster the entire system can learn and adapt.4 We can influence the speed and effectiveness of feedback through the following approaches:

  1. Shift Left: Detect and correct issues earlier in the development process.
  2. Increase Cycle Frequency: Run more learning cycles within the same timeframe.
  3. Accelerate Cycle Speed: Move work through the system more quickly – from idea to delivery and back.
  4. Improve Cycle Reliability: Reduce failures and rework to ensure consistent flow.
  5. Eliminate Waste: Remove non-value-adding activities that slow down the system.

[Figure: Basic Stages of Product Development]

Which Metrics Help Drive Optimization?

The metrics we choose to track depend on the specific goals of our optimization efforts. Common objectives include increasing speed, improving efficiency, enhancing product quality, boosting employee satisfaction, and fostering greater transparency. Many of these metrics are interrelated – for example, eliminating waste from the system typically leads to faster delivery, as teams can focus more on value-adding activities. Similarly, improving quality at one stage reduces the need for rework at later stages, allowing value to flow more quickly through the system. For now, we will focus on Flow and Quality Metrics, although this broader topic clearly warrants deeper exploration in the future.

Flow Metrics

Flow metrics are not a new invention – they’ve been used in various forms across software and systems development for years. What Mik Kersten did with the Flow Framework™5 was to formalize and package these metrics into a coherent model, making them easier to apply across value streams. Similarly, the Scaled Agile Framework (SAFe)6 has published supporting guidance on these concepts. The illustration below shows how these metrics align with a typical development value stream, as modeled by the Assembly Line. The accompanying table summarizes each metric and its definition.

[Figure: Flow Metrics in the Assembly Line]

It’s important to note that different organizations and frameworks may use slightly different terms or interpretations – so aligning on definitions before discussing or applying the metrics is essential.

  • Flow Time – The total time it takes for a flow item to go from “start” to “completion” (e.g., from creation to delivery).
  • Flow Efficiency – The ratio of active work time to total flow time; highlights delays and wait states.
  • Flow Load – The number of flow items in progress; reflects the amount of work-in-process (WIP).
  • Flow Velocity – The number of flow items completed over a period; shows throughput of value delivery.
  • Flow Distribution – The proportion of different types of flow items (e.g., Features, Defects, Risks, Debt) being worked on; indicates investment balance.
  • Flow Predictability – The consistency and reliability of delivery times across flow items; measures stability of value flow.
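
To make these definitions concrete, here is a minimal Python sketch that derives the first four flow metrics from a list of work item records. The record fields (created, completed, active_days, type) are illustrative assumptions, not a prescribed schema; in practice this data would be exported from your work management tool.

```python
from datetime import datetime

# Hypothetical work item records; fields are assumptions for illustration
items = [
    {"created": datetime(2024, 3, 1), "completed": datetime(2024, 3, 11),
     "active_days": 4, "type": "Feature"},
    {"created": datetime(2024, 3, 2), "completed": datetime(2024, 3, 6),
     "active_days": 3, "type": "Defect"},
    {"created": datetime(2024, 3, 5), "completed": None,
     "active_days": 1, "type": "Feature"},
]

done = [i for i in items if i["completed"]]

# Flow Time: elapsed days from creation to completion, per finished item
flow_times = [(i["completed"] - i["created"]).days for i in done]

# Flow Efficiency: active work time divided by total flow time
efficiencies = [i["active_days"] / ft for i, ft in zip(done, flow_times) if ft]

# Flow Load: items still in progress (WIP)
flow_load = sum(1 for i in items if i["completed"] is None)

# Flow Velocity: items completed within the measurement window
flow_velocity = len(done)

print(f"Avg flow time: {sum(flow_times) / len(flow_times):.1f} days")
print(f"Avg flow efficiency: {sum(efficiencies) / len(efficiencies):.0%}")
print(f"Flow load (WIP): {flow_load}, flow velocity: {flow_velocity}")
```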

Quality Metrics

Quality metrics are measurable indicators used to assess the effectiveness, reliability, and overall performance of a product, system, or process in meeting defined quality standards.7 They help teams identify defects, monitor trends, improve delivery practices, and ensure that outcomes align with customer expectations and business goals. Commonly used types are listed in the table below.

  • Defect Backlog – Number of known defects that have not yet been resolved.
  • Defect Flow Rate – Rate at which defects are detected and resolved over a given time period.
  • Escape Rate – Number of defects that escape to the next station; the most prominent variant is the escape rate to the customer(s).
  • Mean Time to Repair (MTTR) or Defect Resolution Time – Average time taken to resolve a defect once discovered.
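
As a companion to the table, here is a minimal sketch of how these quality metrics could be computed from defect records. The fields (detected, resolved, escaped) are hypothetical; real data would come from a defect tracker.

```python
from datetime import datetime

# Hypothetical defect records; "escaped" marks defects found after a hand-off
defects = [
    {"detected": datetime(2024, 3, 1), "resolved": datetime(2024, 3, 4), "escaped": False},
    {"detected": datetime(2024, 3, 2), "resolved": None, "escaped": True},
    {"detected": datetime(2024, 3, 6), "resolved": datetime(2024, 3, 7), "escaped": False},
]

# Defect Backlog: known defects not yet resolved
backlog = sum(1 for d in defects if d["resolved"] is None)

# Escape Rate: share of defects discovered after the hand-off
escape_rate = sum(d["escaped"] for d in defects) / len(defects)

# MTTR: average days from detection to resolution (resolved defects only)
repair_days = [(d["resolved"] - d["detected"]).days for d in defects if d["resolved"]]
mttr = sum(repair_days) / len(repair_days)

print(f"Backlog: {backlog}, escape rate: {escape_rate:.0%}, MTTR: {mttr:.1f} days")
```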

What is our Starting Point?

Product development can begin from a variety of starting points, and the DevOps Evolution Model offers a simple typology to classify them. In practice, it has proven valuable to first understand the current state at a meta level and align on the desired target state before beginning optimization with the Assembly Line model. This usually triggers a discussion about shift-left approaches, transaction costs, and batch sizes (a separate article on this is planned).

[Figure: The DevOps Evolution Model]

Model #1: Although teams develop in iterations, additional effort is still required to release the solution. This “undone work” consists of activities not included in any team’s Definition of Done (DoD) and remains after standard development tasks are completed.

Model #2: Teams begin their shift-left efforts by incorporating undone work into the Definition of Done, gradually bringing release-related activities closer to development.

Model #3: Undone work is reduced to the point where it can be completed within each iteration, enabling near-incremental delivery at the iteration level.

Model #4: Undone work is fully eliminated. The organization adopts a continuous delivery model, enabling high-quality releases with every Feature, Story, or code change.

Measuring the Value Stream – First Steps

Measurement within the Assembly Line can be performed at varying levels of detail. We begin with an end-to-end perspective and then examine the individual components more closely. To better reflect the full end-to-end flow, we’ve extended the Assembly Line model to include the requirements phase.8

[Figure: Measurement Model for the Assembly Line]

In plain language, our optimization should make us faster, deliver better quality, and make us more productive (improve throughput). Translated into flow metrics, this means:

  • Reduce flow time and increase deployment frequency (make us faster)
  • Reduce the number of escaped defects (deliver better quality)
  • Reduce the amount of work required to deliver the product (improve throughput)

If our optimization does not show improvements in these metrics, it is not really an optimization. Whatever we do should have a positive impact on these metrics. Let’s look at each metric in more detail.

Measuring Flow Time

Since different types of work items move through the Value Stream, it’s important to first define what we want to measure.
At the most granular level, we might look at a simple code change – how long does it take for a single line of code to move through the Value Stream, or more specifically, through the DevOps pipeline?9 Another relevant item type is a defect fix – measuring the time from when a defect is detected to when the fix is delivered and validated.

A third perspective is the Feature level – how long does it take from the start of feature implementation to the confirmation that the feature is functioning as intended?

Ultimately, all this information supports continuous improvement. However, the most critical insight is the frequency with which we can deliver value to the customer10 – that is, how often we can provide a new, high-quality version ready for hand-off. Equally important is how quickly the customer can consume that version and provide feedback.11 Since we are interested in tracking performance over time, all statistics must be measured across a defined time period. Below are a few examples of measurements, as outlined in the “Measurement Model for the Assembly Line” above. The final decision on what to measure – and in what order – should be made based on the specific context of the value stream.12
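
As an illustration of measuring flow time per item type, the sketch below computes the median and an approximate 85th percentile from per-type samples. The numbers and type names are hypothetical, chosen only to show the shape of such a measurement.

```python
from statistics import median

# Hypothetical flow times in days, grouped by item type over one quarter
flow_times = {
    "code_change": [0.2, 0.4, 0.3, 1.0, 0.5],
    "defect_fix":  [2, 5, 3, 8, 4],
    "feature":     [12, 20, 15, 30, 18],
}

for item_type, times in flow_times.items():
    times = sorted(times)
    p85 = times[int(0.85 * (len(times) - 1))]  # simple nearest-rank percentile
    print(f"{item_type}: median {median(times)} days, 85th percentile {p85} days")
```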

Defect Resolution Time as a Flow Time Example

Defect Resolution Time is a good example of a flow time metric for the flow item type “Defect.” It measures the time elapsed between the detection of a defect and its full resolution. To support continuous improvement, individual steps within the resolution process can be measured separately to identify potential bottlenecks. In our example, the process is divided into three phases: from Detection to Submission, from Submission to Code Fix, and from Code Fix to Confirmation of Resolution in the integrated product.
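
A minimal sketch of this phase-level measurement, assuming a hypothetical defect record with a timestamp for each hand-off:

```python
from datetime import datetime

# Hypothetical timestamps for one defect's journey through the three phases
defect = {
    "detected":  datetime(2024, 3, 1, 9, 0),
    "submitted": datetime(2024, 3, 2, 14, 0),
    "fixed":     datetime(2024, 3, 5, 11, 0),
    "confirmed": datetime(2024, 3, 9, 16, 0),
}

phases = [("detection -> submission", "detected", "submitted"),
          ("submission -> code fix", "submitted", "fixed"),
          ("code fix -> confirmation", "fixed", "confirmed")]

# Duration of each phase reveals where resolution time accumulates
for name, start, end in phases:
    hours = (defect[end] - defect[start]).total_seconds() / 3600
    print(f"{name}: {hours:.0f} h")

total_days = (defect["confirmed"] - defect["detected"]).days
print(f"Total resolution time: {total_days} days")
```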

Deployment Frequency

The graph displays clusters of five working days per week along with the corresponding deployment status (hand-off to the next station):

  • Green indicates a successful deployment (e.g., Tuesday in the first week).
  • Red represents a failed deployment attempt (e.g., Monday in the first week).
  • White shows that no deployment attempt was made on that day.
  • A stacked column indicates multiple deployment attempts on the same day.

Deployment frequency: on average, there were 1.3 successful deployments per week.
Success rate: 4 of 7 deployment attempts succeeded (57%).
Increasing deployment frequency accelerates feedback loops, which in turn has a positive impact on overall flow time.
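
Both figures can be derived directly from a deployment log. A minimal sketch, assuming a simple list of (date, success) attempts over a three-week window that reproduces the numbers above:

```python
# Hypothetical deployment log: (ISO date, succeeded?) over three weeks
deployments = [
    ("2024-03-04", False), ("2024-03-05", True),
    ("2024-03-12", True),  ("2024-03-14", False),
    ("2024-03-18", True),  ("2024-03-20", True), ("2024-03-20", False),
]

weeks = 3
successes = sum(1 for _, ok in deployments if ok)

frequency = successes / weeks                # successful deploys per week
success_rate = successes / len(deployments)  # share of attempts that succeed

print(f"Deployment frequency: {frequency:.1f} successful deploys/week")
print(f"Success rate: {successes} of {len(deployments)} attempts ({success_rate:.0%})")
```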

Defect Escape Rate

The Defect Escape Rate measures the proportion of defects that are discovered after a release—or more generally, after a hand-off from one station to the next in the value stream. Its purpose is to reveal how many defects have bypassed internal quality controls and testing processes. A high escape rate often points to weaknesses in testing, automation, or development practices, and serves as a key indicator for improving upstream quality to reduce downstream firefighting and rework.

Measuring the Defect Escape Rate—and analyzing which defects escape from which stages—is used to improve the test strategy by shifting defect detection to earlier points in the process. This shortens feedback cycles and ultimately reduces overall flow time.
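
A per-station view of the escape rate can be computed from two counts: defects found at a station and defects that escaped past it. A minimal sketch with hypothetical counts:

```python
# Hypothetical counts of defects found at each station vs. escaped past it
stations = {
    "component test": {"found": 40, "escaped": 10},
    "integration":    {"found": 8,  "escaped": 2},
    "customer":       {"found": 2,  "escaped": 0},
}

for name, c in stations.items():
    total = c["found"] + c["escaped"]
    rate = c["escaped"] / total if total else 0.0
    print(f"{name}: escape rate {rate:.0%} ({c['escaped']} of {total})")
```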

Defect Backlog and Flow Rate

The Defect Flow Rate displays the number of incoming and resolved defects over a given time period (day, week, or month). If the red bar (incoming defects) is higher than the green bar (resolved defects), more defects were detected than fixed. The black line represents the cumulative number of open defects—it rises when unresolved defects accumulate and falls when resolution outpaces detection. A growing backlog indicates that development is introducing defects faster than it can fix them. This signals a decline in quality and leads to increased effort for defect management and testing. It often suggests that new feature development is prioritized over quality, resulting in longer stabilization phases – a pattern that aligns with Model #1 in the DevOps Evolution Model.
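
The cumulative open-defect line can be reconstructed from weekly incoming and resolved counts. A minimal sketch with hypothetical numbers:

```python
# Hypothetical weekly counts of incoming vs. resolved defects
incoming = [12, 15, 9, 14, 8]
resolved = [10, 11, 12, 9, 13]

open_defects = 20  # backlog at the start of the observation period
for week, (inc, res) in enumerate(zip(incoming, resolved), start=1):
    open_defects += inc - res  # cumulative backlog: the "black line" in the chart
    print(f"Week {week}: incoming {inc}, resolved {res}, open {open_defects}")
```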

Apply Measures at All Stages

While the above measures were discussed primarily in the context of the final hand-off, they can be applied to each stage (blue boxes) in the Assembly Line. This helps identify bottlenecks in terms of both speed and quality, as the slowest or least reliable stage ultimately determines the overall performance of the system. Applying metrics at each stage enables targeted improvements where they will have the greatest impact on the effectiveness of the entire value stream.
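
A simple way to surface stage-level bottlenecks is to compare the same metrics across stages. A minimal sketch, assuming hypothetical per-stage averages for flow time and escape rate:

```python
# Hypothetical per-stage averages, measured like the final hand-off metrics
stages = {
    "team backlog":    {"flow_time_days": 3.0, "escape_rate": 0.05},
    "component build": {"flow_time_days": 1.5, "escape_rate": 0.20},
    "integration":     {"flow_time_days": 6.0, "escape_rate": 0.10},
    "release":         {"flow_time_days": 2.0, "escape_rate": 0.02},
}

# The slowest stage limits speed; the leakiest stage limits quality
slowest = max(stages, key=lambda s: stages[s]["flow_time_days"])
leakiest = max(stages, key=lambda s: stages[s]["escape_rate"])

print(f"Speed bottleneck:   {slowest} ({stages[slowest]['flow_time_days']} days)")
print(f"Quality bottleneck: {leakiest} ({stages[leakiest]['escape_rate']:.0%} escapes)")
```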

[Figure: Which of the stages/components is the bottleneck?]

Balancing the Cost and Value of Measurement

Measuring the performance of a Value Stream is comparable to purchasing information: it involves costs – for collecting, validating, analyzing, and presenting the data – and it has a quality dimension13, as the reliability and accuracy of the data determine how well it reflects reality and, ultimately, how useful it is for decision-making.

Conclusion: Making Optimization Measurable and Systematic

Optimizing a value stream is not a one-time event—it’s a continuous discipline grounded in visibility, measurement, and learning. The Assembly Line Model provides a structured, visual way to deconstruct complex development flows into manageable stages, making it easier to spot friction, delays, and quality issues across the end-to-end system.

In this article, we extended the Assembly Line Model from visualization and identification to practical optimization. We showed how flow and quality metrics—when applied not just at the final hand-off but across every stage—can highlight systemic bottlenecks in both speed and quality. These insights enable targeted improvements where they matter most.

Importantly, measurement must be balanced: meaningful metrics help drive action, but they come with costs in time, tooling, and effort. By aligning what we measure with our specific goals—whether it’s faster delivery, higher quality, or more predictable flow—we ensure that the investment in measurement yields real value.

Systematic improvement doesn’t happen by chance. It emerges from a habit of measuring what matters, learning from feedback, and adjusting with intent. The Assembly Line Model helps teams anchor this habit in a concrete, scalable way—enabling organizations to build not only better products, but better systems for delivering them.

Notes & References

  1. This quote was created during a conversation between me (Peter Vollmer) and ChatGPT. I was seeking a statement that reflects the mindset of systematic improvement—grounded in measurement, feedback loops, and purposeful action—based on my past experience with successful optimizations on the Assembly Line. ↩︎
  2. The Assembly Line model can be used throughout the entire lifecycle of a Value Stream. Depending on the current stage, new information is simply added to the existing model – so that earlier work remains intact and continues to provide value. ↩︎
  3. This is a simplified perspective focused on a single Value Stream, without addressing the broader transformation activities and the more detailed tasks required—both of which are described elsewhere on the website. However, this simplification may help clarify the topic at hand. ↩︎
  4. A detailed explanation can be found in the Assembly Line Whitepaper ↩︎
  5. Originally introduced in his book: Kersten, M. (2018). Project to Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework™. IT Revolution Press. Further information can be found at: https://flowframework.org/ ↩︎
  6. See, e.g. https://framework.scaledagile.com/measure-and-grow or https://framework.scaledagile.com/make-value-flow-without-interruptions/. ↩︎
  7. See also IEEE‑STD‑1061‑1998, Standard for a Software Quality Metrics Methodology and ISO/IEC 9126‑1. ↩︎
  8. The requirements phase can vary significantly across different value streams. In this example, we assume that the organization is already structured around value, with a product-level backlog in place and team-level backlogs derived from it. The arrows between the team backlog and components are intentionally left vague, as we are not yet addressing team structure (e.g., component teams vs. stream-aligned teams). While team organization will have a significant impact on the performance of the value stream, our focus here is solely on measuring the current state performance. At this stage, the Assembly Line model can be depicted independently of those organizational details. ↩︎
  9. In that scope we are in the middle of the DevOps domain and will apply the DevOps mindset, principles and techniques. ↩︎
  10. This frequency is critical to agility—in other words, how quickly can we make a change to the product when needed? We’ve all encountered situations where development teams say it’s too late or too risky to implement a change, or where an urgent security vulnerability requires an immediate fix. Such changes are often classified by impact and urgency using labels like Major, Minor, Minor.Minor, Patch, or Hotfix – each typically following a slightly different process variant. ↩︎
  11. It’s often a chicken-and-egg problem, which is why it can be beneficial to deliver more frequently than the customer is able to consume new versions. This may even encourage the customer to optimize their own processes. This is especially relevant at the hand-off points between value stream networks. ↩︎
  12. We provide here a brief description of each metric to convey the basic idea behind it. Each metric warrants its own dedicated guidance article to fully explore its value, the nuances of accurate measurement, how to interpret the results effectively, and how to derive actionable improvement measures. ↩︎
  13. See also the YouTube recording Agile Austin: Optimize Flow of Development Value Streams, where I discuss how to improve the reliability of value stream metrics. ↩︎