Visualization of Value Streams
This article explores how value streams can be effectively visualized through different modeling approaches—from classic lean maps to modern development-oriented views. It introduces the Assembly Line Model, a new way to represent software and cyber-physical product development. The focus is on modeling the flow from backlog to delivery, helping teams gain clarity, alignment, and insight into how value is truly created.
"All models are wrong, but some are useful." – George Box
Visualization is Modeling
Visualizing a value stream is, at its core, an act of modeling. As George Box famously said, “All models are wrong, but some are useful.” This reminds us that while no model can fully capture the complexity of a real-world process, the right model—focused on relevant attributes and the appropriate level of detail—can be an invaluable tool. A well-crafted model helps people understand the system, engage in meaningful discussions, and solve real problems. The more suitable the model, the greater the likelihood of arriving at effective solutions.
Translated to our needs, this explains why so many different modeling approaches exist. There are the traditional models from the early days of Lean Manufacturing, often called material- and information-flow maps; enhanced models that introduce swim lanes and a more product-development-oriented view; and DevOps-oriented models as used in the Scaled Agile Framework. Not to forget the simple models for high-level value stream explanations and a number of value stream landscape models.



A new Model tailored for Product Development
Each of these models serves a distinct purpose and may be well-suited depending on the context in which it is applied. The closer the model aligns with its intended use, the more effectively it can support informed analysis and decision-making.
Notably, many of these value stream models originate from the context of mass production, where repeatability and efficiency are primary concerns. This raises the question: to what extent does that paradigm align with the nature of modern product development, particularly in environments that emphasize iterative and incremental approaches?
To explore this, it is useful to compare the underlying assumptions of the mass production mindset with those of the product development mindset.

Traditional value stream mapping originates from manufacturing, where the goal is to produce consistent batches of similar items on a production line. At the end of the process, you expect a uniform set of nearly identical products. In this context, going back to a previous station is seen as a failure – it indicates that a part deviates from the standard, introducing unwanted variability into an otherwise controlled and repeatable process.

But software development doesn’t follow the rules of mass production. We’re not producing batches of identical items—instead, we’re evolving a single product over time. Features and improvements are added incrementally or in parallel, and the product becomes more refined as it moves through the development process. The end result isn’t a collection of uniform outputs, but a continuously improved version of the same product. In this context, going back isn’t a failure—it’s a vital feedback loop. Revisiting earlier decisions often leads to better solutions, deeper understanding, and higher quality. This kind of positive variability—where ideas improve over time—is not just accepted; it’s essential.
Finding the right Model for Product Development
Creating an effective model for software and cyber-physical product development requires accounting for the unique characteristics of these domains. Unlike traditional manufacturing, they operate in dynamic, iterative, and largely invisible2 environments. A detailed explanation of the proposed Assembly Line Model—which addresses these challenges—is provided in the whitepaper Reimagining Value Stream Mapping: A Fresh Approach for Software and Cyber-Physical Systems. In this article, we offer a high-level overview to provide an initial understanding of the model.
A short Description of the Assembly Line Model
The assembly line serves as a simplified model of a typical product development process, structured around its main phases.

- From Idea to Backlog: This phase encompasses the process of defining what the product should achieve—whether through requirements, specifications, user stories, or other forms of description. It translates strategic intent into actionable items.
- Backlog: The backlog serves as a prioritized (ranked) list of work items, representing a structured set of instructions for what should be built next.
- From Backlog to Delivery: This phase covers the actual development work—designing, implementing, testing, and preparing the product for release. It is the heart of the product creation process.
- From Delivery to Value Realization: Simply delivering a product does not automatically generate value. Value is only realized when the product is deployed, adopted, and used effectively by end users—fulfilling its intended purpose (or in some cases, delivering unexpected outcomes).
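The four phases above can be sketched as a minimal data model. This is a hypothetical illustration of the idea (phase names, fields, and the `WorkItem` class are my own, not part of the Assembly Line Model itself):

```python
from dataclasses import dataclass

# Hypothetical phase names -- illustrative only, not taken from the whitepaper.
PHASES = [
    "idea_to_backlog",      # defining what the product should achieve
    "backlog",              # ranked list of work items
    "backlog_to_delivery",  # design, implementation, testing
    "delivery_to_value",    # deployment, adoption, actual use
]

@dataclass
class WorkItem:
    title: str
    rank: int               # backlog position; lower rank = higher priority
    phase: str = PHASES[0]

    def advance(self) -> None:
        """Move the item to the next phase of the value stream."""
        i = PHASES.index(self.phase)
        if i < len(PHASES) - 1:
            self.phase = PHASES[i + 1]

item = WorkItem("Support offline mode", rank=1)
item.advance()
print(item.phase)  # backlog
```

The `rank` field reflects the backlog's nature as a prioritized list: it orders items, it does not group them into batches.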
In this context, we will focus on the most complex segment: from backlog to delivery. This phase represents the core of the development flow and often poses the greatest modeling challenge. While the surrounding phases are certainly important, they are typically more straightforward to visualize.

1 – Identifying the product (value delivery): The first step, often more challenging than expected, is gaining a clear understanding of the product being delivered and the nature of the final hand-off. When analyzing a value stream, defining its scope is essential: where does responsibility begin, and where does it end?
A common pitfall is thinking about the product solely in terms of what the user can accomplish with it. While user outcomes are important, they don’t necessarily help us model how the product is developed. Instead, we need to focus on what is actually handed off at the end of the development process.
In a straightforward software example, this might be a fully documented and tested software package. In the case of cyber-physical systems, it could involve both hardware and the embedded software that controls it.
In very large and complex environments, a single value stream can become so extensive that it must be partitioned into smaller, more manageable value streams. In such cases, the value stream under observation typically delivers a sub-product—a component or capability that contributes to a larger system. The final end product is then an aggregation of many such sub-products, each developed and delivered through its own dedicated value stream3.
2 – Final integration & testing: Once the product has been identified, we can begin visualizing the value stream. The key question at this stage is: What components are integrated to build the final product, and what must happen to make it ready for delivery (as defined in step 1)?
This phase is typically illustrated by the blue circles in the diagram box. These represent activities such as component release management4, integration/build processes, and various forms of testing (e.g., system, performance, compliance).
The green circle at the end represents the acceptance criteria—the point at which the integrated product is considered ready for hand-off or release, aligning with the delivery expectations outlined in step 1.
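The flow described in step 2 can be sketched as a sequence of integration activities followed by an acceptance gate. The activity names and checks below are stand-ins of my own, not taken from the whitepaper:

```python
from typing import Callable, List

# Hypothetical sketch of step 2: integration activities ("blue circles")
# run in sequence, followed by an acceptance gate ("green circle").

def run_integration(components: List[str],
                    activities: List[Callable[[List[str]], bool]],
                    acceptance: Callable[[List[str]], bool]) -> bool:
    for activity in activities:
        if not activity(components):
            return False             # feed back into development, don't ship
    return acceptance(components)    # ready for hand-off only if accepted

# Example with trivially passing stand-ins for build and system test.
ok = run_integration(
    ["control-sw", "drive-fw"],
    activities=[lambda c: len(c) > 0,   # "integration/build"
                lambda c: all(c)],      # "system test"
    acceptance=lambda c: True,          # "acceptance criteria"
)
print(ok)  # True
```

A failed activity here routes work back rather than forward, matching the article's point that going back is a feedback loop, not a defect.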
3 – Assembled components: While step 2 focused on the activities required to integrate components into a final product, this step identifies and describes the components themselves that are being assembled. For each of these components, we then recursively examine how they are constructed from their own sub-components.
4 – Stop decomposition: This decomposition continues until we reach a sufficient level of detail—where further breakdown no longer adds meaningful insight or value.
5 – Suppliers: The same principle applies to supplier-delivered components: in many cases, their internal structure is either unknown or irrelevant to the modeling effort, and can therefore be treated as black boxes within the value stream.
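Steps 3 to 5 together describe a recursive walk over an assembly tree. The following sketch shows one way to model that, under my own assumptions (component names and the tree shape are illustrative, not from the whitepaper):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of steps 3-5: the product as a tree of components.

@dataclass
class Component:
    name: str
    supplier: bool = False                        # step 5: black box
    parts: List["Component"] = field(default_factory=list)

def decompose(c: Component, depth: int = 0) -> List[Tuple[int, str]]:
    """Walk the assembly tree (step 3), stopping at suppliers (step 5)
    or at leaves where further breakdown adds no insight (step 4)."""
    nodes = [(depth, c.name)]
    if c.supplier:
        return nodes                              # internal structure unknown
    for part in c.parts:
        nodes.extend(decompose(part, depth + 1))
    return nodes

product = Component("Robot controller", parts=[
    Component("Control software", parts=[
        Component("Motion planner"),
        Component("RTOS kernel", supplier=True),
    ]),
    Component("Drive unit", supplier=True),
])
for depth, name in decompose(product):
    print("  " * depth + name)
```

The supplier flag makes the stop condition explicit: the walk never descends into a supplier component, mirroring the article's advice to treat such parts as black boxes.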
References
- https://commons.wikimedia.org/w/index.php?curid=28553995 ↩︎
- Unlike in manufacturing, where each station and material flow is visible on the shop floor, much of software development happens in a less tangible environment. Many activities are carried out by individual developers or executed through automated pipelines, making the overall process far less visible and harder to observe directly. ↩︎
- In such cases, we often refer to a value stream landscape—a structured network of interconnected value streams, each delivering a sub-product or capability. Together, these value streams contribute to the development and delivery of the overall solution, with each playing a defined role within the broader system. ↩︎
- These represent activities such as selecting the components to be used, reviewing release notes to understand what’s new, identifying known issues, and gathering any other necessary information. ↩︎