
On Value Stream Management in DevOps, and Seeing Problems as Solutions

You know that thing when a term emerges and you kind of get what it means, but you think you’d better read up on it to be sure? Well, so it was for me with Value Stream Management, as applied to agile development in general and DevOps in particular. So, I have done some reading up.

Typically, there seems to be some disagreement around what the phrase means. A cursory Google of “value stream DevOps” suggests that “value stream mapping” is the term du jour; however, a debate continues over the difference between “value stream mapping” (ostensibly about removing waste, in lean best practice) and “value streams”. For once, and for illustration purposes only, I refer to Wikipedia: “While named similarly, Lean value stream mapping is a process-based practice that seeks to identify waste, whereas value streams provide a higher-level overview of how a stakeholder receives value.”

Value streams are also seen (for example in this 2014 paper) as different to (business) processes, in that they are about making sure value is added, versus being about how things are done. This differentiation may help business architects, who (by nature) like precision in their terminology. However the paper also references Hammer & Champy’s 1993 definition of a process, which specifically mentions value: “Process is a technical term with a precise definition: an organized group of related activities that together create a result of value to the customer.” Surely a process without value is no process at all?

Meanwhile analysts such as Forrester have settled on Value Stream Management, which they reference as “an emerging market” even though at least some of the above has been around for a hundred years or so. Perhaps none of the terminological debate matters, at least to the people trying to do things with whatever the term means. Which is what, precisely? The answer lies in the restating of a problem as a solution: if value stream management is the answer, the challenge comes from a recognition that things are not working as well as they could be, and are therefore not delivering the value they should.

In the specific instance of DevOps, VSM can be seen as a direct response to the challenge of DevOps friction, which I write about in this report. So, how does the pain manifest itself? The answer is twofold. For people and organisations that are already competent at DevOps, particularly those cloud-native organisations that are DevOps-by-default (and might wonder what other approach could exist), the challenge is knowing whether specific iterations, sprints and releases are of maximum benefit, delivering something of use as efficiently as possible.

In this instance, the discipline of value stream management acts as Zen master, asking why things are as they are and whether they can be improved. Meanwhile the ‘emerging market’ of VSM refers to tooling that smooths and simplifies development and operational workflows, enabling the discipline to be implemented and, hopefully, maximising value as a result. Which gives us another “problem-as-solution” flip: while many of the tools available today are API-based, enabling their integration into workflows, they have not always been built with end-to-end value delivery in mind.
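To make that measurement question concrete, here is a minimal sketch, in Python with entirely hypothetical data and field names, of the kind of metric a value stream lens encourages: how long each change takes to travel from commit to production, commonly called lead time for changes. It illustrates the discipline rather than any particular vendor’s tooling.

from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import List

@dataclass
class ChangeRecord:
    """One unit of work flowing through the value stream (hypothetical shape)."""
    committed_at: datetime  # when the change was committed
    deployed_at: datetime   # when it reached production

def lead_times_hours(changes: List[ChangeRecord]) -> List[float]:
    """Elapsed hours from commit to production deploy, per change."""
    return [(c.deployed_at - c.committed_at).total_seconds() / 3600 for c in changes]

if __name__ == "__main__":
    # Illustrative data only: two changes that shipped in the same release.
    changes = [
        ChangeRecord(datetime(2018, 11, 5, 9, 0), datetime(2018, 11, 7, 17, 0)),
        ChangeRecord(datetime(2018, 11, 6, 14, 0), datetime(2018, 11, 7, 17, 0)),
    ]
    print(f"median lead time: {median(lead_times_hours(changes)):.1f} hours")

Tracked release over release, a figure like this gives the Zen master something to point at: is the stream actually getting faster, and is the value arriving sooner?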

A second group feeling the pain concerns organisations that see DevOps as an answer but have yet to harness it in a meaningful way beyond individual initiatives; many traditional enterprises fall into this category, and we’ve held various webinars about helping organisations scale their DevOps efforts. For these groups, value stream management offers an entry point: it suggests where effort should be focused, not on DevOps as an end in itself but on DevOps as a means of delivering increased, measurable value from software.

In addition, it creates a way of thinking about DevOps as practical workflows, enabled by automation tools, as opposed to ‘just’ a set of philosophical constructs. The latter are fine, but without some kind of guidance, organisations can be left with a range of tooling options and no clear idea of how to make sure they are delivering. It’s for this reason that I was quite keen on GitHub’s announcement of Actions a couple of weeks ago: standardisation, not just of principles but also of processes and tools, is key to efficiency.

The bottom line is that, whatever the terminology, we are moving away from thinking that ‘DevOps is the answer’ and towards ‘implementing the right kind of DevOps processes, with the right tools, to deliver higher levels of value’. Whether about principles or tooling, value stream management can therefore be filed in the category of concepts that, when they are working right, cease to exist. Perhaps this will become true in the future, but right now we are a long way from that point.

Afterword: If you want to read up on the notions of value management as applied to business outcomes, I can recommend this book by my old consulting colleague Roger Davies.



from Gigaom https://gigaom.com/2018/11/21/on-value-stream-management-in-devops-and-seeing-problems-as-solutions/
