value stream Archives - SD Times
https://sdtimes.com/tag/value-stream/

Report: 92% of organizations are not prepared for digital transformation
https://sdtimes.com/value-stream/report-92-of-organizations-are-not-prepared-for-digital-transformation/ (Mon, 20 Mar 2023)

The majority of organizations seeking to make a digital transformation are not equipped to do it successfully, according to the results of the recently released “2023 Project to Product: State of the Industry” report by portfolio management company Planview.

Only 8% of respondents stated that they have operationalized the shift from project to product, meaning that 92% have yet to realize or capture the full value of a product operating model at scale. However, 63% reported that they are in the exploratory phase and 29% said they are expanding on earlier experiments.

It was also found that, until a mature product model is in place, enterprises spend 70% of their delivery capacity on defect remediation and waste 40% of their efforts due to overload and bottlenecks.

Read the full article here on VSM Times.

ConnectALL 2.11 introduces Logic Flow Adapters
https://sdtimes.com/value-stream/connectall-2-11-introduces-logic-flow-adapters/ (Thu, 09 Feb 2023)

ConnectALL 2.11 is the latest release of the value stream management (VSM) company’s flagship VSM platform. ConnectALL is calling this release a “complete overhaul” of the platform, providing a more modern UI and stronger VSM capabilities. 

Logic Flow Adapters were added to the platform in this release. These allow users to incorporate business logic into their value streams, based on inputs from multiple different applications. 

Users write and manage custom scripts for these in a single hub, which allows them to create and execute scripts without doing anything in the backend.
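ConnectALL hasn't published its scripting details here, but the general idea of business logic driven by inputs from multiple applications can be sketched in a few lines. The field names and decision rule below are purely illustrative, not ConnectALL's actual API:

```python
# Hypothetical illustration only -- not ConnectALL's actual scripting API.
# A "logic flow" style decision that combines inputs from several tools.

def promotion_allowed(inputs):
    """Gate a value stream step on signals from multiple applications."""
    open_blockers = inputs.get("issue_tracker", {}).get("open_blockers", 0)
    tests_passed = inputs.get("ci", {}).get("tests_passed", False)
    scan_clean = inputs.get("security_scanner", {}).get("clean", False)
    return open_blockers == 0 and tests_passed and scan_clean

example = {
    "issue_tracker": {"open_blockers": 0},
    "ci": {"tests_passed": True},
    "security_scanner": {"clean": True},
}
print(promotion_allowed(example))  # True
```

The point of centralizing scripts like this in one hub is that the decision logic lives in one place rather than being scattered across each tool's own automation.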

Read the full article here on VSM Times.

GitLab enters value stream market with new Value Streams Dashboard
https://sdtimes.com/value-stream/gitlab-enters-value-stream-market-with-new-values-streams-dashboard/ (Wed, 25 Jan 2023)

GitLab is officially entering the value stream management space with the beta release of its Value Streams Dashboard.

The new dashboard provides an overall view of metrics like DORA and flow metrics. By tracking these metrics over a period of time, development teams will be able to locate trends early, drill down into individual metrics, take action to improve performance, and track innovation investments. 
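Flow metrics of the kind such a dashboard surfaces are straightforward to compute once work-item data is available. The sketch below uses invented dates rather than real GitLab data, and shows two of the basics: average cycle time and throughput.

```python
# Illustrative flow-metric math: average cycle time and throughput for a
# set of completed work items. The dates are invented, not GitLab data.
from datetime import date

items = [
    {"started": date(2023, 1, 2), "finished": date(2023, 1, 6)},
    {"started": date(2023, 1, 3), "finished": date(2023, 1, 10)},
    {"started": date(2023, 1, 9), "finished": date(2023, 1, 12)},
]

cycle_times = [(i["finished"] - i["started"]).days for i in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)  # days per item
throughput = len(items)  # items completed in the reporting period

print(f"avg cycle time: {avg_cycle_time:.1f} days, throughput: {throughput} items")
```

Tracking these figures per period is what lets a team spot a trend (cycle time creeping up, throughput dropping) before it becomes a delivery problem.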

And, going up the chain, business leaders can also look at these metrics to eliminate bottlenecks and make decisions like where to add resources to support developers.

Read the full story on VSM Times.

Beyond features and bugs: Expanding how to evaluate development investments
https://sdtimes.com/value-stream-management/beyond-features-and-bugs-expanding-how-to-evaluate-development-investments/ (Mon, 16 Jan 2023)

Overnight, every company in the world became a software company. Those companies are either on the journey to becoming a world-class software company or they are going extinct. One key step in a successful journey requires connecting the daily work done by software teams to corporate goals and embracing autonomy with alignment. 

Software development is a business differentiator that requires strategic investments to improve the bottom line. Having worked in all aspects of the software development lifecycle, I know most people in the industry think in terms of two types of deliverables – creating new features and fixing bugs. In reality, that's too limiting. I hear management complain that developer productivity is down simply because developers now appear to be responsible for everything and may spend less than 50% of their time writing code. The time a developer has available for coding is tracked, but many other activities are hidden and treated as an organizational "tax," like caretaking the pipeline, fixing environment problems, and helping testers.

Developers want to write code. The business wants them to write code. Customers want the solutions their code provides. That means we need to understand where developer time is actually being spent and give them an opportunity to write code for time-consuming manual activities that can be automated. 

Four fundamental development categories  

For product management to be effective, four types of work need to be visible, supported, and funded. 

Features. Delivering cutting-edge features is the fun part of a developer’s job. Creativity takes time and delighting customers isn’t easy in highly-competitive markets. Resources need to be invested in developing new or improved functionality, usability, flexibility, and other customer-friendly features. 

Defects. I’ve heard it said that today’s features are tomorrow’s bugs. Fixing defects, bugs, and other issues is a routine part of software development. Releases often go out with bugs, so your team may be putting out fires related to your releases and those of your vendors and partners. Identifying and eliminating problems that hurt the customer experience is important, but they need to be weighed against other priorities. 

Risk. Risk-related activities represent the majority of the hidden work that developers do. Improving the software engine so it delivers more reliably involves setting up guardrails and security for better deployment. DevOps practices push for complete automation of the manual checklists that have been used to validate items, including WhiteSource library checks, open source license validation, testing, deployments, and code analysis. Developers WANT to automate them, and implementing code that does the checks is fun. 

Technical debt. As soon as something is built, the world keeps changing and the need for modernization begins. The technology gap between modern standards and legacy systems grows over time, so it's important to regularly assess the tradeoffs between investing in patches or updates and rethinking the design given new learnings and needs. This includes build-vs-buy decisions. 

Balancing development work investments

Peter Hyde of Gartner defines a work profile as, “The proportion of each type of work item delivered in a time period by the software value stream.” Too much focus on one area can throw the organization out of balance, making it harder to deliver customer value. For example, if too much time is focused on features/defects, you’ll end up with a fragile development environment saddled with technical debt, ultimately killing its ability to deliver new features.

Start by identifying the percentage of your resources going into adding features, fixing defects, reducing risk, and addressing technical debt to understand your current work profile, which is also known as your work distribution. This may take some digging if visibility is lacking, especially if everything is currently being categorized under features or defects. This will give you a baseline for analysis and allocation of future spending, which is based on your goals. 
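As a rough sketch of what that baseline looks like once work items have been categorized, the following tallies a work profile from a list of completed items. The data and category labels are illustrative; real numbers would come from your tracker:

```python
# Illustrative work-profile tally: the share of completed items in each
# of the four categories. Real data would come from your tracker.
from collections import Counter

completed = ["feature", "feature", "defect", "risk", "feature",
             "tech_debt", "defect", "feature", "risk", "feature"]

counts = Counter(completed)
profile = {cat: 100 * n / len(completed) for cat, n in counts.items()}
print(profile)
```

A profile like 50% features, 20% defects, 20% risk, and 10% technical debt gives you a concrete starting point for arguing about whether that split matches your goals.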

If the goal is to make significant future investments into sets of components, continual modernization of the technology base (technical debt) and refinement of the software delivery engine (risk) will reduce the overall “cost” and time of delivering new features or fixing bugs. Investments into risk may also increase the ability to obtain certifications, improve agility to react to outages, reduce MTTR, and boost customer trust.  

Ultimately the team, typically driven by product, must make tradeoffs with the limited resources it has. The team needs a way to trade off investing in better security automation vs. improving the technology stack vs. simply adding features and fixing bugs.

Connecting value stream management and outcome mapping 

Outcomes are the core of delivering value to the customer, and every outcome can be supported by one of the four types of work. Teams should think hard about the outcomes they are working towards. They need to clarify why items are important, identify obstacles, explore how to learn more, and identify how to measure progress towards the outcomes. 

The work of product management is central to translating the outcomes into the core work types. Ultimately, product managers are responsible for justifying and defending investment decisions. Value stream management helps in this journey by providing the data and associated visualizations connected to outcomes.

Investment strategy advice

Balancing work is essential to effective product management. Start with your outcomes, but communicate in terms of the work distribution. You know the strategy you’re trying to accomplish from a business perspective. Translate that for your individual product managers, so they can connect those business outcomes to actual work that has to get done. 

To keep everyone on the same page, train your executive team on how to translate outcomes to the work profiles, and proactively sell why the tradeoffs were made in the first place between the various types of work. Finding the right balance will make a developer’s job more enjoyable while also improving customer value.

2023: The Year of Continuous Improvement
https://sdtimes.com/devops/2023-the-year-of-continuous-improvement/ (Fri, 13 Jan 2023)

March 13, 2020. Friday the 13th. That’s when a large number of companies shut their offices to prevent the spread of a deadly virus – COVID-19. Many thought this would be a short, temporary thing. 

They were wrong.

The remainder of 2020 and 2021 were spent trying to figure out how to get an entire workforce to work remotely, while still being able to collaborate and innovate. Sales of cloud solutions soared. Much of the new software companies invested in required training just to get up to speed.

But training in the form of in-person conferences ceased to exist, and organizers sought to digitize the live experience so that virtual events would closely resemble those conferences.

Fast forward to 2023. The software and infrastructure organizations put in place has enabled them to continue to work, albeit not necessarily at peak performance. Most companies today have figured out the ‘what’ of remote work, and some have advanced to the ‘how.’

But this move to digital transformation has provided organizations with tools that can help them work even more efficiently than they could when tethered to an on-premises data center, and they are only now starting to reap the benefits. 

Thus, the editors of SD Times have determined that 2023 will be “The Year of Continuous Improvement.” It will, though, extend beyond 2023.

Bob Walker, technical director at continuous delivery company Octopus Deploy, said, “The way I kind of look at that is that you have a revolution, where everyone’s bought all these new tools and they’re starting to implement everything. Then you have this evolution of, we just adopted this brand new CI tool, or this brand new CD tool, whatever the case may be. And then you have this evolution where you have to learn through it, and everything takes time.”

Development managers, or a team of software engineers, or QA, have to worry about making sure they’re delivering on goals and OKRs, to ensure the software they deliver has value. So, Walker noted, “it’s a balance between ‘what can we do right now’ versus ‘what can we do in a few months’ time’? What do we have right now that is ‘good enough’ to get us through the next couple of weeks or the next couple months, and then start looking at how we can make small changes to these other improvements? It can be a massive time investment.”

Show me the metrics

Continuous improvement begins with an understanding of what’s happening in your product and processes. There are DevOps and workflow metrics that teams can leverage to find weaknesses or hurdles that slow production or are wasteful time sucks, such as waiting on a pull request. 

Mik Kersten, who wrote the book “Project to Product” on optimizing flow, holds the view that continuous improvement needs to be driven by data. “You need to be able to measure, you need to understand how you’re driving business outcomes, or failing to drive business outcomes,” he said. “But it’s not just at the team level, or at the level of the Scrum team, or the Agile team, but the level of the organization.”
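The DORA-style measures Kersten alludes to can be derived from deployment records once the data is collected. A hedged sketch, using made-up records rather than any real pipeline's data, for two of them – deployment frequency and lead time for changes:

```python
# Illustrative DORA math: deployment frequency and lead time for changes
# computed from made-up deployment records, not any real pipeline.
from datetime import datetime

deploys = [
    {"committed": datetime(2023, 3, 1, 9), "deployed": datetime(2023, 3, 1, 17)},
    {"committed": datetime(2023, 3, 2, 10), "deployed": datetime(2023, 3, 3, 10)},
    {"committed": datetime(2023, 3, 6, 8), "deployed": datetime(2023, 3, 6, 20)},
]

days_in_period = 7
deploy_frequency = len(deploys) / days_in_period  # deployments per day
lead_times_h = [(d["deployed"] - d["committed"]).total_seconds() / 3600
                for d in deploys]
avg_lead_time_h = sum(lead_times_h) / len(lead_times_h)

print(f"{deploy_frequency:.2f} deploys/day, {avg_lead_time_h:.1f} h avg lead time")
```

Rolling the same calculation up from team level to organization level is exactly the aggregation step Kersten argues most organizations are missing.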

Yet, like Agile development and DevOps adoption, there’s no prescription for success. Some organizations do daily Scrum stand-ups but still deliver software in a “waterfall” fashion. Some will adopt automated testing and note that it’s an improvement. So, this begs the question: Isn’t incremental improvement good? Does it have to be an overarching goal?

Chris Gardner, VP and research director at Forrester, said data bears out the need for organization-wide improvement efforts, so that as they adopt things like automated testing, or value stream management, they can begin to move down the road in a more unified way, as opposed to simply being better at testing, or better at security.

“When we ask folks if they’re leveraging DevOps or SRE, or platform methodologies, the numbers are usually pretty high in terms of people saying they’re doing it,” Gardner said. “But then we ask them, the second question is, are you doing it across your organization? Is every application being supported this way? And the answer is inevitably no, it’s not scaled out. So I believe that continuous improvement also means scaling out success, and not just having it in pockets.”

For Gardner, continuous improvement is not just implementing new methodologies, but scaling the ones you have within your organization that are successful, and perhaps scaling down the ones that are not. “Not every approach is going to be a winner,” he said. 

Eat more lean

Agile programming, DevOps and now value stream management are seen as the best-practice approaches to continuous improvement. These are based on lean manufacturing principles that advanced organizations use to eliminate process bottlenecks and repetitive tasks.

Value stream management, particularly, has become a new driver for continuous improvement.

According to Lance Knight, president and COO of VSM platform provider ConnectALL, value stream management is a human endeavor performed with a mindset of being more efficient. “When you think about the Lean principles that are around value stream management, it’s about looking at how to remove non-value-added activities, maybe automate some of your value-added activities and remove costs and overhead inside your value stream.”

Value stream management, he noted, is a driver of continuous improvement. “You’re continually looking at how you’re doing things, you’re continually looking at what can be removed to be more efficient,” he said.

Knight went on to make the point that you can’t simply deploy value stream management and be done. “It’s a human endeavor, people keep looking at it, managing it, facilitating it to remove waste,” he said. So, to have a successful implementation, he advised: “Learn lean, implement, map your value stream, understand systems thinking, consistently look for places to improve, either by changing human processes or by using software to automate, to drive that efficiency and create predictability in your software value stream.”

At software tools provider Atlassian, they’re working to move software teams to mastery by offering coaching. “Coach teams help [IT teams] get feedback about their previous processes and then allow for continuous improvement,” said Suzie Prince, head of product, DevOps, at Atlassian. In Compass, Atlassian’s developer portal that provides a real-time representation of the engineering output, they’ve created CheckOps, which Prince described as akin to a retrospective. “You’re going to look at your components that are in production, and look at the health of them every day. And this will give you insights into what that health looks like and allow you again to continuously improve on keeping them to the certain bar that you expect.”

Another driver of continuous improvement, she said, is the current economic uncertainty. With conditions being as they are, she said, “We know that people will be thinking about waste and efficiency. And so we also will be able to provide insights into things like this continuous flow of work and reducing the waste of where people are waiting for things and the handoffs that are a long time. We want to use automation to reduce that as well. All which I think fits in the same set of continuously improving.”

Key to it all is automation

Automation and continuous improvement are inextricably tied together, a theme heard in many conversations SD Times has had with practitioners over the course of the year. Automation is essential for freeing up high-level engineers from having to perform repetitive, mundane tasks, as well as for adding reliability to work processes.

So whether it’s automation for creating and executing test scripts, or for triggering events when a change to a code base is made, or implementing tighter restrictions on data access, automation can make organizations more efficient and their processes more reliable.

When starting to use automation, according to John Laffey, product strategy lead at configuration management company Puppet (now a Perforce company), you should first find the things that interrupt your day. “IT and DevOps staffs tend to be really, really interrupt-driven, when I go out and talk to them,” he said. “I hear anything from 30% to 50% of some people’s time is spent doing things they had no intention of doing when they logged on in the morning. That is the stuff you should automate.” 

Automating the repetitive little things that are easy fixes starts freeing up time to be more productive and innovative, Laffey said. On the other hand, he said there’s no point in automating things that you’re only going to do once a month. “I once had a boss that spent days and days writing a script to automate something we did like once a quarter that took 15 minutes. There’s no return on investment on that. Automate the things that you can do and that others can use.”
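Laffey's return-on-investment point reduces to simple arithmetic: automation pays for itself only after enough runs. A small illustrative calculation (the numbers are invented to match his anecdote's scale):

```python
# Illustrative break-even arithmetic for the automation ROI point.

def payback_runs(build_hours, manual_minutes_per_run):
    """How many runs before automation recoups the time spent building it."""
    return build_hours * 60 / manual_minutes_per_run

# Automating a 15-minute quarterly task after, say, three 8-hour days of
# scripting needs 96 runs -- 24 years of quarters -- to pay off:
print(payback_runs(build_hours=24, manual_minutes_per_run=15))
```

Run the same function on a 15-minute task that interrupts you daily and the payback arrives in weeks, which is why the interruptions, not the rarities, are the right automation targets.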

Value stream management provides predictability in unpredictable times
https://sdtimes.com/valuestream/value-stream-management-provides-predictability-in-unpredictable-times/ (Thu, 05 Jan 2023)

In 2019, most business leaders probably wouldn’t have predicted the changes that would be coming their way in early 2020 thanks to a global pandemic. If they had, perhaps they would have been able to make decisions more proactively and wouldn’t have had to scramble to convert their workforce to remote, digitize all their experiences, and deal with an economic downturn. 

Now, the country is in another period of uncertainty. You’ve read the headlines all year: The Great Resignation, layoffs, a possible recession, Elon Musk’s takeover of Twitter shaking up marketing spending, introductions of things like GitHub Copilot and ChatGPT having workers worrying about their future job security, and more. The list could go on and on, but one thing that would help people through these times is knowing they’ll make it out okay on the other end. 

Unfortunately that level of predictability isn’t always possible in the real world, but in the business world, value stream management can help you with it.

According to Lance Knight, president and COO of ConnectALL, the information you can get from value stream management can help you with predictability. This includes things like understanding how information flows and how you get work done. 

“You can’t really be predictable until you understand how things are getting done,” said Knight. 

He also claimed that predictability is a more important outcome of value stream management than the actual delivery of value, simply because of the fact that “you can’t deliver value unless you have a predictable system.” 

Derek Holt, general manager of Intelligent DevOps at Digital.ai, agreed, adding “If we can democratize the data internally, we can not only get a better view, but we can start to use things like machine learning to predict the future. Like, how do we not just show flow metrics, but how do we find areas for flow acceleration? Not just what are our quality metrics, but how do we drive quality improvement? A big one we’re seeing right now is predicting risk and changing risk. How do you predict that before it happens?”

Knight also said that a value stream is only as effective as the information that you feed into it, so you really need to amplify feedback loops, remove non-value-added activities and add automation. Then once your value stream is optimized, you can realize the benefit of predictability. 

If you’ve already been working with value streams for a while then it may be time to make sure all those pieces are running smoothly and look for areas where there is waste that can be removed. 

Knight also explained the importance of embracing the “holistic part” in value stream management. What he means by this is not just thinking about metrics, but thinking about how you can train people to understand Lean principles so that they can understand how the way they develop software will meet their digital transformation needs. 

Challenges companies face 

Of course, all that is easier said than done. There are still challenges that companies face after adopting value stream management to actually get to the maturity level where they gain that predictability. 

One issue is that there is confusion in the market caused by vendors about what value stream management actually is. “Some people think value stream management is the automation of your DevOps pipeline. Some people think value stream management is the metrics that I get. And there’s confusion between value management and value stream management,” said Knight. 

Knight wants us to remember that value stream management isn’t anything new; it can trace its origins back to lean manufacturing, created by Toyota in Japan in the 1950s.

And ultimately, value is just the delivery of goods and services. Putting any other definition on it is just the industry being confused, Knight believes. 

“So people who are trying to implement value streams are getting mixed messages, and that’s the number one challenge with value stream management,” said Knight.

Digital.ai’s Holt explained that another challenge, especially for those just getting started, is getting overwhelmed. 

“Don’t be paralyzed by how big it seems,” said Holt. He recommends companies have early conversations acknowledging that they might get things wrong, and just get started. 

Where has value stream been? Where is it headed? 

In our last Buyer’s Guide on value stream management, the theme was that it aligns business and IT. 

Holt has seen in the past year that companies are adopting mentalities that are less about that alignment. Now the focus is that software is the business and the business is software. 

In this new mentality, metrics have become crucial, so it’s important to have a value stream management system in place that actually enables you to track certain metrics. 

“Things like OKRs continued to kind of explode as a simple means to drive better outcome-based alignment … simple KPIs around objective-based development efforts or outcome-based development efforts,” said Holt. 

Holt also noted that in Digital.ai’s recently published 16th annual State of Agile report, around 40% of respondents had adopted one of these approaches, and that was significantly up from the previous year. 

He went on to explain that companies investing in value stream management want to be sure that their investments are actually paying off, especially in the current economic climate.

He also said value streams can help organizations make small, evolutionary improvements, rather than one big revolution. 

“Value stream management is building on some of the core transformations that happened before,” said Holt. “Without the Agile transformation, there would have been no DevOps, and without Agile and DevOps, there probably wouldn’t be an ability to talk about value stream management.”

So value stream management will continue to build on the successes of the past, while also layering in new trends like low code, explained Holt. 

What sets successful value stream management practices apart

Chris Condo, principal analyst at Forrester, wrote a blog post last month laying out the three qualities that set successful value stream management practitioners apart. 

  1. Use of AI/ML to predict end dates. According to Condo, development teams with access to predictive capabilities are able to use them to create timelines that are more likely to be met. He noted that the successful teams don’t replace estimates produced by people on their team, but rather augment those estimates with machine estimation. 
  2. Bottleneck analysis. Teams can use value stream management to discover what the real cause of their bottlenecks is. “When it comes to VSM, too many clients put the cart before the horse, thinking that they need a high-performing DevOps culture and tool chain to effectively use VSM. None of this could be further from the truth,” said Condo.
  3. Strong metrics and KPIs. Development leaders want these metrics if they are going to be putting money into value stream management, so look for vendors that can provide excellent metrics. 
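Condo doesn't detail the models vendors use for the first point, but the idea behind machine-assisted end-date prediction can be illustrated with a simple Monte Carlo forecast that resamples historical weekly throughput. The numbers here are invented, and commercial tools use far richer models:

```python
# Illustrative only: a Monte Carlo end-date forecast that resamples
# historical weekly throughput. Vendor ML models are far richer.
import random

random.seed(42)  # reproducible demo
weekly_throughput = [3, 5, 2, 4, 4, 6, 3]  # items finished in past weeks
backlog = 30  # items left to deliver

def weeks_to_finish(history, remaining):
    weeks = 0
    while remaining > 0:
        remaining -= random.choice(history)  # replay a randomly chosen week
        weeks += 1
    return weeks

trials = sorted(weeks_to_finish(weekly_throughput, backlog) for _ in range(1000))
p50, p85 = trials[499], trials[849]
print(f"50% confidence: {p50} weeks; 85% confidence: {p85} weeks")
```

Reporting a 50% and an 85% confidence date, rather than a single estimate, is what lets such forecasts augment human estimates instead of replacing them.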

 

Value stream management is all about continuous improvement
https://sdtimes.com/valuestream/value-stream-management-is-all-about-continuous-improvement/ (Fri, 16 Sep 2022)

Value stream management has a terminology problem, since there are terms out there that sound the same but are actually different: value stream, value stream mapping, value stream management, and value management – which leaves many confused.

“There’s nothing wrong with value stream management itself, but there’s plenty wrong with how it’s being considered and discussed by others, who often conflate it with either Agile or value management,” said Andrew Fuqua, SVP of Products at ConnectALL in the SD Times Live! webinar, You’ve Heard What Value Stream Management Isn’t. Now Hear the Truth About What It Is. “They’re not the same thing.”

The definition of value stream has been around for a very long time and it encompasses value-added and non-value-added activities that are required to take products or services from raw materials to the waiting arms of the customer, according to Lance Knight, president and COO at ConnectALL. At the high level of software development, this is the idea, planning, building, testing, and deploying.

To continue, read the original article on VSM Times.

OASIS committee working on value stream interoperability standards
https://sdtimes.com/value-stream/oasis-committee-working-on-value-stream-interoperability-standards/ (Thu, 08 Sep 2022)

In order to facilitate the development of standards for sharing data across different platforms within the value stream, a new technical committee has sprung up from within OASIS Open, which is an open source and standards consortium.

According to the committee, organizations typically employ a number of different tools to measure software performance in order to maximize innovation, drive growth, and add value.

Led by Helen Beal, chair of the Value Stream Management Consortium and chief ambassador at the DevOps Institute, and Kelly Cullinane, director of energy and federal services at Copado, the Value Stream Management Interoperability (VSMI) Technical Committee aims to bring increased interoperability between these tools. This will enable a more secure approach to sharing data across platforms.

According to Beal, value stream management (VSM) is the next evolution of DevOps, and “pivotal to that is the DevOps tool chain and at the Consortium, we talked about the need for a common data model,” she said.

To read the full article, find it on VSM Times.

The post OASIS committee working on value stream interoperability standards appeared first on SD Times.

]]>
Don’t conflate value stream metrics with other development metrics https://sdtimes.com/valuestream/dont-conflate-value-stream-metrics-with-other-development-metrics/ Mon, 22 Aug 2022 18:10:34 +0000 https://sdtimes.com/?p=48656 Value stream management and data-driven insights have been hot topics these past few years, and interest will continue to grow. Late last year, Gartner put out a prediction that by 2023, 70% of organizations will be using value stream management in some capacity.  In order to do value stream successfully, however, companies need to understand … continue reading

The post Don’t conflate value stream metrics with other development metrics appeared first on SD Times.

]]>
Value stream management and data-driven insights have been hot topics these past few years, and interest will continue to grow. Late last year, Gartner put out a prediction that by 2023, 70% of organizations will be using value stream management in some capacity. 

In order to do value stream management successfully, however, companies need to understand the difference between value stream metrics and engineering productivity or source code quality metrics. 

According to Manjunath (Manju) Bhat, research VP at Gartner, traditional DevOps metrics like release cadence, lead time, and cycle time can be useful measurements, but they don’t necessarily demonstrate business value. 
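The distinction between these DevOps metrics can be made concrete with a small, hypothetical sketch (the timestamps and helper below are illustrative, not from the article). By one common convention, lead time is measured from when a change is requested to when it is delivered, while cycle time is measured from when work actually starts to delivery:

```python
# Illustrative sketch of lead time vs. cycle time for a single work item.
# Definitions vary by team; this uses the request-to-delivery convention
# for lead time and work-start-to-delivery for cycle time.
from datetime import datetime

def hours_between(start, end):
    """Elapsed time between two datetimes, in hours."""
    return (end - start).total_seconds() / 3600

requested = datetime(2022, 8, 1, 9, 0)   # ticket created
started   = datetime(2022, 8, 3, 9, 0)   # work begins
delivered = datetime(2022, 8, 4, 9, 0)   # shipped to users

lead_time  = hours_between(requested, delivered)  # 72.0 hours
cycle_time = hours_between(started, delivered)    # 24.0 hours
```

Both numbers describe delivery speed, but neither says whether users absorbed or appreciated the value delivered, which is Bhat's point.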

RELATED CONTENT: What role do developers play in value stream management?

“Our goal is not to accelerate release cadence but continually improve value delivery cadence – i.e., the rate at which users can absorb and appreciate the value delivered,” said Bhat. 

Over the past year, there have been a number of partnerships between value stream management companies and technology companies, which may be an indication that value stream companies are trying to sync up value stream data with other important metrics. 

For example, in March, Tasktop and Broadcom announced a partnership in which Tasktop’s technology would enable syncing of data between Broadcom’s ValueOps solution and software development tools. 

That same month, software intelligence company CAST and value stream management company LeanIX announced a partnership that was aimed at supporting customers in their data migration to the cloud. Together the two solutions would enable companies to make more informed decisions and develop more effective software strategies. 

According to Bhat, partnerships such as these bring “complementary value from across different tiers and vantage points.” He noted that most value stream management platforms are approaching the market from a position of strength that already exists, and then they build out their capabilities. 

Bhat believes that the value stream providers who are able to help organizations elevate customer satisfaction, employee happiness, and automation maturity will be the leaders in the market. 

The post Don’t conflate value stream metrics with other development metrics appeared first on SD Times.

]]>
DevOps feedback loop explained: Cascaded feedback https://sdtimes.com/valuestream/devops-feedback-loop-explained-cascaded-feedback/ Thu, 28 Jul 2022 17:48:05 +0000 https://sdtimes.com/?p=48408 Feedback is routinely requested and occasionally considered. Using feedback and doing something with it is nowhere near as routine, unfortunately. Perhaps this has been due to a lack of a practical application based on a focused understanding of feedback loops, and how to leverage them. We’ll look at Feedback Loops, the purposeful design of a … continue reading

The post DevOps feedback loop explained: Cascaded feedback appeared first on SD Times.

]]>
Feedback is routinely requested and occasionally considered. Unfortunately, actually using feedback and doing something with it is nowhere near as routine. Perhaps this is due to the lack of a practical approach grounded in a focused understanding of feedback loops and how to leverage them. We'll look at feedback loops: the purposeful design of a system or process to effectively gather feedback and enable data-driven decisions and behavior based on what is collected. We'll also look at some potential issues and explore countermeasures to address delayed feedback, noisy feedback, cascading feedback, and weak feedback. To do this, in this four-part series we'll follow newly onboarded associate Alice through her experience with a new organization that needs to accelerate its value creation and delivery processes.

As Alice looked at the bigger picture of the quality process, it became clear that earlier feedback impacted, and may have created or obscured, subsequent feedback or issues.

A significant challenge in the past has been realistically representing and measuring performance in all but the simplest processes. The reality is that most of our processes have dependencies and external influences. While these were difficult at best to capture with manual tools, process automation and the advent of observability enable a more realistic representation. Exposing obscure relationships through discovery, and understanding those relationships, enables a better and more robust model for identification and measurement. This is especially important for beginning to see and understand relationships that are complex and not easily observed.

Alice realized that the feedback loops providing information to product management were frequently misunderstood, or relied on data that was not appropriate for the purpose (e.g., not fully burdened costs), alongside the conflicting and poorly documented microservice architectures and API implementations that had proliferated in their current environment. Of course, we've long struggled with aggregating multiple KPIs that do not really reflect, or result in, the desired outcome.

As Alice explained to the product manager, the interactions among the components of a microservices environment and automated business process ecosystems form an increasingly complex web. Throughout, the delivered value or outcome must remain the focus, such as introducing market-leading capabilities faster and better than anyone else.

We can think of interdependent processes in terms of the availability impact of multiple dependent systems, using availability as an analog both for confidence in the feedback results and for likely performance expectations. This approach also identifies the relative capability improvement possible with the current approach and architecture:

(Image from Standing On Shoulders: A Leader's Guide to Digital Transformation, ©2019-2020 Standing On Shoulders, LLC, used with permission.) The image and table depict aggregated availability based on interdependent system availability and the resulting net total availability.

In this example, the total system availability is the product of the availabilities of the dependent systems for the same business process scenario, seen by looking at component improvements and availability outcomes. The performance of otherwise independent systems can have an enormous impact on complex business processes. We must take care to understand the feedback loops and how we may encourage, or even create, subsequent noise via cascade. Transparency can be the key.
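The multiplicative relationship described above can be sketched in a few lines (hypothetical code, not from the book or the article): when a business process depends on several systems in series, end-to-end availability is the product of the component availabilities, so even highly available components compound into a lower total.

```python
# Sketch: end-to-end availability of a business process that depends on
# several systems in series is the product of their individual availabilities.
def total_availability(availabilities):
    """Multiply component availabilities to get end-to-end availability."""
    total = 1.0
    for a in availabilities:
        total *= a
    return total

# Three dependent systems, each 99% available, yield ~97% end-to-end:
print(round(total_availability([0.99, 0.99, 0.99]), 4))  # 0.9703
```

Note how three systems at 99% availability deliver only about 97% end-to-end, which is why understanding the dependency chain can matter more than improving any single component.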

Earlier, we talked about noise in testing and its impact on trust and confidence. That is another dimension of this same challenge, and the same opportunity.

Alice and the product manager concluded that this might be related to their objectives of reduced firefighting and improved collaboration. Improved monitoring and, where possible, added instrumentation or telemetry might be effective countermeasures consistent with other ongoing work. Direct visibility of impact and alignment with the outcome is the best feedback of all, particularly when our part may be somewhat obscured or limited by other stream components. Understanding and modeling enable us to experiment and learn, especially with critical value systems.

Looking ahead, improving ecosystem visualization capabilities in an evolving value stream management environment, to capture and evaluate model quality and data consistency, seems imminent. This goal state should soon be realizable, with dynamic traceability maturing and observability seemingly in our near future.

 

The post DevOps feedback loop explained: Cascaded feedback appeared first on SD Times.

]]>