Broadcom: 84% of orgs will be using VSM by end of the year
https://sdtimes.com/software-development/broadcom-84-of-orgs-will-be-using-vsm-by-end-of-the-year/ (Mon, 09 Jan 2023)

If you’re a regular reader of SD Times you may have gotten the sense that value stream management (VSM) is really taking off in tech. We’ve increasingly written about it, launched a new website just for value stream news, and even launched a value stream management conference that has been running annually since 2020. 

And if that wasn’t enough proof of its growing popularity, a new survey from Broadcom provides numbers to back it up. According to its survey of over 500 IT and business leaders, 84% of enterprises are expected to have adopted VSM by the end of the year, up from just 42% in 2021.

According to Broadcom, early adoption of VSM started around four years ago, and within the past two there has been a shift to mainstream adoption. Sixty percent of survey respondents said they will use VSM to deliver at least one product this year. 

Read the full story on VSM Times.

How service virtualization supports cloud computing: Key use cases
https://sdtimes.com/test/how-service-virtualization-supports-cloud-computing-key-use-cases/ (Tue, 01 Nov 2022)

(First of two parts)

Several weeks ago, a customer of the Broadcom Service Virtualization solution posed the following question: “Now that we’re moving to the cloud, do we still need Service Virtualization?” 

The question struck me as odd. My sense is that the confusion stemmed from a misperception: because cloud environments can be spun up quickly, people assume they can easily address test environment bottlenecks and that, in the process, service virtualization capabilities are rendered unnecessary. Obviously, that is not the case at all! Being able to spin up infrastructure quickly does not address the question of which elements need to be established to make those environments useful for the desired testing efforts. 

In fact, all the use cases for the Service Virtualization solution are just as relevant in the cloud as they are in traditional on-premises-based systems. Following are a few key examples of these use cases: 

  1. Simplification of test environments by simulating dependent end points   
  2. Support for early, shift-left testing of application components in isolation 
  3. Support for performance and reliability engineering 
  4. Support for integration testing with complex back-ends (like mainframes) or third-party systems
  5. Simplification of test data management 
  6. Support for training environments
  7. Support for chaos and negative testing 

All of these use cases are documented in detail here.  

More pertinent, though, is that Service Virtualization helps to address many additional use cases that are unique to cloud-based systems. 

Fundamentally, Service Virtualization and cloud capabilities complement each other. Combined, Service Virtualization and cloud services deliver true application development and delivery agility that would not be possible with only one of these technologies. 

Using virtual services deployed to an ephemeral test environment in the cloud makes the setup of the environment fast, lightweight, and scalable. (Especially compared to setting up an entire SAP implementation in the ephemeral cloud environment, for example.) 

Let’s examine some key ways to use Service Virtualization for cloud computing. 

Service Virtualization Use Cases for Cloud Migration 

Cloud migration typically involves re-hosting, re-platforming, re-factoring, or re-architecting existing systems. Regardless of the type of migration, Service Virtualization plays a key role in functional, performance, and integration testing of migrated applications—and the use cases are the same as those for on-premises applications. 

However, there are a couple of special use cases that stand out for Service Virtualization’s support for cloud migration: 

  1. Early Pre-Migration Performance Verification and Proactive Performance Engineering 

In most cases, migrating applications to the cloud will result in performance changes, typically due to differences in application distribution and network characteristics. For example, various application components may reside in different parts of a hybrid cloud implementation, or performance latencies may be introduced by the use of distributed cloud systems. 

With Service Virtualization, we can easily simulate the performance of all the different application components, including their different response characteristics and latencies. Consequently, we can understand the performance impact, including both overall and at the component level, before the migration is initiated.  
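
To make this concrete, a virtual service that simulates a dependency's response characteristics can be as small as an HTTP stub with a configurable delay. The sketch below is illustrative only and is not Broadcom's Service Virtualization tooling; the endpoint path, payload, and latency figures are assumptions.

```python
# A minimal, hypothetical "virtual service" that emulates a dependent
# endpoint's response time, so pre-migration performance tests can be run
# before the real system is moved to the cloud.
import time

from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical latency profiles (in seconds) for the same dependency hosted
# on-premises today versus in the target cloud region after migration.
LATENCY_PROFILES = {"on_prem": 0.05, "cloud_target": 0.18}
ACTIVE_PROFILE = "cloud_target"  # switch profiles to model the post-migration case


@app.route("/api/orders/<order_id>")
def get_order(order_id):
    # Simulate the dependency's processing time plus network delay.
    time.sleep(LATENCY_PROFILES[ACTIVE_PROFILE])
    # Return a canned, synthetic payload.
    return jsonify({"orderId": order_id, "status": "SHIPPED"})


if __name__ == "__main__":
    app.run(port=8081)
```

Pointing pre-migration performance tests at the `cloud_target` profile gives an early read on post-migration latency before any workload is actually moved.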

This allows us to focus on appropriate proactive performance engineering to ensure that performance goals can be met post migration.  

In addition, Service Virtualization plays a key role in performance testing during and after the migration, which are common, well-understood use cases. 

2. Easier Hybrid Test Environment Management for Testing During Migration 

This is an extension to the common use case of Service Virtualization, which is focused on simplifying testing environments. 

However, during application migration this testing becomes more crucial given the mix of environments involved. Customers typically migrate their applications or workloads to the cloud incrementally, rather than all at once. This means that test environments during migration are much more complicated to set up and manage, because tests may span multiple environments: cloud for migrated applications and on-premises for pre-migration applications. In some cases, specific application components (such as those residing on mainframes) may not be migrated at all. 

Many customers are impeded from early migration testing due to the complexities of setting up test environments across evolving hybrid systems. 

For example, applications that are being migrated to the cloud may have dependencies on other applications in the legacy environment. Testing of such applications requires access to test environments for applications in the legacy environment, which may be difficult to orchestrate using continuous integration/continuous delivery (CI/CD) tools in the cloud. By using Service Virtualization, it is much easier to manage and provision virtual services that represent legacy applications, while having them run in the local cloud testing environment of the migrated application. 

On the other hand, prior to migration, applications running in legacy environments will have dependencies on applications that have been migrated to the cloud. In these cases, teams may not know how to set up access to the applications running in cloud environments. In many cases, there are security challenges in enabling such access. For example, legacy applications may not have been re-wired for the enhanced security protocols that apply to the cloud applications. 

By using Service Virtualization, teams can provision virtual services that represent the migrated applications within the bounds of the legacy environments themselves, or in secure testing sandboxes on the cloud. 

In addition, Service Virtualization plays a key role in parallel migrations, that is, when multiple applications that are dependent on each other are being migrated at the same time. This is an extension of the key principle of agile parallel development and testing, which is a well-known use case for Service Virtualization.

3. Better Support for Application Refactoring and Re-Architecting During Migration 

Organizations employ various application re-factoring techniques as part of their cloud migration. These commonly include re-engineering to leverage microservices architectures and container-based packaging, which are both key approaches for cloud-native applications. 

Regardless of the technique used, all these refactoring efforts involve making changes to existing applications. Given that, these modifications require extensive testing. All the traditional use cases of Service Virtualization apply to these testing efforts. 

For example, the strangler pattern is a popular re-factoring technique that is used to decompose a monolithic application into a microservices architecture that is more scalable and better suited to the cloud. In this scenario, testing approaches need to change dramatically to leverage distributed computing concepts more generally and microservices testing in particular. Service Virtualization is a key to enabling all kinds of microservices testing. We will address in detail how Service Virtualization supports the needs of such cloud-native applications in section IV below.

4. Alleviate Test Data Management Challenges During Migration 

In all of the above scenarios, the use of Service Virtualization also helps to greatly alleviate test data management (TDM) problems. These problems are complex in themselves, but they are compounded during migrations. In fact, data migration is one of the most complicated and time-consuming processes during cloud migration, which may make it difficult to create and provision test data during the testing process. 

For example, data that was once easy to access across applications in a legacy environment may no longer be available to the migrated applications (or vice-versa) due to the partitioning of data storage. Also, the mechanism for synchronizing data across data stores may itself have changed. This often requires additional cumbersome and laborious TDM work to set up test data for integration testing—data that may eventually be thrown away post migration. With Service Virtualization, you can simulate components and use synthetic test data generation in different parts of the cloud. This is a much faster and easier way to address  TDM problems. Teams also often use data virtualization in conjunction with Service Virtualization to address TDM requirements.  
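
As a rough illustration of the synthetic-data side of this, the snippet below generates deterministic, production-like records that a virtual service could serve during migration testing. The field names and value ranges are made up for the example.

```python
# Illustrative synthetic test data generation for a virtual service, which
# avoids sub-setting and masking production data in the middle of a migration.
import json
import random
import string


def synthetic_customer(seed: int) -> dict:
    rng = random.Random(seed)  # deterministic per seed, so test runs are repeatable
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "customerId": f"CUST-{seed:06d}",
        "email": f"{name}@example.test",  # clearly non-production domain
        "creditLimit": rng.choice([1000, 5000, 10000]),
        "region": rng.choice(["us-east", "eu-west", "ap-south"]),
    }


if __name__ == "__main__":
    records = [synthetic_customer(i) for i in range(100)]
    with open("virtual_service_data.json", "w") as fh:
        json.dump(records, fh, indent=2)
```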

Service Virtualization Use Cases for Hybrid Cloud Computing 

Once applications are migrated to the cloud, all of the classic use cases for Service Virtualization continue to apply. 

In this section, we will discuss some of the key use cases for supporting hybrid cloud computing. 

  1. Support for Hybrid Cloud Application Testing and Test Environments 

Post migration, many enterprises will operate hybrid systems based on a mix of on-premises applications in private clouds (such as those running on mainframes), different public cloud systems (including AWS, Azure, and Google Cloud Platform), and various SaaS provider environments (such as Salesforce). See a simplified view in the figure below. 

 

Setting up test environments for these hybrid systems will continue to be a challenge. Establishing environments for integration testing across multiple clouds can be particularly difficult. 

Service Virtualization clearly helps to virtualize these dependencies, but more importantly, it makes virtual services easily available to developers and testers, where and when they need them. 

For example, consider the figure above. Application A is hosted on a private cloud, but dependent on other applications, including E, which is running in a SaaS environment, and J, which is running in a public cloud. Developers and testers for application A depend on virtual services created for E and J. For hybrid cloud environments, we also need to address where the virtual service will be hosted for different test types, and how they will be orchestrated across the different stages of the CI/CD pipeline. 

See figure below.

 

Generally speaking, during the CI process, developers and testers would like to have lightweight synthetic virtual services for E and J, and to have them created and hosted on the same cloud as A. This minimizes the overhead involved in multi-cloud orchestration. 

However, as we move from left to right in the CD lifecycle, we would want the virtual services for E and J not only to become progressively more realistic, but also to be hosted closer to the remote environments where the “real” dependent applications are hosted. These services would then need to be orchestrated across a multi-cloud CI/CD system. Service Virtualization frameworks allow this by packaging virtual services into containers or virtual machines (VMs) that are appropriate for the environment they need to run in. 
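
One way to picture this progression is a simple stage-to-configuration mapping that a pipeline could consult when deploying virtual services. This is a hypothetical sketch: the stage names, fidelity levels, and latency values are assumptions, not a product feature.

```python
# Hypothetical mapping of pipeline stages to virtual-service deployment choices:
# early stages favor lightweight synthetic stubs co-located with the application,
# later stages favor recorded, more realistic services hosted nearer the real
# dependencies (or given simulated cross-cloud latency).
from dataclasses import dataclass


@dataclass
class VirtualServiceConfig:
    fidelity: str          # "synthetic" or "recorded"
    host_cloud: str        # where the virtual service container/VM runs
    added_latency_ms: int  # extra delay to mimic multi-cloud hops


PIPELINE_STAGES = {
    "ci_unit": VirtualServiceConfig("synthetic", "same-cloud-as-app", 0),
    "ci_component": VirtualServiceConfig("synthetic", "same-cloud-as-app", 5),
    "cd_integration": VirtualServiceConfig("recorded", "same-cloud-as-app", 40),
    "cd_performance": VirtualServiceConfig("recorded", "dependency-host-cloud", 0),
}


def config_for(stage: str) -> VirtualServiceConfig:
    return PIPELINE_STAGES[stage]


if __name__ == "__main__":
    for stage, cfg in PIPELINE_STAGES.items():
        print(f"{stage}: {cfg}")
```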

Note that it is entirely possible for application teams to choose to host the virtual services for the CD lifecycle on the same host cloud as app A. Service Virtualization frameworks would allow that by mimicking the network latencies that arise from multi-cloud interactions. 

The key point is to emphasize that the use of Service Virtualization not only simplifies test environment management across clouds, but also provides the flexibility to deploy the virtual service where and when needed. 

2. Support for Agile Test Environments in Cloud Pipelines 

In the introduction, we discussed how Service Virtualization complements cloud capabilities. While cloud services make it faster and easier to provision and set up on-demand environments, the use of Service Virtualization complements that agility. With the solution, teams can quickly deploy useful application assets, such as virtual services, into their environments. 

For example, suppose our application under test has a dependency on a complex application like SAP, for which we need to set up a test instance of the app. Provisioning a new test environment in the cloud may take only a few seconds, but deploying and configuring a test installation of a complex application like SAP into that environment would take a long time, impeding the team’s ability to test quickly. In addition, teams would need to set up test data for the application, which can be complex and resource intensive. By comparison, deploying a lightweight virtual service that simulates a complex app like SAP takes no time at all, thereby minimizing the testing impediments associated with environment setup.   

3. Support for Scalable Test Environments in Cloud Pipelines

In cloud environments, virtual service environments (VSEs) can be deployed as containers into Kubernetes clusters. This allows test environments to scale automatically based on testing demand by expanding the number of virtual service instances. This is useful for performance and load testing, cases in which the load level is progressively scaled up. In response, the test environment hosting the virtual services can also automatically scale up to ensure consistent performance response. This can also help the virtual service to mimic the behavior of a real automatically scaling application. 
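
As a sketch of what this can look like with the official Kubernetes Python client, the code below attaches a HorizontalPodAutoscaler to a containerized VSE deployment so that virtual service instances scale with test load. The deployment name, namespace, and thresholds are assumptions; commercial VSE products ship their own packaging and scaling mechanisms.

```python
# Sketch: attach a HorizontalPodAutoscaler to a containerized virtual service
# environment (VSE) so the number of virtual service instances scales with
# test load. Assumes a Deployment named "vse" already exists in "perf-test".
from kubernetes import client, config


def create_vse_autoscaler() -> None:
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="vse-hpa", namespace="perf-test"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="vse"
            ),
            min_replicas=1,
            max_replicas=10,
            target_cpu_utilization_percentage=70,  # scale out as load tests ramp up
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="perf-test", body=hpa
    )


if __name__ == "__main__":
    create_vse_autoscaler()
```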

Sometimes, it is difficult to size a performance testing environment for an application so that it appropriately mimics production. Automatically scaling test environments can make this easier. For more details on this, please refer to my previous blog on Continuous Performance Testing of Microservices, which discusses how to do scaled component testing.

4. Support for Cloud Cost Reduction 

Many studies (such as one done by Cloud4C) have indicated that enterprises often over-provision cloud infrastructure and a significant proportion (about 30%) of cloud spending is wasted. This is due to various reasons, including the ease of environment provisioning, idle resources, oversizing, and lack of oversight. 

While production environments are more closely managed and monitored, this problem is seen quite often in test and other pre-production environments, which developers and teams are empowered to spin up to promote agility. Most often, these environments are over-provisioned (or sized larger than they need to be), contain data that is not useful after a certain time (for example, aged test data, obsolete builds, or old test logs), and are not properly cleaned up after use—developers and testers love to quickly move on to the next item on their backlog!

Use of Service Virtualization can help to alleviate some of this waste. As discussed above, replacing real application instances with virtual services helps to reduce the size of the test environment significantly. Compared to complex applications, virtual services are also easier and faster to deploy and undeploy, making it easier for pipeline engineers to automate cleanup in their CI/CD pipeline scripts.  
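
A small example of baking that cleanup into test automation: a pytest session fixture that deploys a virtual service before the run and always undeploys it afterwards. The VSE admin endpoints used here are hypothetical placeholders, not a real product API.

```python
# Sketch of automated cleanup: a pytest fixture deploys a virtual service
# before the test session and always undeploys it afterwards, so ephemeral
# cloud test environments are not left running (and billing) after the run.
# The VSE management endpoints used here are hypothetical placeholders.
import pytest
import requests

VSE_ADMIN = "https://vse.example.test/admin"  # placeholder URL


@pytest.fixture(scope="session")
def payments_stub():
    resp = requests.post(f"{VSE_ADMIN}/services", json={"name": "payments-stub"})
    resp.raise_for_status()
    service_id = resp.json()["id"]
    try:
        yield service_id
    finally:
        # Teardown runs even if tests fail, keeping cloud spend in check.
        requests.delete(f"{VSE_ADMIN}/services/{service_id}")


def test_checkout_uses_stub(payments_stub):
    # A real test would exercise the application under test against the stub.
    assert payments_stub
```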

In many cases, virtual service instances may be shared between multiple applications that are dependent on the same end point. Automatically scaling VSEs can also help to limit the initial size of test environments. 

Finally, the VSEs to which actual virtual services are deployed can be actively monitored to track usage and ensure they are de-provisioned when no longer used. 

(Continue on to Part 2)

 

Optimize continuous delivery with continuous reliability
https://sdtimes.com/devops/optimize-continuous-delivery-with-continuous-reliability/ (Wed, 10 Aug 2022)

The 2021 State of DevOps report indicates that greater than 74% of organizations surveyed have Change Failure Rate (CFR) greater than 16% (the report provides a range from 16% to 30%). Of these, a significant proportion (> 35%) likely have CFRs exceeding 23%. 

This means that while organizations seek to increase software change velocity (as measured by the other DORA metrics in the report), a significant number of deployments result in degraded service (or a service outage) in production and subsequently require remediation (including hotfixes, rollbacks, fixing forward, patches, etc.). These frequent failures potentially impair revenue and customer experience, and incur significant remediation costs. 

Most customers whom we speak to are unable to proactively predict the risk of a change going into production. In fact, the 2021 State of Testing in DevOps report also indicates that greater than 70% of organizations surveyed are not confident about the quality of their releases. A smaller, but still significant, proportion (15%) “Release and Pray” that their changes won’t degrade production. 

Reliability is a key product, service, and system quality metric. CFR is one of many reliability metrics; others include availability, latency, throughput, performance, scalability, and mean time between failures. While reliability engineering has been an established discipline in software, we clearly have a problem ensuring reliability.  

In order to ensure reliability for software systems, we need to establish practices that plan for, specify, engineer, measure and analyze reliability continuously along the DevOps life cycle. We call this “Continuous Reliability” (CR).  

Key Practices for Continuous Reliability 

Continuous Reliability derives from the principle of “Continuous Everything” in DevOps. The emergence (and adoption) of Site Reliability Engineering (SRE) principles has led to CR evolving to be a key practice in DevOps and Continuous Delivery. In CR, the focus is to take a continuous proactive approach at every step of the DevOps lifecycle to ensure that reliability goals will be met in production. 

This implies that we are able to understand and control the risks of changes (and deployments) before they make it to production. 

The key pillars of CR are shown in the figure below:

CR is not, however, the purview of site reliability engineers (SREs) alone. Like other DevOps practices, CR requires active collaboration among multiple personas such as SREs, product managers/owners, architects, developers, testers, release/deployment engineers and operations engineers. 

Some of the key practices for supporting CR (that are overlaid on top of the core SRE principles) are described below.

1)    Continuous Testing for Reliability

Continuous Testing (CT) is an established practice in Continuous Delivery. However, the use of CT for continuous reliability validation is less common. Specifically for validation of the key reliability metrics (such as availability, latency, throughput, performance, scalability), many organizations still use waterfall-style performance testing, where most of the testing is done in long duration tests before release. This not only slows down the deployment, but does an incomplete job of validation. 

Our recommended approach is to validate these reliability metrics progressively at every step of the CI/CD lifecycle. This is described in detail in my prior blog on Continuous Performance Testing.
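
A minimal sketch of such a progressive gate is shown below: each pipeline stage checks the same reliability metrics against progressively stricter thresholds instead of deferring everything to one large pre-release performance test. The stage names and numbers are placeholders, not recommendations.

```python
# Illustrative "reliability gate" that can run at each CI/CD stage with
# progressively stricter thresholds, instead of one long end-of-cycle
# performance test. The numbers are placeholders, not recommendations.
STAGE_THRESHOLDS = {
    # stage name:      (max p95 latency in ms, max error rate)
    "component-test": (300, 0.05),
    "integration": (250, 0.02),
    "pre-production": (200, 0.01),
}


def reliability_gate(stage: str, p95_latency_ms: float, error_rate: float) -> bool:
    max_latency, max_errors = STAGE_THRESHOLDS[stage]
    passed = p95_latency_ms <= max_latency and error_rate <= max_errors
    print(f"[{stage}] p95={p95_latency_ms}ms err={error_rate:.2%} -> "
          f"{'PASS' if passed else 'FAIL'}")
    return passed


if __name__ == "__main__":
    # The measurements would normally come from the stage's own test run.
    if not reliability_gate("integration", p95_latency_ms=310, error_rate=0.015):
        raise SystemExit(1)  # fail the pipeline stage
```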

2)    Continuous Observability 

Observability is also an established practice in DevOps. However, most observability solutions (such as Business Services Reliability) focus on production data and events. 

What is needed for CR is to “shift-left” observability into all stages of the CI/CD lifecycle, so that reliability insights can be gleaned from pre-production data (in conjunction with production data). For example, it is possible to glean reliability insights from patterns of code changes (in source code management systems), test results and coverage, as well as performance monitoring by correlating such data with past failure/reliability history in production.   

Pre-production environments are more data rich than production environments (in terms of variety); however, most of the data is not correlated and mined for insights. Such observability requires us to set up “systems of intelligence” (SOI, see figure below) where we continuously collect and analyze pre-production data along the CI/CD lifecycle to generate a variety of reliability predictions as and when applications change (see next section). 

3)    Continuous Failure Risk Insights and Prediction 

An observability system in pre-production allows us to continuously assess and monitor failure risk along the CI/CD lifecycle. This allows us to proactively assess (and even predict) the failure risk associated with changes. 

For example, we set up a simple SOI for an application (using Google Analytics), where we collected code change data (from the source code management system) as well as the history of escaped defects (from past deployments to production). By correlating this data using a gradient boosted tree algorithm, we were able to establish an understanding of which code change patterns resulted in higher levels of escaped defects. In this case, we found a significant correlation between code churn and defects leaked (see figure below).

We were then able to use the same analytics to predict how escaped defects would change based on code churn in the current deployment (see inset in the figure above). 
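
For readers who want to experiment with the idea, the toy sketch below trains a gradient boosted model on made-up historical deployment features and scores a current change set. It illustrates the approach only; the features, data, and model choice are assumptions, not the actual system of intelligence described above.

```python
# Toy reproduction of the idea: fit a gradient boosted model on historical
# code-change features versus escaped defects, then score the change set
# currently moving through the pipeline. The features and data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Historical deployments: [lines churned, files touched, authors involved]
X_history = np.array([
    [120, 4, 1], [950, 22, 5], [300, 8, 2], [1400, 31, 7],
    [60, 2, 1], [700, 18, 4], [210, 6, 2], [1100, 26, 6],
])
y_escaped_defects = np.array([0, 6, 1, 9, 0, 4, 1, 7])

model = GradientBoostingRegressor(random_state=0).fit(X_history, y_escaped_defects)

# Score the current deployment's change set.
current_change = np.array([[820, 20, 5]])
predicted = model.predict(current_change)[0]
print(f"Predicted escaped defects for this deployment: {predicted:.1f}")
```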

While this is a very simple example of reliability prediction using a limited data set, we can do continuous failure risk prediction by exploiting a broader set of data from pre-production, including testing and deployment data. 

For example, in my previous article on Continuous Performance Testing, I discussed various approaches for performance testing of component-based applications. Such testing generates a huge amount of data that is extremely difficult to process manually. An observability system can then be used to collect the data to establish baselines of component reliability and performance, and in turn used to generate insights in terms of how system reliability may be impacted by changes in individual application components (or other system components). 

4)    Continuous Feedback  

One of the key benefits of an observability system is the ability to provide quick, continuous feedback to the development, test, release, and SRE teams on the risk associated with changes, along with helpful insights on how to address that risk. This allows development teams to proactively address risks before changes are deployed to production. For example, as soon as developers perform a commit (or open a pull request), they can be alerted to the failure risks associated with the changes they have made. Testers can get feedback on which tests are the most important to run. Similarly, SREs can get early planning insights into the level of error budgets they need to plan for the next release cycle. 

Next up: Continuous Quality 

Reliability, however, is just one dimension of application/system quality. It does not, for example, fully address how we maximize customer experience that is influenced by other factors such as value to users, ease of use, and more. In order to get true value from DevOps and Continuous Delivery initiatives, we need to establish practices for predictively attaining quality – we call this “Continuous Quality.” I will discuss this in my next blog. 

Service virtualization: A continuous life cycle technology
https://sdtimes.com/test/service-virtualization-a-continuous-life-cycle-technology/ (Mon, 01 Aug 2022)

Service virtualization has helped countless organizations perform tests on application components that live outside their development organizations, or that are not available to the tester when needed to complete their tests.

Virtualization enables organizations to put up a virtual service more easily than they can “yank a box on an Amazon server,” explained Shamim Ahmed, DevOps CTO and evangelist at Broadcom. Yet today, service virtualization (SV) can be seen as a life cycle technology, empowering what Ahmed calls continuous virtualization. This, he said, “enables even developers doing parallel development right now, just for testing. That’s on the left-hand side. And on the right-hand side, we’ve seen extremes, like customers using service virtualization for chaos testing.”

SV helped early-adopting organizations to decouple teams, said Diego Lo Giudice, vice president and principal analyst at Forrester, so that you could decouple customer from client. But, he noted, “with organizations being broken up into small teams, and parallelizing, the work with Agile became very hard. Project managers thought they could manage that. And there’s no way you can really manage a bunch of small agile teams working; making sure that you synchronize them through project management is impossible. And so service virtualization was kind of used a bit to decouple, at least from the testing perspective.”

So, where is service virtualization being used beyond testing?

Service virtualization use cases

Lo Giudice said SV remains mainly a testing capability, though he is seeing accelerated use of SV in the API world. “I haven’t really gotten, you know, beyond the typical use cases of testing unreachable or expensive third-party resources,” he said, noting that the biggest use case he keeps seeing is virtualizing mainframe environments. “I love the example a CEO gave me that he was saving a lot of money with service virtualization simply because one of his teams, for testing purposes, couldn’t access the mainframe. They only had a window of 30 minutes a month, and they had to wait every time for those 30 minutes. With service virtualization, they were able to virtualize that access to the mainframe, and therefore the team now kind of had the virtual access to the mainframe available all the time.”

Using service virtualization with APIs, Lo Giudice said, is “just one of the types of testing that needs to be done; integration tests, that activity that can be automated, software delivery pipelines. I see it a lot there.”

Another area where service virtualization is being used is creating employee onboarding environments. Alaska Airlines uses Parasoft’s virtualization solution for its training, according to Ryan Papineau, a senior software engineer at the airline. With virtualization, he said, “we’re able to scale the amount of people that we have go through our training program.” While there are typically no test cases, Alaska can use the environment to see if the users can perform certain tasks, but none of that gets recorded or impacts the production environment. 

Service virtualization and test data management

But perhaps the biggest area of SV growth is in the test data management (TDM) testing space – a term that Papineau said is “kind of messy, because it can mean a lot of things.” It has become, in a word or two, a catch-all buzzword.

“We’ve been screening some new automation engineers, and they’ll put test data management on their resume. But you’ll never see any concept of any tools or techniques listed,” Papineau said. “What I believe that to be is they’re listing it, to say ‘Hey, I use data-driven tests and had Excel,’ and I’m like, that’s not what I’m looking for. I’m looking for data structures and relationships and databases. And that life cycle of creation to modification to deletion. And using an ETL tool, or custom scripts, which we use separately.” 

Papineau said that Parasoft’s solution essentially uses data and iterates it over APIs, records it and creates the relationships with the data. Papineau said, “You get this nice exploded, fancy UI that has all the relationships and you can drill down and do cloning and subsetting, so it has a lot of the old traditional test data management aspects to it, but all within their context.” 

Broadcom’s Ahmed added that his company, which acquired the Lisa SV software developed by iTKO through its purchase of CA, is seeing much more synergy between service virtualization and test data management. “When we acquired Lisa, TDM was not that big. But now with all this GDPR, and all the other regulations around data privacy, TDM is really hard. And it’s one of the biggest problems the customers are grappling with.”

Ahmed believes SV and TDM go hand-in-glove. “The way they work together, I think, is another key evolution of how the use of service virtualization has evolved,” he said. “Using SV is actually one of the easier ways to do test data management. Because, you know, you can actually record the test data by recording the back and forth between a client and a server. So that gives you an opportunity to create lightweight data, as opposed to using the more traditional test data mechanisms, particularly so for API-based systems.”

He noted that the use of SV reduces “the tedium burden,” because creating test data for an emulator places a much lower TDM burden on testers and everybody else than creating test data for a live application.

System integrations

While much about service virtualization has gone unchanged over the last few years, much around it has changed, according to Lo Giudice. Developers are choosing open source more, deciding they don’t need all the sophistication vendors are providing. “I’ve got data that shows the adoption of service virtualization has never really gone over 20%,” he said. “When you ask developers and testers, what is it that you’re automating around in 2022, I think the system integrators” are the only ones for whom this is key. 

“It’s actually very useful” in integration projects, Lo Giudice said. “If you think about Lloyds Banking, a customer that’s got a complex landscape of apps, and you’re doing integration work with good partnerships going on,” service virtualization can be quite beneficial. “If you’ve got an app and it interfaces another 10 big apps, you’d better use service virtualization to automate that integration,” he said.

Integration projects between assets held on-premises and those residing in the cloud caused some hardships for Alaska Airlines, Papineau said. The problem, he said, stemmed from internal permissions and controls into the cloud. One of their developers was taking older data repository methods and deploying them to the cloud, and struggled with the internal permissions between on-prem and the cloud.

Papineau said organizations have to understand their firewalls and the access to servers. “Are your server and client both local? Are they both in the cloud, and does one have to traverse between the other?” Papineau said. “So what we did there is we stumbled on getting the firewall rules exposed, because now all of these different clients are trying to talk to this virtual server. And so it’s like, ‘Oh, you got this one going up. Now you need to do another firewall request for this one?’ And I am not kidding you. When we did the Virgin (America) acquisition, firewall requests were the largest nightmare for the longest time. So that’s why it’s an internal problem we struggled with and just gave up on it like, no, this is just taking too much time. This should not be this hard. This literally is a firewall overhead problem that we ran into.”

Continuous virtualization

Virtualization is no longer something you do only just before testing. From the time you start your backlog and your design, you have to think about what services you need, and how you design them correctly.

Then, according to Broadcom’s DevOps CTO and evangelist Shamim Ahmed, you have to think about how to evolve those services. “We think of service virtualization evolving and on the continuum,” he said. “You start with something simple we call a synthetic virtual service that can be created very easily – not using the traditional record-response mechanism.”

He noted that the old way of creating a virtual service relied on the fact that the endpoint already exists. That’s what enabled record and replay,  but in today’s development environment, the endpoint may not exist – all you might have is an API specification, and you might not even know whether the API has been implemented or not. “You need to have new ways of creating a virtual service, a very simple, lightweight service that can be created for something like a Swagger definition of an API. Developers need that when they’re doing unit testing, for example. The way we look at this is what we call progressive virtualization – that simple thing that we created can now evolve, as you move your application from left to right in the CI/CD life cycle.”
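
A bare-bones illustration of a synthetic virtual service generated from an API specification might look like the sketch below, which serves the example responses found in an OpenAPI (Swagger) document for each GET path. The file name and the assumption that examples live under the 200 response are illustrative; real SV tools generate far richer behavior.

```python
# Minimal sketch of a "synthetic virtual service" generated from an OpenAPI
# (Swagger) definition before the real endpoint exists: it serves each GET
# path's example response. The spec file name and the assumption that an
# example lives under the 200 response are illustrative.
import json

from flask import Flask, jsonify

app = Flask(__name__)

with open("openapi.json") as fh:  # hypothetical spec file
    spec = json.load(fh)


def make_handler(payload):
    def handler(**kwargs):  # absorbs any path parameters
        return jsonify(payload)
    return handler


for path, ops in spec.get("paths", {}).items():
    example = (
        ops.get("get", {})
        .get("responses", {})
        .get("200", {})
        .get("content", {})
        .get("application/json", {})
        .get("example", {"message": "synthetic response"})
    )
    # Convert OpenAPI templates like /orders/{id} into Flask's /orders/<id>.
    flask_path = path.replace("{", "<").replace("}", ">")
    app.add_url_rule(flask_path, endpoint=path, view_func=make_handler(example))


if __name__ == "__main__":
    app.run(port=8082)
```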

He offered an example: once the application gets to the stage of integration testing, you perhaps need to enhance that synthetic virtual service with some more behavior. So more data is added, and then when you get to system testing, you need to replace that synthetic virtual service with the real recording, so it becomes progressively realistic as you go from left to right. 

“There’s a whole life cycle that we need to think about around continuous virtualization that talks about the kind of virtual servers needed to do integration testing, or build verification,” Ahmed said. “And of course, all the other kinds of tests – functional, performance and even security testing – virtual services are just as applicable for those things…  because if you think about the number of third-party systems that a typical application accesses in this API-driven world, you simply can’t run many of your tests end-to-end without running into some kind of external dependency that you do not control, from the perspective of functional, performance and security testing. So you can start to emulate all of those characteristics in a virtual service.”

Getting around roadblocks to VSM metrics
https://sdtimes.com/valuestream/maneuvering-around-vsm-roadblocks/ (Tue, 28 Jun 2022)

While many organizations think they have value stream management, they are encountering roadblocks to gaining the metrics they need from it, according to Laureen Knudsen, chief transformation officer at Broadcom, in the talk “Maneuvering around VSM roadblocks” at {virtual} VSMcon 2022.

A recent study by Broadcom found that 88% of people say they are doing value stream management, but only 42% say they have anything defined as a value stream. 

A lot of organizations today are focusing on how to eliminate the last few siloes in their organizations, how to get the visibility they’ve been promised across their whole product lifecycle, and how to use data effectively and efficiently, according to Knudsen.

To read the full article, visit VSM Times where the article was originally published.

Continuous test data management for microservices, Part 2: Key steps
https://sdtimes.com/test/continuous-test-data-management-for-microservices-part-2-key-steps/ (Tue, 14 Jun 2022)

This is part 2 in a series on applying test data management (TDM) to microservices. Part 1 can be found here


The continuous TDM process for microservices applications is similar to that for general continuous TDM, but tailored to the nuances of the architecture. The key differences are as follows: 

Step 1(b): Agile Design

Rigorous change impact analysis during this step is key to reducing the testing (and the TDM) burden for microservices applications—especially in the upper layers of the test pyramid and the CD stages of the lifecycle. There are various ways to do this; a few highlights follow: 

(a)   Code-change-based impact analysis (also known as a white-box, inside-out approach). Through this approach, we identify which services and transactions are affected by specific code changes in implementing backlog requirements. We then focus testing and TDM efforts on those services and transactions affected. This approach is supported by tools such as Broadcom TestAdvisor and Microsoft Test Impact Analysis. This approach is more useful for white and gray box testing, specifically unit and component testing.  

(b)  Model flow-based impact analysis (also known as a black-box, outside-in approach). Here we do change impact analysis using flows in model-based testing. This analysis helps to highlight key end-to-end or system integration scenarios that need to be tested, and can also be traced down to individual components and source code. This approach is supported by such tools as Broadcom Agile Requirements Designer, and is more beneficial for testing in the upper layers of the test pyramid. 

I recommend a combination of both approaches to ensure sufficient test coverage, while minimizing the number of tests in a microservices context. Based on the change impact set, we prepare test data for the tests discussed in the previous section. 
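
To ground approach (a), here is a deliberately naive sketch of code-change-based impact analysis: it maps files changed since the last release to the services that own them, so only those services' tests and test data need to be prepared. The repository layout and service ownership mapping are assumptions; tools like the ones named above do this far more precisely.

```python
# Deliberately naive sketch of code-change-based impact analysis: map files
# changed since the last release to the microservices that own them, so only
# the affected services' tests (and test data) need to be prepared.
# The directory-to-service mapping is an assumption about repository layout.
import subprocess

SERVICE_OWNERSHIP = {
    "services/orders/": "order-service",
    "services/payments/": "payment-service",
    "services/catalog/": "catalog-service",
}


def changed_files(base_ref: str = "origin/main") -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def impacted_services(files: list[str]) -> set[str]:
    return {
        service
        for prefix, service in SERVICE_OWNERSHIP.items()
        for f in files
        if f.startswith(prefix)
    }


if __name__ == "__main__":
    print("Impacted services:", impacted_services(changed_files()))
```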

Step 2(a): Agile Parallel Development 

As discussed in the previous section, as part of development, a component developer must also define and implement the following APIs (a brief sketch follows the list):

  •  APIs that allow us to set test data values in the component data store. These are sometimes referred to as mutator APIs. 
  • APIs that allow us to extract test data values, for example, from instances of components in production. These are also known as accessor APIs.
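
The sketch below shows what such mutator and accessor endpoints might look like on a single microservice, using an in-memory dictionary as a stand-in for the service's real, encapsulated data store. The paths and payload shapes are illustrative assumptions.

```python
# Hedged sketch of the "mutator" and "accessor" test data APIs a microservice
# could expose alongside its business API. The in-memory dictionary stands in
# for the service's real, encapsulated data store; paths are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)
_test_data: dict[str, dict] = {}  # stand-in for the component's private store


@app.route("/test-data/<entity_id>", methods=["PUT"])
def set_test_data(entity_id):
    # Mutator: lets a test (or TDM pipeline) seed known state before a run.
    _test_data[entity_id] = request.get_json()
    return jsonify({"stored": entity_id}), 201


@app.route("/test-data/<entity_id>", methods=["GET"])
def get_test_data(entity_id):
    # Accessor: lets tests extract current values, for assertions or for
    # harvesting production-like records.
    return jsonify(_test_data.get(entity_id, {}))


if __name__ == "__main__":
    app.run(port=8083)
```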

Developers should use the white-box change impact testing technique discussed above to focus their unit and component testing efforts. 

Step 2(b): Agile Parallel Testing

This is an important stage in which testers and test data engineers design, or potentially generate or refresh, the test data for test scenarios that have been impacted by changes and that will be run in subsequent stages of the CI/CD lifecycle. This assessment is based on the backlog items under development. Testers use the TDM approaches described above for cross-service system testing and end-to-end testing.  

In addition, the test data will need to be packaged, for example, in containers or using virtual data copies. This approach can ease and speed provisioning into the appropriate test environment, along with test scripts and other artifacts.  

Step 3: Build

In this step, we typically run automated build verification tests and component regression tests using the test data generated in the previous step. 

Step 4: Testing in the CD Lifecycle Stages 

The focus in these stages is to run tests in the upper layers of the test pyramid using test data created during step 2(b).  The key in these stages is to minimize the elapsed time TDM activities require. This is an important consideration: The time required to create, provision, or deploy test data must not exceed the time it takes to deploy the application in each stage.  

How do you get started with continuous TDM for microservices?

Continuous TDM is meant to be practiced in conjunction with continuous testing. Various resources offer insights into evolving to continuous testing. If you are already practicing continuous testing with microservices, and want to move to continuous TDM, proceed as follows:   

  • For new functionality, follow the TDM approach I have described. 
  • For existing software, you may choose to focus continuous TDM efforts on the most problematic or change-prone application components, since those are the ones you need to test most often. It would help to model the tests related to those components, since you can derive the benefits of combining TDM with model-based testing. While focusing on TDM for these components, aggressively virtualize dependencies on other legacy components, which can lighten your overall TDM burden. In addition, developers must provide APIs to update and access the test data for their components. 
  • For other components that do not change as often, you need to test less often. As described above, virtualize these components while testing others that need testing. In this way, teams can address TDM needs as part of technical debt remediation for these components. 

Continuous test data management for microservices, Part 1: Key approaches
https://sdtimes.com/microservices/continuous-test-data-management-for-microservices/ (Mon, 06 Jun 2022)

Applying TDM to microservices is quite challenging. This is due to the fact that an application may have many services, each with its own underlying diverse data store. Also, there can be intricate dependencies between these services, resulting in a type of ‘spaghetti architecture.’

For these systems, TDM for end-to-end system tests can be quite complex. However, it lends itself very well to the continuous TDM approach. As part of this approach, it is key to align TDM with the test pyramid concept.

Let’s look at the TDM approaches for tests in the various layers of the pyramid. 

TDM Approach for Supporting Microservices Unit Tests

Unit tests test the code within the microservice at the lowest level of granularity. This is typically at a function or method level within a class or object, no different from how we do unit testing for other types of applications. Most test data for such tests should be synthetic. Such data is typically created by the developer or software development engineer in test (SDET), who uses “as-code” algorithmic techniques, such as combinatorial generation. Through this approach, teams can establish a high level of test data coverage. While running unit tests, we recommend that all dependencies outside the component (or even the function being tested) are stubbed out using mocks or virtual services.
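
As a small, hedged example of combinatorial “as-code” test data, the snippet below enumerates every combination of a few input dimensions and feeds them to a unit-level check. The field names, values, and fee rule are invented for illustration.

```python
# Small "as-code" combinatorial test data example: every combination of the
# chosen dimensions is generated deterministically and fed to a unit check.
# The field names, values, and fee rule are invented for illustration.
import itertools

ACCOUNT_TYPES = ["basic", "premium"]
REGIONS = ["us", "eu", "apac"]
BALANCES = [0, 999_999]


def combinatorial_cases():
    for account_type, region, balance in itertools.product(
        ACCOUNT_TYPES, REGIONS, BALANCES
    ):
        yield {"accountType": account_type, "region": region, "balance": balance}


def test_fee_calculation_never_negative():
    for case in combinatorial_cases():
        fee = max(0, case["balance"] * 0.01)  # stand-in for the real function
        assert fee >= 0
```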

TDM Approach for Supporting Microservices Component or API Tests

This step is key for TDM of microservices, since the other tests in the stack depend on it.  In these tests, we prepare the test data for testing the microservice or component as a whole via its API.

There are various ways of doing this depending on the context: 

  1. Generate simple synthetic test data based on the API specs. This is typically used for property-based testing or unit testing of the API.
  2. Generate more robust synthetic test data from API models, for example, by using a test modeling tool like Broadcom Agile Requirements Designer. This enables us to do more rigorous API testing, for example for regression tests.
  3. Generate test data by traffic sniffing a production instance of the service, for example, by using a tool like Wireshark. This helps us create more production-like data. This approach is very useful if for some reason it isn’t possible to take a subset of data from production instances. 
  4. Generate test data by sub-setting and masking test data from a production instance of the service, or by using data virtualization. Note that many microservice architectures do not allow direct access to the data store, so we may need special data access APIs to create such test data.  

Regardless of the approach, in most cases the test data for a microservice must be prepared by the developer or producer of the microservice, and made available as part of the service definition. Specifically, additional APIs should be provided to set up the test data for that component. This is necessary to allow for data encapsulation within a microservice. It is also required because different microservices may have various types of data stores, often with no direct access to the data. 

This also allows the TDM of microservices applications to re-use test data, which enables teams to scale tests at higher layers of the pyramid. For example, a system or end-to-end test may span hundreds of microservices, with each having its own unique encapsulated data storage. It would be very difficult to build test data for tests that span different microservices using traditional approaches.   

Again, for a single component API test, it is recommended that all dependencies from the component be virtualized to reduce the TDM burden placed on dependent systems. 

TDM Approach for Supporting Microservices Integration and Contract Tests

These tests validate the interaction between microservices based on behaviors defined in their API specifications.

The TDM principles used for such testing are generally the same as for the process for API testing described previously. The process goes as follows: 

For contract definition, we recommend using synthetic test data, for example, based on the API specs, to define the tests for the provider component. 

The validated contract should be a recorded virtual service based on the provider service. This virtual service can then be used for consumer tests. Note that in this case, a virtual service recording forms the basis of the test data for the consumer test. 

TDM Approach for Supporting an X-service System Test or Transaction Test at the API Level 

In this type of test, we have to support a chain of API calls across multiple services. For example, this type of test may involve invoking services A, B, and C in succession.

The TDM approach for supporting this type of test is essentially the same as that for a single API test described above—except that we need to set up the test data for each of the services involved in the transaction. 

However, an additional complexity is that you also need to ensure that the test data setup for each of these services (and the underlying services they depend on) is aligned, so the test can be successfully executed. Data synchronization across microservices is largely a data management issue, not specific to TDM per se, so you need to ensure that your microservices architecture sufficiently addresses this requirement. 
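
Assuming each service exposes a test data setup API of the kind described earlier, aligning the data for a cross-service transaction test can be sketched as seeding the same business key into every participating service before the end-to-end call is made. The endpoints and payloads below are hypothetical.

```python
# Hypothetical sketch of seeding aligned test data for a transaction test that
# spans services A, B, and C: each service is seeded through its own test data
# setup API with the same business key, so the end-to-end call finds consistent
# state everywhere. Endpoints and payloads are placeholders.
import requests

SETUP_ENDPOINTS = {
    "service_a": "https://a.example.test/test-data/orders",
    "service_b": "https://b.example.test/test-data/payments",
    "service_c": "https://c.example.test/test-data/shipments",
}


def seed_transaction(order_id: str) -> None:
    payloads = {
        "service_a": {"orderId": order_id, "status": "NEW"},
        "service_b": {"orderId": order_id, "authorized": True},
        "service_c": {"orderId": order_id, "warehouse": "W-1"},
    }
    for service, url in SETUP_ENDPOINTS.items():
        resp = requests.put(f"{url}/{order_id}", json=payloads[service], timeout=10)
        resp.raise_for_status()  # fail fast if any service could not be seeded


if __name__ == "__main__":
    seed_transaction("ORD-1001")
```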

Assuming data synchronization between microservices is in place, the following approaches are recommended to make test management easier: 

  1. As mentioned before, use model-based testing to describe the cross-service system tests. This allows you to specify test data constraints for the test uniformly across affected services, so that the initial setup of test data is correct. This is done using the test data setup APIs we discussed above.
  2. Since setting up test data definition across services is more time consuming, I recommend minimizing the number of cross-service tests, based on change impact testing. Run transaction tests only if the transaction, or any of the underlying components of the transaction, have changed. Again, this is a key principle of continuous testing that’s aligned with the test pyramid. 
  3. If there have been no changes to a participating component or underlying sub-component, we recommend using a virtual service representation of that component. This will further help to reduce the TDM burden for that component. 

TDM Approach for Supporting End-to-End Business Process or User Acceptance Tests 

The TDM approach for these tests is similar to that for system tests described above, since user actions map to underlying API calls. Such tests are likely to span more components. 

Many customers prefer to use real components, rather than virtual services, for user acceptance testing, which means that the TDM burden can be significant. As before, the key to reducing TDM complexity for such tests is to reduce the number of tests to the bare minimum, using techniques like change-impact testing, which was discussed above. I also recommend you use the change-impact approach to decide whether to use real components or their virtual services counterparts. If a set of components has changed as part of the release or deployment, it makes sense to use the actual components. However, if any dependent components are unchanged, and their test data has not been refreshed or is not readily available, then virtual services can be considered.

Broadcom acquires VMware for $61 billion
https://sdtimes.com/softwaredev/broadcom-acquires-vmware-for-61-billion/ (Thu, 26 May 2022)

Broadcom, a semiconductor and infrastructure software solutions company, and VMware, a virtualization company, today announced that they have entered into an agreement under which Broadcom will take ownership of all of the outstanding shares of VMware.

This will take place as a cash-and-stock transaction that values VMware at around $61 billion. With this, Broadcom will also acquire $8 billion of VMware net debt.

Upon the closing of this transaction, the Broadcom Software Group will be rebranded and operate as VMware, adding Broadcom’s infrastructure and security software solutions to an expanded VMware portfolio. 

Raghu Raghuram, chief executive officer of VMware, said, “VMware has been reshaping the IT landscape for the past 24 years, helping our customers become digital businesses. We stand for innovation and unwavering support of our customers and their most important business operations and now we are extending our commitment to exceptional service and innovation by becoming the new software platform for Broadcom. Combining our assets and talented team with Broadcom’s existing enterprise software portfolio, all housed under the VMware brand, creates a remarkable enterprise software player. Collectively, we will deliver even more choice, value and innovation to customers, enabling them to thrive in this increasingly complex multi-cloud era.”

This combination works to provide enterprise customers with an expanded platform of essential infrastructure solutions intended to accelerate innovation as well as address several information technology infrastructure needs.

According to the companies, this acquisition will allow customers to enjoy greater choice and flexibility to build, run, manage, connect, and protect applications at scale across diversified, distributed environments, regardless of where they run. 

“Building upon our proven track record of successful M&A, this transaction combines our leading semiconductor and infrastructure software businesses with an iconic pioneer and innovator in enterprise software as we reimagine what we can deliver to customers as a leading infrastructure technology company,” said Hock Tan, president and chief executive officer of Broadcom. “We look forward to VMware’s talented team joining Broadcom, further cultivating a shared culture of innovation and driving even greater value for our combined stakeholders, including both sets of shareholders.”

For more information on this acquisition, see the investors section of the Broadcom website. 

 

Report: Digital Product Management finds its footing
https://sdtimes.com/valuestream/report-digital-product-management-finds-it-footing/ (Wed, 25 May 2022)

The relatively new practice of digital product management is helping organizations better achieve their objectives by shifting to a product-focused business model, according to a new report from Dimensional Research, sponsored by Broadcom Software.

Digital product management (DPM) encompasses traditional product management, but adds continuous improvement through experimentation and validation, as well as relying on metrics such as OKRs to make informed product decisions. Across organizations, budget, teams and tasks are being connected to business objectives to better understand how their product’s performance affects the business. This, in turn, helps organizations improve productivity and efficiency.

One of the biggest hurdles facing IT – especially now that software is driving business outcomes to a huge degree – is that there remains a real lack of understanding of how business and development teams should work together.  Laureen Knudsen, chief transformation officer in the Agile Operations Division of Broadcom Software, explained: “I’m part of the Forbes councils. And they ask these questions that you answer, and you end up in a panel article.  And one of the questions that they asked was, there’s a lot of input coming to the technical teams, how do you prioritize it? And I was the only person that said, ‘You don’t, you throw it back at the business.’ Right? You get everybody in the business together, and you make them prioritize it, because that’s not my job as the technology leader. It’s the business leader, but there’s so many people that have done things poorly, all the way back to agility, that are trying to now say, this is how you do something, but it’s just a poor implementation. And it’s a lack of understanding. I thought that if Forbes doesn’t even understand where prioritization should lie, we’re in some trouble.”

Yet organizations are stepping up to deal with these issues. Focusing on company priorities for 2022, the report found that 56% of responding organizations are undertaking initiatives for delivering more customer value, followed by improved product quality at 52%. Half are focused on becoming more efficient and 40% are focused on reliable product delivery, the report said.

Knudsen said the problem is especially glaring in companies that create software not for sale but for internal use. “You need the companies, especially those that don’t sell software that they created internally, they need to know what that means from the design level and the product management level,” she said. DPM helps them focus on the importance of the software they’re creating for that internal use and how that helps deliver value to the customer – which, in this type of case, is the employee, she noted.

Organizations, she said, that are trying DPM and working through it “are actually finding that it’s really beneficial to them when they understand what their people need, and what they need to really be doing.”

As for whether DPM initiatives are helping organizations deliver value, are a work in progress, or are just a fad, 94% of respondents stated DPM has been successful and provided value to their business, according to the report. Eighty-six percent reported that DPM helps them better connect business objectives and customer needs, while 89% stated that DPM solutions make digital transformation easier.

Shifting from a focus on software projects to software products lies at the heart of digital transformation and value stream management, which provides organizations with a view into their processes so bottlenecks can be eliminated and production can flow in a predictable manner.

The report further found that DPM adoption faces some challenges, with respondents saying the top challenge is resistance to change. Half said integrating the DPM solution with other applications and systems proved challenging. Yet 89% said despite these hurdles, their companies are adding new products to their DPM solution and process over the next 18 months. 

“DPM may be a young methodology, but as this research shows, it has very quickly proven its value to businesses that have adopted it,” Knudsen said. “DPM is not a solution looking for a problem, but rather a solution and enabler for a direction that companies have already taken.”

Tasktop and Broadcom partner on value stream solution
https://sdtimes.com/valuestream/tasktop-and-broadcom-partner-on-value-stream-solution/ (Wed, 09 Mar 2022)

Value stream management companies Tasktop and Broadcom have announced a new partnership to enable companies to better measure their business value. 

Tasktop’s technology will power Broadcom’s ValueOps Connectors, which will synchronize data between software development tools and Broadcom’s ValueOps Value Stream Management Solution.

According to the companies, this solution will provide companies with the ability to collect and unify their value stream data. 

Key benefits will include extracting siloed data to create complete data sets, increased team collaboration, and reduction of errors related to manual data collection. 

 “Every business is a software business and accessible, reliable data is needed to derive value during the software delivery process. This requires solutions to connect technology and the business with the right metrics to enable more effective enterprise digital transformation,” said Mik Kersten, CEO at Tasktop. “Broadcom’s ValueOps Connectors powered by Tasktop help uncover data lost in silos, the key to unlocking all information needed for effective value stream management.”
