Service Virtualization Use Cases for Cloud-Native Applications
By Shamim Ahmed | SD Times | November 4, 2022

This is the second article in a two-part series. The first part is here.

Service virtualization is uniquely suited to support the needs of cloud-native applications. The solution's principles support all the key attributes of cloud-native architectures. I would contend that service virtualization and cloud-native architectures are built for one another. Just as service virtualization is considered essential for agile development, it is equally necessary for supporting cloud-native applications.

In this article, we will discuss how Broadcom Service Virtualization supports each of the 10 key attributes of cloud-native application systems.

Continuous Microservices Application Development and Testing 

One of the key attributes of cloud-native applications is the use of loosely coupled components, such as microservices. Because the components of such systems have numerous dependencies on one another, service virtualization is especially necessary to support the agile development, testing, and continuous delivery of these applications. This is one of the most popular use cases for service virtualization. See figure below.

 

Service virtualization can be used to support every aspect of microservices testing, including unit, component, integration, contract, and system testing. Plus, testing can be done continuously throughout the CI/CD pipeline. For more details on using service virtualization for component-based applications, please refer to my previous blogs (Continuous Service Virtualization Part 1 and Part 2). 

The following sections highlight a few key use cases:

a. Generation of Synthetic Virtual Services from Microservice APIs

Service virtualization can be used to generate a synthetic virtual service for a dependent component that does not yet exist or is unavailable, using synthetic request-response pairs derived from the service's API definition. This is especially useful during development and unit testing.

The synthetic virtual service may be progressively enhanced to support subsequent testing (such as integration and system tests) along the CI/CD lifecycle.
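To make this concrete, here is a minimal sketch (in Python, using only the standard library) of the idea: canned request-response pairs derived from a hypothetical order API definition are served by a lightweight HTTP stub that a unit test can point to. The paths, payloads, and port are illustrative assumptions, and the sketch is not a representation of any particular virtualization product.

```python
# Minimal sketch: serve canned request-response pairs, derived from a
# hypothetical order API definition, as a synthetic virtual service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative excerpt of the dependent service's API definition (assumed).
SPEC = {
    ("GET", "/orders/42"): {"status": 200,
                            "example": {"orderId": 42, "state": "SHIPPED"}},
    ("POST", "/orders"):   {"status": 201,
                            "example": {"orderId": 43, "state": "CREATED"}},
}

class SyntheticVirtualService(BaseHTTPRequestHandler):
    def _respond(self, method: str) -> None:
        entry = SPEC.get((method, self.path))
        if entry is None:                       # request not covered by the spec
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(entry["example"]).encode()
        self.send_response(entry["status"])
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        self._respond("GET")

    def do_POST(self):
        self._respond("POST")

if __name__ == "__main__":
    # The application under test is pointed at http://localhost:8080 instead of
    # the (not yet existing or unavailable) real dependency.
    HTTPServer(("localhost", 8080), SyntheticVirtualService).serve_forever()
```

As richer behavior is needed for integration and system tests, the canned pairs can be replaced or augmented with recorded traffic.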

b. Support for Microservices Contract Testing

A service consumer can test a service provider using synthetic request-response pairs developed from its API specification. A service provider can in turn test its interactions with a consumer using a validated contract—which can be implemented using a recorded virtual service. See figure below. 
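As an illustration, the sketch below expresses the two halves of a contract as plain pytest tests: the consumer is exercised against the recorded response, and the provider is verified by replaying the recorded request. The contract structure and the order payload are assumptions made for the example, not any specific contract-testing tool's format.

```python
# Sketch: the two halves of a consumer-driven contract as plain pytest tests.
# The recorded request-response pair and the provider stand-in are illustrative.
CONTRACT = {
    "request":  {"method": "GET", "path": "/orders/42"},
    "response": {"status": 200, "body": {"orderId": 42, "state": "SHIPPED"}},
}

def consumer_extract_state(payload: dict) -> str:
    """Consumer-side logic under test: reads the field it depends on."""
    return payload["state"]

def provider_handle(method: str, path: str) -> tuple[int, dict]:
    """Stand-in for the provider implementation that must honour the contract."""
    if method == "GET" and path == "/orders/42":
        return 200, {"orderId": 42, "state": "SHIPPED"}
    return 404, {}

def test_consumer_against_recorded_response():
    # Consumer test: drive the consumer with the contract's recorded response.
    assert consumer_extract_state(CONTRACT["response"]["body"]) == "SHIPPED"

def test_provider_honours_contract():
    # Provider verification: replay the recorded request against the provider.
    status, body = provider_handle(CONTRACT["request"]["method"],
                                   CONTRACT["request"]["path"])
    assert status == CONTRACT["response"]["status"]
    assert body == CONTRACT["response"]["body"]
```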

c. Support for Continuous Microservices Performance Testing

This is one of the most important use cases for service virtualization. With service virtualization, we can truly shift microservices performance testing left. This significantly reduces the need for time-consuming, end-to-end load tests that have to be conducted before release. Service virtualization enables limited, scaled performance testing at the component level to validate associated service level objectives (SLOs). This also enables scaled testing of transactions across multiple services using their APIs. For more details on this, please refer to my prior blog on continuous performance of microservices.
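A minimal sketch of what such a shifted-left performance check might look like is shown below: a single component is called concurrently (with its dependencies assumed to be virtualized), and the measured p95 latency is asserted against an assumed SLO. The function name, request count, and 200 ms target are illustrative assumptions.

```python
# Sketch: a component-level performance check asserting an assumed SLO.
# call_order_component() is a placeholder for invoking the component under test,
# which is assumed to be wired to virtual services rather than real dependencies.
import concurrent.futures
import statistics
import time

SLO_P95_MS = 200      # assumed service level objective for the component
REQUESTS = 500        # small, component-scoped load; not an end-to-end load test

def call_order_component() -> float:
    start = time.perf_counter()
    # ... invoke the component's API here (e.g., an HTTP call to the service) ...
    time.sleep(0.05)  # stand-in for the real call in this sketch
    return (time.perf_counter() - start) * 1000.0

def p95(samples: list[float]) -> float:
    return statistics.quantiles(samples, n=100)[94]

with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(lambda _: call_order_component(), range(REQUESTS)))

observed = p95(latencies)
assert observed <= SLO_P95_MS, f"p95 of {observed:.1f} ms breaches the {SLO_P95_MS} ms SLO"
```

A check like this can run on every component change in the CI pipeline, long before any release-time load test.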

d. Support for Easier Chaos and Negative Testing

Virtual services provide a means to support repeatable, structured chaos and negative testing. For example, they can simulate conditions such as non-responsiveness, downtime, or slow response times. This is much easier and far less time-consuming than powering off servers or taking down physical computing instances.
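The sketch below illustrates the idea with a simple Python wrapper that injects configurable faults (outage, slowness, intermittent failures) around an otherwise healthy stubbed dependency. The fault modes, rates, and names are assumptions chosen for illustration.

```python
# Sketch: wrap a virtual dependency with configurable fault injection for chaos
# and negative testing. Fault modes, rates, and names are illustrative.
import random
import time

class DependencyUnavailable(Exception):
    """Raised by the wrapper to simulate an unreachable dependency."""

def with_faults(stub_call, mode="none", slow_ms=3000, error_rate=0.3):
    """Return a wrapped stub that misbehaves according to the selected mode."""
    def wrapped(*args, **kwargs):
        if mode == "down":
            raise DependencyUnavailable("simulated outage")
        if mode == "slow":
            time.sleep(slow_ms / 1000.0)       # simulated degraded response time
        if mode == "flaky" and random.random() < error_rate:
            raise DependencyUnavailable("simulated intermittent failure")
        return stub_call(*args, **kwargs)
    return wrapped

# Example: make an otherwise healthy stub fail intermittently in a test run.
healthy_stub = lambda order_id: {"orderId": order_id, "state": "SHIPPED"}
flaky_stub = with_faults(healthy_stub, mode="flaky", error_rate=0.5)
```

A test can then assert that the consuming component retries, times out, or degrades gracefully under each fault mode.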

e. Support for Continuous Reliability Engineering

Use cases (c) and (d) above allow us to continuously validate the reliability of cloud-based applications by applying the principles of continuous testing at the component level. In this way, we can test every component change early in the lifecycle to see if its SLOs are met. With service virtualization, we can simulate dependent components along with their SLOs (or SLIs if SLOs are not available). For more information, please refer to my blog on Continuous Reliability.   

Support for Self-Service, Virtual, Shared, and Elastic Infrastructure 

This is the second key attribute of cloud-native applications. As discussed in part one of this series, teams can use virtual services as stand-ins for real components. Virtual services can easily be packaged into lightweight, container-based virtual service environments (VSEs), which can be deployed on demand into ephemeral cloud environments and scaled automatically as needed.

In fact, libraries of virtual services may themselves be offered as a service (see "Service Virtualization as a Service"), with all the capabilities of a cloud-native service, so that they may be governed and consumed across multiple teams and applications.

Support for Isolation from Server and Operating System Dependencies

Virtual services can be packaged into containers that may be ported across a variety of computing environments, regardless of where the real endpoints are hosted. Plus, they can be deployed across multiple hybrid computing environments with different types of hardware and operating systems. This includes not only cloud-native applications, but also legacy systems (such as mainframes) and other complex systems that these applications need to interact with.

One use case of this characteristic is service virtualization’s support for testing cloud-native applications with function-as-a-service (FaaS) dependencies. Generally speaking, FaaS implementations are not very portable across multiple cloud providers. This makes it difficult to incorporate FaaS code into testing environments that may reside in a different cloud. Service virtualization can be used to virtualize the FaaS component so it can be deployed into a local testing environment with automatically scaling VSEs. This enables teams to simulate the FaaS behavior for an application under test that depends on a FaaS endpoint. 
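As a rough illustration, the pytest sketch below starts a local HTTP stub that stands in for a cloud-hosted pricing function and points the application under test at it via an environment variable. The PRICING_FUNCTION_URL variable, the /price path, and the response shape are assumptions made for the example.

```python
# Sketch: a local HTTP stub standing in for a FaaS pricing endpoint during tests.
# The environment variable, path, and response shape are assumptions.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import pytest

class FakePricingFunction(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        item = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"sku": item.get("sku"), "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

@pytest.fixture
def pricing_function_url(monkeypatch):
    server = HTTPServer(("localhost", 0), FakePricingFunction)  # ephemeral port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://localhost:{server.server_port}/price"
    # Point the application under test at the stub instead of the real FaaS endpoint.
    monkeypatch.setenv("PRICING_FUNCTION_URL", url)
    yield url
    server.shutdown()
```

Any test that requests this fixture exercises the application against the stand-in, with no dependency on the provider's FaaS runtime.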

Support for Independent Lifecycle Management Using Agile/DevOps 

Support for independent lifecycle management is one of the key use cases for service virtualization. (See my previous blog for more on this topic.) Service virtualization helps to optimize continuous delivery. In fact, virtual services are key to supporting the principle of having a dedicated CI/CD pipeline for each microservice.

Because microservices have dependencies on each other, individual CI/CD pipelines may be impeded when a dependent microservice is unavailable or undergoing parallel development. Virtual services help to remove these dependencies between parallel pipelines and can be made available to other services as needed. See figure below.

Note: This principle can be used to support easier canary testing. Cloud-native applications are designed to allow frequent micro releases, for example, of a single component. This allows us to get fast feedback by deploying only the changed component into a canary environment. Service virtualization makes this much easier and more cost-effective by virtualizing the rest of the application ecosystem, enabling teams to focus on the behavior of only the changed component.  

Service virtualization supports a rich set of APIs that can be used to integrate with CI/CD and other tools for automation of deployment, orchestration, updates, and more.

Support for Lightweight Containers 

Virtual services can easily be packaged into lightweight containers. In addition, VSEs can be deployed on demand into ephemeral cloud environments and scale automatically as required in Kubernetes clusters.

Support for Best-of-Breed Languages and Frameworks

Virtual services are built at the protocol level, and therefore are generally able to support applications irrespective of the programming language they are developed in. This allows us to build virtual services for applications created with a broad range of languages and platforms.

For developers, virtual services may also be developed as code, an approach supported by many programming languages.
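A minimal "virtual service as code" sketch might look like the following: the interactions are plain Python data that can be versioned alongside the application, with a small matcher that returns the canned response for a request. The interaction format shown is an illustrative assumption, not a vendor DSL.

```python
# Sketch: a virtual service defined "as code" so it can be versioned with the
# application. The interaction format below is illustrative, not a vendor format.
INTERACTIONS = [
    {"when": {"method": "GET", "path": "/customers/7"},
     "then": {"status": 200, "body": {"id": 7, "tier": "gold"}}},
    {"when": {"method": "DELETE", "path": "/customers/7"},
     "then": {"status": 204, "body": None}},
]

def match(method: str, path: str) -> dict:
    """Return the canned response for a request, or a 404 if nothing matches."""
    for rule in INTERACTIONS:
        if rule["when"] == {"method": method, "path": path}:
            return rule["then"]
    return {"status": 404, "body": None}

# The same dispatch logic can back an HTTP stub or be called in-process by tests.
assert match("GET", "/customers/7")["status"] == 200
assert match("GET", "/customers/999")["status"] == 404
```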

Support for API-Based Interaction and Collaboration 

As discussed before, service virtualization allows us to create synthetic virtual services from service API specifications. It also supports extensive API-based testing (including what’s referred to as “headless” testing) across layers of APIs typically used in cloud-based applications. 

In addition, virtual services don’t just support simple API protocols like REST. Virtual services can support a variety of API types (such as gRPC) across multiple types of application systems. 

The figure below shows a common API architecture that has experience, process, and system APIs. Service virtualization can be used to virtualize not only each of the sub-services that support the API, but also the entire API layer, by virtualizing the "nearest neighbor."

For example, we can run user-experience tests on front-end devices by virtualizing the "omni-channel" API, without having to set up a test environment for all the complicated stacks below it!


Service virtualization can also be integrated with API Gateways to allow transparent access to API back-end services—whether implemented by a real service or a virtual service. See figure below. 

Support for Stateless and Stateful Services

Virtual services can be easily created to be stand-ins for stateless services by simulating their behavior. 

When we need to virtualize a stateful service, we can create virtual services that are backed by extensive test data representing a subset of the stateful service's data. We can do so by integrating with a test data management system. For more on the interaction between virtual services and test data in the context of microservices, please refer to my prior blog on continuous test data management.
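The sketch below illustrates the stateful case: a stub seeded with a small, assumed subset of account data whose state changes as requests are processed, so later calls observe the effect of earlier ones. The data values and operations are illustrative only.

```python
# Sketch: a stateful virtual service seeded with a small, assumed subset of data.
SEED_DATA = {  # e.g., a slice of test data supplied by a TDM tool
    "acct-001": {"balance": 120.00},
    "acct-002": {"balance": 35.50},
}

class StatefulAccountStub:
    def __init__(self, seed: dict):
        self._accounts = {k: dict(v) for k, v in seed.items()}  # copy the seed

    def get_balance(self, account_id: str) -> float:
        return self._accounts[account_id]["balance"]

    def withdraw(self, account_id: str, amount: float) -> float:
        account = self._accounts[account_id]
        if amount > account["balance"]:
            raise ValueError("insufficient funds")   # negative-path behavior
        account["balance"] -= amount                 # state persists across calls
        return account["balance"]

stub = StatefulAccountStub(SEED_DATA)
assert stub.withdraw("acct-001", 20.00) == 100.00
assert stub.get_balance("acct-001") == 100.00  # a later call observes the new state
```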

Support for Automation and Infrastructure-as-Code

Virtual services are highly amenable to automated deployment (for example in CI/CD pipelines), especially when packaged as containers. These services can be defined as part of infrastructure-as-code environment recipes, such as Helm Charts, for automated provisioning and deployment.  
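As a simple illustration of this kind of automation, the Python sketch below provisions a containerized virtual service for the duration of a test run and tears it down afterwards. The image name is a placeholder, and the same pattern could equally be expressed in a Helm chart or other infrastructure-as-code recipe.

```python
# Sketch: provision a containerized virtual service for a test run and tear it
# down afterwards. The image name is a placeholder; the docker CLI calls shown
# (docker run / docker stop) are standard.
import contextlib
import subprocess

@contextlib.contextmanager
def virtual_service(image: str, host_port: int):
    container_id = subprocess.run(
        ["docker", "run", "-d", "--rm", "-p", f"{host_port}:8080", image],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    try:
        yield f"http://localhost:{host_port}"
    finally:
        subprocess.run(["docker", "stop", container_id], check=False)  # always clean up

# Usage from a pipeline step or test session:
# with virtual_service("registry.example.com/orders-virtual-service:1.4", 8080) as url:
#     run_integration_tests(endpoint=url)   # hypothetical test entry point
```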

Support for Governance Models

Virtual services are typically deployed into virtual service environments. This allows us to define governance policies for virtual services that mimic those of the corresponding applications.

Summary and Conclusion 

We have examined how Broadcom Service Virtualization supports a wide variety of use cases for cloud computing—from cloud migration to support for cloud-native computing. 

Our view is that service virtualization and cloud capabilities complement each other. By combining service virtualization and cloud services, teams can achieve a level of truly agile application development and delivery that would simply not be possible with either capability on its own. In fact, teams need service virtualization to support the requirements of cloud-native systems.

How service virtualization supports cloud computing: Key use cases
By Shamim Ahmed | SD Times | November 1, 2022

(First of two parts)

Several weeks ago, a customer of the Broadcom Service Virtualization solution posed the following question: “Now that we’re moving to the cloud, do we still need Service Virtualization?” 

The question struck me as odd. My sense is that the confusion stems from a misperception: because cloud environments can be spun up quickly, people assume test environment bottlenecks go away and that service virtualization capabilities are therefore rendered unnecessary. Obviously, that is not the case at all! Being able to spin up infrastructure quickly does not address what needs to be established inside those environments to make them useful for the desired testing efforts.

In fact, all the use cases for the Service Virtualization solution are just as relevant in the cloud as they are in traditional on-premises-based systems. Following are a few key examples of these use cases: 

  1. Simplification of test environments by simulating dependent end points   
  2. Support for early, shift-left testing of application components in isolation 
  3. Support for performance and reliability engineering 
  4. Support for integration testing with complex back-ends (like mainframes) or third-party systems
  5. Simplification of test data management 
  6. Support for training environments
  7. Support for chaos and negative testing 

All of these use cases are documented in detail here.  

More pertinently, Service Virtualization also addresses many additional use cases that are unique to cloud-based systems.

Fundamentally, Service Virtualization and cloud capabilities complement each other. Combined, Service Virtualization and cloud services deliver true application development and delivery agility that would not be possible with only one of these technologies. 

Using virtual services deployed to an ephemeral test environment in the cloud makes the setup of the environment fast, lightweight, and scalable. (Especially compared to setting up an entire SAP implementation in the ephemeral cloud environment, for example.) 

Let’s examine some key ways to use Service Virtualization for cloud computing. 

Service Virtualization Use Cases for Cloud Migration 

Cloud migration typically involves re-hosting, re-platforming, re-factoring, or re-architecting existing systems. Regardless of the type of migration, Service Virtualization plays a key role in functional, performance, and integration testing of migrated applications—and the use cases are the same as those for on-premises applications. 

However, a few special use cases stand out in Service Virtualization's support for cloud migration:

  1. Early Pre-Migration Performance Verification and Proactive Performance Engineering 

In most cases, migrating applications to the cloud will result in performance changes, typically due to differences in application distribution and network characteristics. For example, various application components may reside in different parts of a hybrid cloud implementation, or performance latencies may be introduced by the use of distributed cloud systems. 

With Service Virtualization, we can easily simulate the performance of all the different application components, including their different response characteristics and latencies. Consequently, we can understand the performance impact, both overall and at the component level, before the migration is initiated.

This allows us to focus on appropriate proactive performance engineering to ensure that performance goals can be met post migration.  
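As a simple illustration of this kind of what-if analysis, the sketch below compares estimated end-to-end latency before and after migration by adding assumed network deltas to each dependency's simulated response time. In practice, those deltas would be dialled into the virtual services' response characteristics; the numbers and the sequential call chain here are illustrative assumptions.

```python
# Sketch: estimate the end-to-end latency impact of migration by adding assumed
# network deltas to each dependency's simulated response time. All numbers and
# the sequential call chain are illustrative assumptions.
CURRENT_LATENCY_MS = {"auth": 5, "inventory": 8, "payments": 12}
EXPECTED_CLOUD_DELTA_MS = {"auth": 20, "inventory": 35, "payments": 60}

def end_to_end_ms(latencies: dict) -> float:
    # Assumes the components are called sequentially; replace with the real call graph.
    return sum(latencies.values())

before = end_to_end_ms(CURRENT_LATENCY_MS)
after = end_to_end_ms({name: ms + EXPECTED_CLOUD_DELTA_MS[name]
                       for name, ms in CURRENT_LATENCY_MS.items()})
print(f"Estimated end-to-end latency: {before} ms today vs {after} ms post-migration")
```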

In addition, Service Virtualization plays a key role in performance testing during and after the migration, which are common, well-understood use cases. 

2. Easier Hybrid Test Environment Management for Testing During Migration 

This is an extension to the common use case of Service Virtualization, which is focused on simplifying testing environments. 

However, during application migration this testing becomes more crucial given the mix of environments involved. Customers typically migrate their applications or workloads to the cloud incrementally, rather than all at once. This means that test environments during migration are much more complicated to set up and manage, because tests may span multiple environments: cloud environments for migrated applications and on-premises environments for pre-migration applications. In some cases, specific application components (such as those residing on mainframes) may not be migrated at all.

Many customers are impeded from early migration testing due to the complexities of setting up test environments across evolving hybrid systems. 

For example, applications that are being migrated to the cloud may have dependencies on other applications in the legacy environment. Testing of such applications requires access to test environments for applications in the legacy environment, which may be difficult to orchestrate using continuous integration/continuous delivery (CI/CD) tools in the cloud. By using Service Virtualization, it is much easier to manage and provision virtual services that represent legacy applications, while having them run in the local cloud testing environment of the migrated application. 

On the other hand, prior to migration, applications running in legacy environments will have dependencies on applications that have been migrated to the cloud. In these cases, teams may not know how to set up access to the applications running in cloud environments. In many cases, there are security challenges in enabling such access. For example, legacy applications may not have been re-wired for the enhanced security protocols that apply to the cloud applications. 

By using Service Virtualization, teams can provision virtual services that represent the migrated applications within the bounds of the legacy environments themselves, or in secure testing sandboxes on the cloud. 

In addition, Service Virtualization plays a key role in parallel migrations, that is, when multiple applications that are dependent on each other are being migrated at the same time. This is an extension of the key principle of agile parallel development and testing, which is a well-known use case for Service Virtualization.

3. Better Support for Application Refactoring and Re-Architecting During Migration 

Organizations employ various application re-factoring techniques as part of their cloud migration. These commonly include re-engineering to leverage microservices architectures and container-based packaging, which are both key approaches for cloud-native applications. 

Regardless of the technique used, all these refactoring efforts involve making changes to existing applications. Given that, these modifications require extensive testing. All the traditional use cases of Service Virtualization apply to these testing efforts. 

For example, the strangler pattern is a popular re-factoring technique that is used to decompose a monolithic application into a microservices architecture that is more scalable and better suited to the cloud. In this scenario, testing approaches need to change dramatically to leverage distributed computing concepts more generally and microservices testing in particular. Service Virtualization is a key to enabling all kinds of microservices testing. We will address in detail how Service Virtualization supports the needs of such cloud-native applications in part two of this series.

4. Alleviate Test Data Management Challenges During Migration 

In all of the above scenarios, the use of Service Virtualization also helps to greatly alleviate test data management (TDM) problems. These problems are complex in themselves, but they are compounded during migrations. In fact, data migration is one of the most complicated and time-consuming processes during cloud migration, which may make it difficult to create and provision test data during the testing process. 

For example, data that was once easy to access across applications in a legacy environment may no longer be available to the migrated applications (or vice versa) due to the partitioning of data storage. Also, the mechanism for synchronizing data across data stores may itself have changed. This often requires cumbersome, laborious additional TDM work to set up test data for integration testing, data that may eventually be thrown away after the migration. With Service Virtualization, you can simulate components and use synthetic test data generation in different parts of the cloud. This is a much faster and easier way to address TDM problems. Teams also often use data virtualization in conjunction with Service Virtualization to address TDM requirements.
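As a small illustration, the sketch below generates a throwaway synthetic data set that could back a virtual service during migration testing, avoiding any dependence on migrated production data. The field names, value ranges, and record count are illustrative assumptions.

```python
# Sketch: generate a small, throwaway synthetic data set to back a virtual
# service during migration testing. Field names, ranges, and counts are
# illustrative assumptions.
import json
import random
import string

def synthetic_customer(customer_id: int) -> dict:
    return {
        "id": customer_id,
        "name": "".join(random.choices(string.ascii_uppercase, k=8)),
        "country": random.choice(["US", "DE", "IN", "BR"]),
        "creditLimit": random.choice([1000, 5000, 10000]),
    }

# One hundred synthetic records are enough for the integration tests in this sketch.
test_data = [synthetic_customer(i) for i in range(1, 101)]
with open("customers_testdata.json", "w") as fh:
    json.dump(test_data, fh, indent=2)
```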

Service Virtualization Use Cases for Hybrid Cloud Computing 

Once applications are migrated to the cloud, all of the classic use cases for Service Virtualization continue to apply. 

In this section, we will discuss some of the key use cases for supporting hybrid cloud computing. 

  1. Support for Hybrid Cloud Application Testing and Test Environments 

Post migration, many enterprises will operate hybrid systems based on a mix of on-premises applications in private clouds (such as those running on mainframes), different public cloud systems (including AWS, Azure, and Google Cloud Platform), and various SaaS provider environments (such as Salesforce). See a simplified view in the figure below.

 

Setting up test environments for these hybrid systems will continue to be a challenge. Establishing environments for integration testing across multiple clouds can be particularly difficult. 

Service Virtualization clearly helps to virtualize these dependencies, but more importantly, it makes virtual services easily available to developers and testers, where and when they need them. 

For example, consider the figure above. Application A is hosted on a private cloud, but is dependent on other applications, including E, which is running in a SaaS environment, and J, which is running in a public cloud. Developers and testers for application A depend on virtual services created for E and J. For hybrid cloud environments, we also need to address where the virtual services will be hosted for different test types, and how they will be orchestrated across the different stages of the CI/CD pipeline.

See figure below.

 

Generally speaking, during the CI process, developers and testers would like to have lightweight synthetic virtual services for E and J, and to have them created and hosted on the same cloud as A. This minimizes the overhead involved in multi-cloud orchestration. 

However, as we move from left to right in the CD lifecycle, we would want the virtual services for E and J not only to become progressively more realistic, but also to be hosted closer to the remote environments where the "real" dependent applications run. These services would then need to be orchestrated by a multi-cloud CI/CD system. Service Virtualization frameworks allow this by packaging virtual services into containers or virtual machines (VMs) appropriate for the environment they need to run in.

Note that it is entirely possible for application teams to choose to host the virtual services for the CD lifecycle on the same host cloud as app A. Service Virtualization frameworks would allow that by mimicking the network latencies that arise from multi-cloud interactions. 

The key point is that Service Virtualization not only simplifies test environment management across clouds, but also provides the flexibility to deploy virtual services where and when they are needed.

2. Support for Agile Test Environments in Cloud Pipelines 

In the introduction, we discussed how Service Virtualization complements cloud capabilities. While cloud services make it faster and easier to provision and set up on-demand environments, the use of Service Virtualization complements that agility. With the solution, teams can quickly deploy useful application assets, such as virtual services, into their environments. 

For example, suppose our application under test has a dependency on a complex application like SAP, for which we would need to set up a test instance. Provisioning a new test environment in the cloud may take only a few seconds, but deploying and configuring a test installation of SAP into that environment would take a long time, impeding the team's ability to test quickly. In addition, teams would need to set up test data for the application, which can be complex and resource-intensive. By comparison, deploying a lightweight virtual service that simulates SAP takes no time at all, thereby minimizing the testing impediments associated with environment setup.

3. Support for Scalable Test Environments in Cloud Pipelines

In cloud environments, virtual service environments (VSEs) can be deployed as containers into Kubernetes clusters. This allows test environments to scale automatically based on testing demand by expanding the number of virtual service instances. This is useful for performance and load testing, cases in which the load level is progressively scaled up. In response, the test environment hosting the virtual services can also automatically scale up to ensure consistent performance response. This can also help the virtual service to mimic the behavior of a real automatically scaling application. 

Sometimes, it is difficult to size a performance testing environment for an application so that it appropriately mimics production. Automatically scaling test environments can make this easier. For more details on this, please refer to my previous blog on Continuous Performance Testing of Microservices, which discusses how to do scaled component testing.

4. Support for Cloud Cost Reduction 

Many studies (such as one done by Cloud4C) have indicated that enterprises often over-provision cloud infrastructure and a significant proportion (about 30%) of cloud spending is wasted. This is due to various reasons, including the ease of environment provisioning, idle resources, oversizing, and lack of oversight. 

While production environments are closely managed and monitored, this problem is seen quite often in test and other pre-production environments, which developers and teams are empowered to spin up to promote agility. Most often, these environments are over-provisioned (sized larger than they need to be), contain data that is no longer useful after a certain time (for example, aged test data, obsolete builds, or test logs), and are not properly cleaned up after use; developers and testers love to move on quickly to the next item on their backlog!

Use of Service Virtualization can help to alleviate some of this waste. As discussed above, replacing real application instances with virtual services helps to reduce the size of the test environment significantly. Compared to complex applications, virtual services are also easier and faster to deploy and undeploy, making it easier for pipeline engineers to automate cleanup in their CI/CD pipeline scripts.  

In many cases, virtual service instances may be shared between multiple applications that are dependent on the same end point. Automatically scaling VSEs can also help to limit the initial size of test environments. 

Finally, the VSEs to which virtual services are deployed can be actively monitored to track usage and ensure they are de-provisioned when no longer needed.

(Continue on to Part 2)

 
