service virtualization Archives - SD Times
https://sdtimes.com/tag/service-virtualization/

How service virtualization supports cloud computing: Key use cases
https://sdtimes.com/test/how-service-virtualization-supports-cloud-computing-key-use-cases/
Tue, 01 Nov 2022

The post How service virtualization supports cloud computing: Key use cases appeared first on SD Times.

(First of two parts)

Several weeks ago, a customer of the Broadcom Service Virtualization solution posed the following question: “Now that we’re moving to the cloud, do we still need Service Virtualization?” 

The question struck me as odd. The confusion probably stems from a misperception: because cloud environments can be spun up quickly, people assume test environment bottlenecks disappear and, with them, the need for service virtualization. That is not the case at all. Being able to spin up infrastructure quickly does not answer the question of what must be established inside those environments to make them useful for the desired testing efforts. 

In fact, all the use cases for the Service Virtualization solution are just as relevant in the cloud as they are in traditional on-premises-based systems. Following are a few key examples of these use cases: 

  1. Simplification of test environments by simulating dependent end points   
  2. Support for early, shift-left testing of application components in isolation 
  3. Support for performance and reliability engineering 
  4. Support for integration testing with complex back-ends (like mainframes) or third-party systems
  5. Simplification of test data management 
  6. Support for training environments
  7. Support for chaos and negative testing 
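The first use case, simulating dependent endpoints, can be made concrete with a minimal sketch. The snippet below is a toy HTTP stub built on Python's standard library; the path and payload are invented for illustration, and real service virtualization tools layer recording, protocol support, and management on top of this basic idea.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned responses keyed by request path -- stands in for a dependent
# service that may be unavailable in the test environment.
CANNED = {
    "/accounts/42": {"id": 42, "status": "ACTIVE"},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

def start_stub(port=0):
    """Start the stub on an ephemeral port; return (server, port)."""
    server = HTTPServer(("127.0.0.1", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

if __name__ == "__main__":
    server, port = start_stub()
    with urlopen(f"http://127.0.0.1:{port}/accounts/42") as resp:
        print(json.loads(resp.read()))  # {'id': 42, 'status': 'ACTIVE'}
    server.shutdown()
```

The application under test is simply pointed at the stub's endpoint instead of the real dependency.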

All of these use cases are documented in detail here.  

More pertinent still, Service Virtualization addresses many additional use cases that are unique to cloud-based systems. 

Fundamentally, Service Virtualization and cloud capabilities complement each other. Combined, Service Virtualization and cloud services deliver true application development and delivery agility that would not be possible with only one of these technologies. 

Using virtual services deployed to an ephemeral test environment in the cloud makes the setup of the environment fast, lightweight, and scalable. (Especially compared to setting up an entire SAP implementation in the ephemeral cloud environment, for example.) 

Let’s examine some key ways to use Service Virtualization for cloud computing. 

Service Virtualization Use Cases for Cloud Migration 

Cloud migration typically involves re-hosting, re-platforming, re-factoring, or re-architecting existing systems. Regardless of the type of migration, Service Virtualization plays a key role in functional, performance, and integration testing of migrated applications—and the use cases are the same as those for on-premises applications. 

However, there are a couple of special use cases that stand out for Service Virtualization’s support for cloud migration: 

  1. Early Pre-Migration Performance Verification and Proactive Performance Engineering 

In most cases, migrating applications to the cloud will result in performance changes, typically due to differences in application distribution and network characteristics. For example, various application components may reside in different parts of a hybrid cloud implementation, or performance latencies may be introduced by the use of distributed cloud systems. 

With Service Virtualization, we can easily simulate the performance of all the different application components, including their different response characteristics and latencies. Consequently, we can understand the performance impact, including both overall and at the component level, before the migration is initiated.  

This allows us to focus on appropriate proactive performance engineering to ensure that performance goals can be met post migration.  
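The pre-migration estimate described above comes down to simple arithmetic once a virtual service is configured with per-component response times and hop latencies. This sketch models a serial call chain; all figures are invented for illustration.

```python
def end_to_end_latency(component_ms, network_ms):
    """Estimated latency for a serial call chain: time spent inside
    each component plus the network hop in front of each component."""
    return sum(component_ms) + sum(network_ms)

# Pre-migration: all components in one data center, ~1 ms hops.
pre = end_to_end_latency([20, 35, 50], [1, 1, 1])
# Post-migration: same components, but two hops now cross cloud regions.
post = end_to_end_latency([20, 35, 50], [1, 25, 25])
print(pre, post)  # 108 156
```

Configuring virtual services with the post-migration hop latencies lets a performance test observe this impact before any workload actually moves.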

In addition, Service Virtualization plays a key role in performance testing during and after the migration, which are common, well-understood use cases. 

2. Easier Hybrid Test Environment Management for Testing During Migration 

This is an extension to the common use case of Service Virtualization, which is focused on simplifying testing environments. 

However, during application migration this testing becomes more crucial given the mix of environments involved. Customers typically migrate their applications or workloads to the cloud incrementally rather than all at once, which makes test environments during migration much more complicated to set up and manage: tests may span both cloud environments (for migrated applications) and on-premises environments (for pre-migration applications). In some cases, specific application components (such as those residing on mainframes) may not be migrated at all. 

Many customers are impeded from early migration testing due to the complexities of setting up test environments across evolving hybrid systems. 

For example, applications that are being migrated to the cloud may have dependencies on other applications in the legacy environment. Testing of such applications requires access to test environments for applications in the legacy environment, which may be difficult to orchestrate using continuous integration/continuous delivery (CI/CD) tools in the cloud. By using Service Virtualization, it is much easier to manage and provision virtual services that represent legacy applications, while having them run in the local cloud testing environment of the migrated application. 

On the other hand, prior to migration, applications running in legacy environments will have dependencies on applications that have been migrated to the cloud. In these cases, teams may not know how to set up access to the applications running in cloud environments. In many cases, there are security challenges in enabling such access. For example, legacy applications may not have been re-wired for the enhanced security protocols that apply to the cloud applications. 

By using Service Virtualization, teams can provision virtual services that represent the migrated applications within the bounds of the legacy environments themselves, or in secure testing sandboxes on the cloud. 

In addition, Service Virtualization plays a key role in parallel migrations, that is, when multiple applications that are dependent on each other are being migrated at the same time. This is an extension of the key principle of agile parallel development and testing, which is a well-known use case for Service Virtualization.

3. Better Support for Application Refactoring and Re-Architecting During Migration 

Organizations employ various application re-factoring techniques as part of their cloud migration. These commonly include re-engineering to leverage microservices architectures and container-based packaging, which are both key approaches for cloud-native applications. 

Regardless of the technique used, all these refactoring efforts involve making changes to existing applications. Given that, these modifications require extensive testing. All the traditional use cases of Service Virtualization apply to these testing efforts. 

For example, the strangler pattern is a popular re-factoring technique that is used to decompose a monolithic application into a microservices architecture that is more scalable and better suited to the cloud. In this scenario, testing approaches need to change dramatically to address distributed computing in general and microservices testing in particular. Service Virtualization is key to enabling all kinds of microservices testing. We will address in detail how Service Virtualization supports the needs of such cloud-native applications in section IV below.

4. Alleviate Test Data Management Challenges During Migration 

In all of the above scenarios, the use of Service Virtualization also helps to greatly alleviate test data management (TDM) problems. These problems are complex in themselves, but they are compounded during migrations. In fact, data migration is one of the most complicated and time-consuming processes during cloud migration, which may make it difficult to create and provision test data during the testing process. 

For example, data that was once easy to access across applications in a legacy environment may no longer be available to the migrated applications (or vice-versa) due to the partitioning of data storage. Also, the mechanism for synchronizing data across data stores may itself have changed. This often requires additional cumbersome and laborious TDM work to set up test data for integration testing—data that may eventually be thrown away post migration. With Service Virtualization, you can simulate components and use synthetic test data generation in different parts of the cloud. This is a much faster and easier way to address TDM problems. Teams also often use data virtualization in conjunction with Service Virtualization to address TDM requirements.  
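Synthetic test data generation of the kind mentioned here can be sketched in a few lines. The record shape, field names, and value ranges below are invented for illustration; the point is that the data is deterministic (seeded) and privacy-safe, so no production data needs to be migrated just to test.

```python
import random

def synthetic_customers(n, seed=0):
    """Generate deterministic, privacy-safe records to back a virtual
    service's responses -- no production data is copied or migrated."""
    rng = random.Random(seed)
    statuses = ["ACTIVE", "SUSPENDED", "CLOSED"]
    return [
        {
            "id": 1000 + i,
            "name": f"customer-{i:04d}",
            "balance": round(rng.uniform(0, 10_000), 2),
            "status": rng.choice(statuses),
        }
        for i in range(n)
    ]

if __name__ == "__main__":
    for row in synthetic_customers(3):
        print(row)
```

Because the generator is seeded, every test run sees the same data, which keeps integration tests repeatable across environments.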

Service Virtualization Use Cases for Hybrid Cloud Computing 

Once applications are migrated to the cloud, all of the classic use cases for Service Virtualization continue to apply. 

In this section, we will discuss some of the key use cases for supporting hybrid cloud computing. 

  1. Support for Hybrid Cloud Application Testing and Test Environments 

Post migration, many enterprises will operate hybrid systems based on a mix of on-premises applications in private clouds (such as those running on mainframes), different public cloud systems (including AWS, Azure, and Google Cloud Platform), and on various SaaS provider environments (such as Salesforce). See a simplified view in the figure below. 

 

Setting up test environments for these hybrid systems will continue to be a challenge. Establishing environments for integration testing across multiple clouds can be particularly difficult. 

Service Virtualization clearly helps to virtualize these dependencies, but more importantly, it makes virtual services easily available to developers and testers, where and when they need them. 

For example, consider the figure above. Application A is hosted on a private cloud, but dependent on other applications, including E, which is running in a SaaS environment, and J, which is running in a public cloud. Developers and testers for application A depend on virtual services created for E and J. For hybrid cloud environments, we also need to address where the virtual service will be hosted for different test types, and how they will be orchestrated across the different stages of the CI/CD pipeline. 

See figure below.

 

Generally speaking, during the CI process, developers and testers would like to have lightweight synthetic virtual services for E and J, and to have them created and hosted on the same cloud as A. This minimizes the overhead involved in multi-cloud orchestration. 

However, as we move from left to right in the CD lifecycle, we would not only want the virtual services for E and J to become progressively realistic, but also hosted closer to the remote environments where the “real” dependent applications are hosted. This would need to be orchestrated across a multi-cloud CI/CD system. Service Virtualization frameworks would allow this by packaging virtual services into containers or virtual machines (VMs) that are appropriate for the environment they need to run in. 

Note that it is entirely possible for application teams to choose to host the virtual services for the CD lifecycle on the same host cloud as app A. Service Virtualization frameworks would allow that by mimicking the network latencies that arise from multi-cloud interactions. 

The key point is to emphasize that the use of Service Virtualization not only simplifies test environment management across clouds, but also provides the flexibility to deploy the virtual service where and when needed. 

2. Support for Agile Test Environments in Cloud Pipelines 

In the introduction, we discussed how Service Virtualization complements cloud capabilities. While cloud services make it faster and easier to provision and set up on-demand environments, the use of Service Virtualization complements that agility. With the solution, teams can quickly deploy useful application assets, such as virtual services, into their environments. 

For example, suppose our application under test has a dependency on a complex application like SAP, for which we need to set up a test instance of the app. Provisioning a new test environment in the cloud may take only a few seconds, but deploying and configuring a test installation of a complex application like SAP into that environment would take a long time, impeding the team’s ability to test quickly. In addition, teams would need to set up test data for the application, which can be complex and resource intensive. By comparison, deploying a lightweight virtual service that simulates a complex app like SAP takes no time at all, thereby minimizing the testing impediments associated with environment setup.   

3. Support for Scalable Test Environments in Cloud Pipelines

In cloud environments, virtual service environments (VSEs) can be deployed as containers into Kubernetes clusters. This allows test environments to scale automatically based on testing demand by expanding the number of virtual service instances. This is useful for performance and load testing, cases in which the load level is progressively scaled up. In response, the test environment hosting the virtual services can also automatically scale up to ensure consistent performance response. This can also help the virtual service to mimic the behavior of a real automatically scaling application. 
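The automatic scaling described here is, in Kubernetes, driven by a simple proportional rule: the Horizontal Pod Autoscaler scales the replica count by the ratio of observed load to target load. A minimal sketch of that calculation (the pod counts and CPU figures are illustrative):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """The proportional scaling rule used by Kubernetes' Horizontal Pod
    Autoscaler: desired = ceil(current * observed_load / target_load)."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# A VSE running 2 pods at 90% average CPU against a 60% target scales to 3.
print(desired_replicas(2, 90, 60))  # 3
```

As a load test ramps up traffic against the virtual services, this rule adds VSE pods to keep response times consistent, and removes them again when the load subsides.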

Sometimes, it is difficult to size a performance testing environment for an application so that it appropriately mimics production. Automatically scaling test environments can make this easier. For more details on this, please refer to my previous blog on Continuous Performance Testing of Microservices, which discusses how to do scaled component testing.

4. Support for Cloud Cost Reduction 

Many studies (such as one done by Cloud4C) have indicated that enterprises often over-provision cloud infrastructure and a significant proportion (about 30%) of cloud spending is wasted. This is due to various reasons, including the ease of environment provisioning, idle resources, oversizing, and lack of oversight. 

While production environments are more closely managed and monitored, this problem is seen quite often in test and other pre-production environments, which developers and teams are empowered to spin up to promote agility. Most often, these environments are over-provisioned (sized larger than they need to be), contain data that is no longer useful after a certain time (such as aged test data, obsolete builds, or old test logs), and are not properly cleaned up after use; developers and testers love to move quickly on to the next item on their backlog!

Use of Service Virtualization can help to alleviate some of this waste. As discussed above, replacing real application instances with virtual services helps to reduce the size of the test environment significantly. Compared to complex applications, virtual services are also easier and faster to deploy and undeploy, making it easier for pipeline engineers to automate cleanup in their CI/CD pipeline scripts.  
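The cleanup automation mentioned here usually amounts to a guaranteed deploy/undeploy bracket around the test stage. The sketch below uses an in-memory stand-in for a VSE management client, since real products each expose their own deploy/undeploy APIs; the client class and service names are invented for illustration.

```python
class FakeVseClient:
    """In-memory stand-in for a VSE management API, for illustration."""
    def __init__(self):
        self.deployed = set()
    def deploy(self, name):
        self.deployed.add(name)
    def undeploy(self, name):
        self.deployed.discard(name)

def run_test_stage(vse, services, test_fn):
    """Deploy virtual services, run the tests, and always clean up --
    even when the tests fail -- so no idle services accrue cloud cost."""
    for s in services:
        vse.deploy(s)
    try:
        return test_fn()
    finally:
        for s in services:
            vse.undeploy(s)

vse = FakeVseClient()
run_test_stage(vse, ["payments-mock", "inventory-mock"], lambda: "ok")
print(vse.deployed)  # set() -- everything undeployed
```

The try/finally structure is the important part: pipeline scripts that only undeploy on the happy path are exactly how orphaned test environments accumulate.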

In many cases, virtual service instances may be shared between multiple applications that are dependent on the same end point. Automatically scaling VSEs can also help to limit the initial size of test environments. 

Finally, the VSEs to which virtual services are deployed can be actively monitored to track usage and to trigger de-provisioning when they sit idle. 

(Continue on to Part 2)

 

The next wave in service virtualization: Intelligent mocks!
https://sdtimes.com/test/the-next-wave-in-service-virtualization-intelligent-mocks/
Wed, 27 Jul 2022

The post The next wave in service virtualization: Intelligent mocks! appeared first on SD Times.

Did you know that service virtualization has been around for about two decades? That’s right. Even before the cloud was considered mainstream, we had service virtualization solutions to help in the development and testing of software applications. 

As a refresher, service virtualization is a technique that simulates the behavior of various components in software applications. Third-party services, APIs, databases, mainframes, and other components that communicate using common standard messaging protocols can all be virtualized. Service virtualization has been a great benefit in testing because it acts as a stunt double for back-end services that need to be tested against but are not always easy to access. 

Mocks have also been around for decades and perform a similar function: creating a replica or imitation of an object for testing. Mocking is primarily used in unit testing, where there are dependencies on other complex objects. To isolate the behavior of the object under test, a mock is created to simulate the behavior of the real dependency. Object mocking is supported by tools and frameworks like Mockito. With the right objects mocked, the unit test can focus on what must be tested, rather than on how to get various objects into the right state just to be able to perform particular test scenarios. The following diagram illustrates how a mock or virtual service steps in, in place of the real object.
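The mocking idea can be sketched in a few lines. The snippet below uses Python's standard-library unittest.mock (Mockito plays the same role in Java); the fare service, route, and tax rate are invented for illustration.

```python
from unittest.mock import Mock

def quote_total(fare_service, route, passengers):
    """Code under test: depends on an external fare service."""
    base = fare_service.base_fare(route)
    return round(base * passengers * 1.08, 2)  # 8% tax, illustrative

# In the unit test, a mock stands in for the real fare service ...
fake = Mock()
fake.base_fare.return_value = 100.0

print(quote_total(fake, "SEA-ANC", 2))  # 216.0
# ... and we can also verify how the dependency was used.
fake.base_fare.assert_called_once_with("SEA-ANC")
```

The test never touches a real fare service, yet it both drives the code under test and checks that the dependency was called correctly.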

Both mocks and virtual services play a valuable role in software testing. Traditional mocks, however, lack the ability to support robust integration testing. For example, testing the full stack of an application’s behavior based on the varied responses of a dependent service is generally not possible with mocks. What if I want to test how my application handles a particular HTTP response returned by an API? But it’s not only about having the dependency available: with service virtualization tools, the entire behavior of the dependency is under your control. For example:

  • You may use service virtualization to return responses that contain test data that would be very hard or impossible to load into the real service. 
  • You may use service virtualization to return responses that represent various failure scenarios that may be very hard to reproduce with the real service. 
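The second bullet, scripted failure scenarios, can be sketched as a response sequence that the virtual service replays. The status codes and script below are invented for illustration; the point is that failures arrive on a schedule you control, which is nearly impossible to arrange against a real service.

```python
import itertools

def failure_sequence(*scenarios):
    """Cycle through scripted responses -- e.g. make every third call
    fail -- behavior that is hard to arrange on a real service."""
    return itertools.cycle(scenarios)

# Scripted behavior: two healthy responses, then an HTTP 503.
script = failure_sequence(
    (200, {"status": "ok"}),
    (200, {"status": "ok"}),
    (503, {"error": "service unavailable"}),
)
print([next(script)[0] for _ in range(6)])  # [200, 200, 503, 200, 200, 503]
```

Wiring such a sequence into a stub endpoint lets a test verify retry logic, circuit breakers, and error handling deterministically.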

Sometimes companies find themselves in between mocks and service virtualization. You might say they are between a mock and a hard place! They like the versatility of building mocks on the fly, but they know mocks have limitations. Sometimes developers live with those limitations to do some level of testing and still get code out the door.

When is Traditional Service Virtualization Too Much?

Service virtualization, for all its benefits, may also be a big hammer when a smaller one is needed for some types of testing scenarios. Here are some of the challenges with legacy service virtualization tools:

  • High Total Cost of Ownership – Traditional service virtualization solutions are expensive
  • Expert Setup Often Required – Most companies have a team that handles the setup and administration of service virtualization
  • Often On-Prem Only – Most service virtualization tools are not cloud-based

With all its versatility, service virtualization, for some companies, may be too costly, too complex, and too time consuming for the benefits it provides. Sometimes developers are looking for a lightweight and faster way to implement virtual services. They want the benefits of virtual services without the headaches and delays of traditional service virtualization solutions.

What are Intelligent Mocks?

Basically, intelligent mocks provide the best of both worlds: the agile capabilities of mocks combined with the robustness and depth of service virtualization. A developer (or tester) should be able to configure intelligent mocks for their tests by themselves, without waiting for another group within the company to handle the request. This is possible when the solution is cloud-based and agile.

Intelligent mocks can be part of a testing platform where services are delivered to the developer or tester for any testing scenario (e.g., unit testing, UI testing, performance testing, chaos testing). Because the solution is cloud-based, it is cheaper to deploy and maintain. Ideally, intelligent mock services would comprise:

  • Mock Services

A lightweight, HTTP-only virtual service that can scale horizontally and vertically and provide rapid time to value from specifications such as Swagger and WSDL, or from recorded request/response (R/R) pairs. It runs in the cloud and doesn’t require on-premises deployments that are difficult to maintain.

  • Asset Catalog

A central catalog that stores all service virtualization artifacts for collaboration. It fosters sharing and reuse of artifacts between developers and testers so that complex rework of assets is minimized.

  • Virtual Service Environment (VSE)

A containerized, on-demand VSE for deploying multiprotocol, advanced virtual services without any dependency on legacy service virtualization platforms. A dedicated VSE can be spun up and down during test execution. All tests utilizing virtual services need to be hosted in a virtual service environment. 

  • (Test) Data Driven Virtual Services

Comprehensive test data generated on the fly for each mock service, across a variety of scenarios and data types, with the ability to build test data quickly and compliantly to avoid common issues with privacy regulations.
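The mock-service building block above mentions creating services from recorded request/response (R/R) pairs. At its core, that is an index-and-replay mechanism, sketched here; the recorded pairs and paths are invented for illustration, and real tools add matching rules, wildcards, and protocol handling on top.

```python
def load_rr_pairs(pairs):
    """Index recorded request/response pairs by (method, path).
    In practice these are captured from live traffic or built from a spec."""
    return {(p["method"], p["path"]): p["response"] for p in pairs}

def serve(index, method, path):
    """Replay the recorded response, or 404 for unrecorded traffic."""
    return index.get((method, path), {"status": 404, "body": "not recorded"})

recorded = [
    {"method": "GET", "path": "/flights/123",
     "response": {"status": 200, "body": {"flight": 123, "gate": "B7"}}},
]
index = load_rr_pairs(recorded)
print(serve(index, "GET", "/flights/123")["status"])  # 200
print(serve(index, "GET", "/flights/999")["status"])  # 404
```

Replaying only what was recorded is also exactly the static-data limitation discussed later in this article, which is where on-the-fly test data generation comes in.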

Perhaps the key to lightweight service virtualization is this last point regarding test data. Test data is an essential aspect of every test execution: each test is driven by data, and the more comprehensive the data, the more detailed and higher quality the test scenarios will be. But getting data is not always easy, and it is a time-consuming process. The data may be hard-coded directly into the test, looked up from a spreadsheet, or even pulled from a database at runtime.

Note: When a traditional mock service is created, you can view the transactions containing the test data that was part of the specification file. However, the test data in the mock service is static: if the mock service contains five transactions, then during execution the responses will be based on those five transactions only, which is acceptable for basic testing. But the goal of mock services is to stand in for the real service, and testing moves from basic to more comprehensive as teams start negative testing or contract testing. That warrants more test data.

Shift Your Service Virtualization Left

The benefits of service virtualization and mocks are clear: without them, application development would be much slower and more costly. The question for many development teams is finding the right testing tool for the job. Traditional mocks still work, but they are limited in their flexibility for handling variable test responses, the kind of flexibility needed for techniques such as chaos testing. 

Traditional service virtualization, on the other hand, provides a high-horsepower solution but is sometimes more than is needed for various testing scenarios. The next advance in continuous testing brings the benefits of service virtualization and mocks together in the right proportions. Intelligent mocks will speed development by placing the power of continuous testing into the hands of more developers and testers.

How service virtualization helped Alaska Airlines straighten up and fly right
https://sdtimes.com/test/how-service-virtualization-helped-alaska-airlines-straighten-up-and-fly-right/
Fri, 13 Nov 2020

The post How service virtualization helped Alaska Airlines straighten up and fly right appeared first on SD Times.

Airlines are all about safety; that’s their number one concern. To help predict that planes will take off and arrive safely, they run different scenarios based on variances in the weight of the plane and the fuel it will consume.

But Ryan Papineau, a senior software engineer at Alaska Airlines, said because the data coming in for every flight differs from day to day, there was no easy way to consistently determine whether it was correct.  

RELATED CONTENT: The top 3 pain points of test automation, and how to overcome them

Their problem was they needed to control the data, because they couldn’t do performance tests against a moving target. Papineau explained: “While I started doing the control data to do the performance testing, we were like, ‘Wait a second, we could use this control data to do this other piece.’ And so that’s when we were kind of looking at some tools out there. We wanted service virtualization and test data management. The weight and balance project was too critical and was untestable. We needed to resort to tools outside of our comfort zone for capabilities that we didn’t have.”

One of the services Alaska uses to get the data necessary for testing is called Sabre, a global distribution system for scheduling aircraft to go from Point A to Point B, he said. It’s essentially a mainframe, he added, and Alaska has a wrapper service around this mainframe that has every single flight and which passengers are on it. Alaska uses its own automation to build the data it needs, but struggled to do it at scale, because it could not simulate the entire airline. “We then thought we needed a whole other instance of [production] to be able to simulate that data, and then have the automation wherewithal to support that.”

The key to solving the problem was to virtualize that service, which is known as the passenger service. The passenger service tracks the state of a passenger: from booked, to checked in, and finally boarded the aircraft. It also keeps track of the seat number and other data attributes, such as whether the passenger is flying with a child or infant, or has a pet in the cabin or in the plane’s cargo hold.

Papineau said after researching solutions, the airline chose Parasoft Virtualize to create a virtualized service of the entire passenger interface. “What we did was we recorded a day’s worth of data. With this we had the aggregate numbers that came out of production. From there we used SQL to derive those aggregate numbers into an equivalent passenger seat map that captured the weight and balance state of the flight. It’s not pixel-perfect at the per-seat level, but it is at the aggregate zoning level for the weight and balance state. Using Parasoft Virtualize we then mapped those states back to a Virtual Service that returns repeatable passenger data day over day.”
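The SQL aggregation step Papineau describes, deriving per-zone aggregates from recorded passenger data, can be sketched with an in-memory database. The schema, flight number, zones, and weights below are invented for illustration; Alaska's real recorded data and zoning rules are of course far more detailed.

```python
import sqlite3

# Illustrative schema: one recorded day of passenger data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE passengers (flight TEXT, zone TEXT, weight_lb REAL)")
conn.executemany(
    "INSERT INTO passengers VALUES (?, ?, ?)",
    [("AS101", "fwd", 180), ("AS101", "fwd", 150),
     ("AS101", "aft", 200), ("AS101", "aft", 170), ("AS101", "aft", 160)],
)

# Per-zone aggregates: enough to reproduce the weight-and-balance state
# of the flight without per-seat fidelity.
rows = conn.execute(
    """SELECT zone, COUNT(*), SUM(weight_lb)
       FROM passengers WHERE flight = 'AS101'
       GROUP BY zone ORDER BY zone"""
).fetchall()
print(rows)  # [('aft', 3, 530.0), ('fwd', 2, 330.0)]
```

A virtual service keyed on these aggregate states can then return the same repeatable passenger data day over day, which is what made the performance tests deterministic.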

Ryan Papineau will be discussing how Alaska Airlines succeeded with virtualization in greater detail at the Automated Software Testing and Quality Summit Tuesday, Nov. 17.  SD Times is the media sponsor of the event. 

Test environment management an obstacle to continuous testing, report finds
https://sdtimes.com/test/test-environment-management-an-obstacle-to-continuous-testing-report-finds/
Thu, 12 Dec 2019

The post Test environment management an obstacle to continuous testing, report finds appeared first on SD Times.

Companies may be shifting testing left, but lack of access to internal and external services can delay testing and cause unnecessary bottlenecks.

According to the Sogeti 2019 Continuous Testing report, test environments are one of the biggest bottlenecks to achieving continuous testing. The survey results reveal the inordinate amount of time that organizations spend on test environment management as well as some of the key challenges in this area.

Time came up as a key issue when respondents were asked about test environment-related challenges that impeded efforts to improve the software development lifecycle (SDLC). Participants gave the highest weighting to “wait times and cost for environment provisioning” (36% of respondents) and “complexity of needed applications” (36%), followed by “inability to identify defects early in the testing process” (33%).

RELATED CONTENT:
Don’t become a statistic: How to save your failing software development initiatives 
Facing the challenges of continuous testing

This is where service virtualization can come in.

Service virtualization (SV) simulates or “mocks” unavailable systems by emulating their dynamic behavior, data, and performance. This means that teams can work in parallel for faster delivery. 

Mock services and service virtualization are critical when the application or module you are developing and testing depends on other services or systems, whether external or internal. Such dependencies can cause major testing bottlenecks: they may not be available when you need them, or they may come with constraints such as cost or limited control over the data they return.

Mock services remove these dependencies and also let you control their behavior by simulating the service at an endpoint you provision, which moves your testing to the next level. You can read this blog post on the benefits and concepts behind mock services and service virtualization in general.
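"Simulating the service at an endpoint you provision" works because the application's client code takes its endpoint (and, here, its transport) as configuration rather than hard-coding it. The client class, URL, and response shape below are invented for illustration.

```python
class InventoryClient:
    """Client whose endpoint is injected, so tests can point it at a
    mock service instead of the real dependency (names illustrative)."""
    def __init__(self, base_url, fetch):
        self.base_url = base_url
        self._fetch = fetch  # transport function, injectable for tests

    def stock_level(self, sku):
        return self._fetch(f"{self.base_url}/stock/{sku}")

# Production wires in a real HTTP fetch; the test wires in a canned one.
def fake_fetch(url):
    return {"url": url, "qty": 7}

client = InventoryClient("http://mock-inventory.test", fake_fetch)
print(client.stock_level("SKU-1")["qty"])  # 7
```

Swapping the base URL between the real and mock service, with no change to the code under test, is what lets teams work in parallel when the real dependency is unavailable.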

The Sogeti report continues, “We have also seen a few positive developments in terms of the adoption of virtualization, containerization, and tool-based automation. These trends are likely to strengthen in the future as organizations realize that virtualization and containerization are absolutely necessary to meet the demands of Agile and DevOps on a limited budget. The next two to three years are also likely to see organizations opting for increased levels of automation, particularly for solutions that automatically tell them about the impact that changes in functional requirements will have on test cases.”

Service virtualization shifts left 
As continuous testing becomes the norm for successful application delivery, service virtualization is shifting left and becoming more available to developers who want to test earlier in the testing cycle. 

Rather than waiting for the end of the testing cycle and relying on service virtualization as a pre-production-only tool, SV has become democratized, with developers creating mock environments for smaller unit tests throughout the SDLC.

Tools like WireMock and CodeSV can help developers to create mock services so they are not reliant on enterprise service virtualization support, and users can even integrate enterprise service virtualization capabilities with BlazeMeter, so that developers across all teams can create virtual services to test faster and more effectively. 
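In the shift-left spirit described above, here is a hedged sketch of how a developer might mock a dependency inside a unit test, using plain Python `unittest.mock` rather than WireMock or CodeSV; `fetch_stock` and the inventory endpoint are invented for illustration:

```python
from unittest import mock

# Hypothetical code under test: it depends on an inventory service that
# may still be under development or rate-limited in shared environments.
def fetch_stock(http_get, sku):
    """Return units on hand for a SKU via an injected HTTP GET callable."""
    status, body = http_get(f"/inventory/{sku}")
    if status != 200:
        raise RuntimeError(f"inventory service returned {status}")
    return body["unitsOnHand"]

def test_happy_path():
    # The mock stands in for the dependency, so this runs with no network.
    fake_get = mock.Mock(return_value=(200, {"sku": "A-100", "unitsOnHand": 7}))
    assert fetch_stock(fake_get, "A-100") == 7
    fake_get.assert_called_once_with("/inventory/A-100")

def test_dependency_down():
    # Simulating an outage is trivial once the dependency is mocked.
    fake_get = mock.Mock(return_value=(503, None))
    try:
        fetch_stock(fake_get, "A-100")
        raise AssertionError("expected RuntimeError")
    except RuntimeError:
        pass
```

Because the dependency is injected, the same code can later be pointed at a full virtual service without changing the tests' structure.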

Sign up for our webinar here to learn more about service virtualization and how it can help you test faster, and with fewer bottlenecks, in 2020.

 

Content provided by SD Times and Broadcom.

The post Test environment management an obstacle to continuous testing, report finds appeared first on SD Times.

SD Times 2017 Testing Showcase https://sdtimes.com/agile/sd-times-2017-testing-showcase/ https://sdtimes.com/agile/sd-times-2017-testing-showcase/#comments Mon, 02 Oct 2017 20:12:29 +0000 https://sdtimes.com/?p=27267 Continuous testing. Automated testing. Artificial testing. Service virtualization. Test-driven development. These are among the many technologies available to organizations looking to bring their testing up to the speed of software development. Ensuring quality can no longer be the drag on software deployment, if businesses want to stay competitive and be able to take advantage of … continue reading

The post SD Times 2017 Testing Showcase appeared first on SD Times.

Continuous testing. Automated testing. Artificial testing. Service virtualization. Test-driven development.

These are among the many technologies available to organizations looking to bring their testing up to the speed of software development. Ensuring quality can no longer be the drag on software deployment, if businesses want to stay competitive and be able to take advantage of changes in their markets.

How do organizations decide which path to take? Are they trying to test during sprints? Are they struggling to ensure the services their applications rely on won't break them? Are they convinced that manual testing is the only way to be certain the software meets their level of quality? How much risk are they willing to accept from deploying apps that aren't 100 percent covered by tests?

The SD Times Testing Showcase has been put together to give our readers a look at the many offerings on the market to help them address their testing challenges and align their testing with the rhythms of their software development life cycle.

So no matter which direction you’re heading with your testing – standing pat is not an option – we’re sure you’ll find something from the following providers to help you to your future of testing.

Mobile Labs: Manage all mobile assets from a single lab
Panaya: Expanding the reach of automation
With Appvance it’s AI all the time
Synopsys: Building application security in from start to finish
Parasoft empowers software testers with orchestrated virtualized testing environment
With TechExcel’s TestDev, it’s game on
Tricentis enables Continuous Testing
HPE tools help weather seismic shifts in enterprise testing

Parasoft empowers software testers with orchestrated virtualized testing environment https://sdtimes.com/agile/parasoft-empowers-software-testers-orchestrated-virtualized-testing-environment/ https://sdtimes.com/agile/parasoft-empowers-software-testers-orchestrated-virtualized-testing-environment/#comments Sun, 01 Oct 2017 20:11:31 +0000 https://sdtimes.com/?p=27278 Efficiently and effectively testing code in an Agile environment has proven to be a challenge that most software developers are woefully ill-equipped to do. After all, Agile is all about constant iterations, and a rapid deployment cycle that leverages the slipstream ideology. With that in mind, it becomes easy to understand why the QA process … continue reading

The post Parasoft empowers software testers with orchestrated virtualized testing environment appeared first on SD Times.

Efficiently and effectively testing code in an Agile environment has proven to be a challenge that most software developers are woefully ill-equipped to meet. After all, Agile is all about constant iterations and a rapid deployment cycle that leverages the slipstream ideology. With that in mind, it becomes easy to understand why the QA process can be somewhat daunting in the world of Agile.

Marc Brown, CMO at Parasoft, said “While Agile does create a challenge for software testing iterations, the simple fact of the matter is that there are testing procedures and technologies that overcome those challenges.” One such technology is virtualization, where a virtual representation of the physical environment can be manifested and used to test software, quickly and repetitively.

Brown added, “The same issues that impact agile are also prevalent in the world of IoT, where QA testing has become a must to prevent unsecure products from reaching the market. The same can be said for mission-critical and safety-critical products as well.”

Therein lies the real challenge: How can today’s software QA practitioners effectively insert themselves into the development process and prevent buggy and poorly secured code from making it into a shipping product?

Brown said, “QA testers have to start viewing themselves as part of the process, and offer demonstrable value to their organizations by establishing themselves as a critical part of the development team.” Once enterprises realize that efficient testing can help them to avoid major issues, such as the breach that impacted Equifax or the spate of ransomware impacting operations, it becomes clear that QA is of the utmost importance.

“The answer for properly manifesting a QA test environment means that orchestration as well as virtualization must be used and testers should be creating virtual labs to create the appropriate dependencies and test environments that mimic production systems,” Brown said.

The power of service virtualization
To better understand how an application or service acts in the real world, a true analog must be created that can mimic dependencies, data, processes, and loads that would be experienced by a deployed service or application. That is exactly where service virtualization comes into the testing picture. Service virtualization simulates all of the dependencies needed by the application or service under test in order to perform full-system testing. This includes all connections and protocols used by the device with realistic responses to communication.

For example, service virtualization can simulate an enterprise server back-end that an IoT device communicates with to provide periodic sensor readings. Similarly, virtualization can control the IoT device in a realistic manner. Service and API testing provides a way to drive the device under test in a manner that ensures the services it provides (and APIs provided) are performing flawlessly.
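As a rough illustration of the IoT example, a virtual back-end might emit deterministic "sensor readings" like the sketch below; the device ID and drift model are hypothetical:

```python
import itertools

def simulated_sensor_feed(device_id, base_temp=21.0, drift=0.1):
    """Endless stream of plausible readings from one virtual IoT device.

    Deterministic on purpose: repeatable readings make failing tests
    reproducible, which live hardware rarely offers.
    """
    for tick in itertools.count():
        yield {
            "deviceId": device_id,
            "seq": tick,
            "temperatureC": round(base_temp + drift * (tick % 10), 2),
        }

feed = simulated_sensor_feed("thermo-01")
first_three = [next(feed)["temperatureC"] for _ in range(3)]
# first_three == [21.0, 21.1, 21.2]
```

Load and performance tests can then consume thousands of such virtual devices without any physical hardware in the lab.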

What’s more, those tests can be manipulated via the automation platform to perform performance and security tests as needed. Meanwhile, runtime monitoring detects errors in real-time on the device under test and captures important trace information.

That trace information can be used to resolve issues that normally do not occur until after an application is actually deployed. Take for example problems related to memory leaks, which normally remain undetected until a product is finished and deployed under real world loads. The combination of service virtualization, orchestrated testing powered by automation, and the ability to monitor in real time delivers the intelligence that allows problems such as memory leaks to be caught and resolved early and cheaply.

Brown said, "Unsurprisingly, most defects are introduced into a project at the beginning, even before the first line of code is written. Most bugs are found and fixed during testing, but a good percentage (as much as 20%!) are discovered during operation, after the product has been sold and shipped."

Building a virtual lab
Testing normally occurs in a lab environment; however, physical labs can rarely offer the same robustness as a production system. With that in mind, it becomes evident that even in the most sophisticated lab, it’s difficult to scale to a realistic environment.

Brown added, "Without service virtualization, none of the above would be possible. However, Parasoft has gone beyond just including service virtualization to making sure it can be deployed in a test environment without too much difficulty."

Brown said, "While many organizations are still new to the concepts of service virtualization, service virtualization has become a foundational element for Agile teams and DevOps teams that need continuous testing capabilities."

With that in mind, it becomes very clear that organizations do not have to re-invent the wheel to bring service virtualization to fruition. Parasoft has gone to great lengths to build a suite of testing orchestration products that leverages virtualization. The company's Parasoft Virtualize product suite allows testers to access a complete test environment, anytime, anywhere. Parasoft Virtualize, an open automated service virtualization solution, creates, deploys, and manages simulated dev/test environments. It simulates the behavior of dependent applications that are still evolving, difficult to access, or difficult to configure for development or testing.

Service Virtualization brings expediency https://sdtimes.com/hpe/service-virtualization-brings-expediency/ Thu, 31 Aug 2017 18:00:49 +0000 https://sdtimes.com/?p=26856 The consumerism of IT is transforming how businesses work. Users want what they want, when they want it, and IT departments have to keep pace. “The advent of modern technologies such as mobile, cloud, virtualization and IoT has fueled higher consumer expectations and demands. We expect information and services to be served anytime, anywhere at … continue reading

The post Service Virtualization brings expediency appeared first on SD Times.

The consumerism of IT is transforming how businesses work. Users want what they want, when they want it, and IT departments have to keep pace.

“The advent of modern technologies such as mobile, cloud, virtualization and IoT has fueled higher consumer expectations and demands. We expect information and services to be served anytime, anywhere at the click of a button,” Kimm Yeo, Senior Manager,  worldwide product marketing at HPE, said.

One new technology that's making this digital transformation possible is service virtualization, which gives developers a chance to roll out their own test environments and test these pieces of applications before the whole is fully assembled.

Yeo said the new computing capabilities available to consumers today don't come without added complexity and cost. "They require staff who need to coordinate and manage the disparate number of tools as well as siloed apps and components that need to come together as composite-based applications for their customers. The challenges are compounded as businesses and application owners try to get the different development, ops and QA teams to collaborate and deliver great quality apps with speed."

But many development and QA teams have been struggling with the balancing act of releasing their products and services with speed while preserving quality. The advent of new technology such as service virtualization has proved to ease the development and testing process, leading to both cost and time savings.

Early adopters are finding that using virtual services can significantly speed up the development and testing of new (or re-engineered) software components. "These created virtual services and test assets can easily be shared with other teams and re-used as part of the agile, continuous testing and DevOps process," Yeo said.

HPE Service Virtualization improves communications and simulations of dependent components that could be on a client, in middleware or legacy system, she explained.

"As long as the components needed are web-based, cloud-based or SOA-based (service-oriented architecture) applications written in Java or C#, and leverage transport and messaging APIs such as HTTP/S, XML, REST, JSON, JMS, MQ and more, developers and testers can use HPE Service Virtualization to virtualize and simulate the restricted components," she said.

Increasing role in mobile IoT apps
Research firm Gartner, Inc. forecasts that IoT technologies will result in some 8.4 billion connected things being in use worldwide in 2017, up 31 percent from 2016, and will reach 20.4 billion by 2020. Total spending on endpoints and services will reach almost $2 trillion in 2017. China, North America, and Western Europe are driving the use of connected things, and the three regions together will represent 67 percent of the overall IoT installed base in 2017.

Highlights of HPE Service Virtualization
Apart from IoT connected apps testing support, HPE Service Virtualization 4.0 continues to expand and enhance support in multiple areas. Here are a few of the highlights:

Continued breadth of protocol support — From non-intrusive virtualization of the SAG webMethods integration server to enhanced financial protocols such as SWIFT MT/MX messages and FIX financial messages over the IBM WebSphere MQ protocol. You can realistically simulate SWIFT protocol messages and modify test data or switch test scenarios effortlessly, without the need for technical know-how or an available SWIFT network environment.

Enhanced Virtual Service design and simulation — The introduction of new dynamic data rules and data-driving capabilities further helps users reduce time and improve efficiency.

Continued support for DevOps and Continuous Testing — The enhanced SV Jenkins plug-in, the updated HPE Application Automation Tools 5 Jenkins plugin, allows easy manipulation, deployment and undeployment, and management of changes in virtual services and assets as part of the continuous delivery pipeline.

Infrastructure and licensing changes — There are several changes here, such as the introduction of a new SV Server concurrent license that allows running SV Server in dynamic network environments and/or cloud deployments, beta support for Linux, and changes to the SV distribution packages, with support for 64-bit versions only (removal of 32-bit versions) of SV Designer and Server.

Service virtualization keeps software testing on track https://sdtimes.com/iot/service-virtualization-keeps-software-testing-track/ Wed, 31 May 2017 13:00:46 +0000 https://sdtimes.com/?p=25350 Using outside components?  If so, you better test them, even if they came from the most reputable open-source project or commercial component provider you know. If you’re not testing components, especially within the context of other components required for your application and the environment in which your application will run, expect to find defects in … continue reading

The post Service virtualization keeps software testing on track appeared first on SD Times.

Using outside components? If so, you'd better test them, even if they came from the most reputable open-source project or commercial component provider you know. If you're not testing components, especially within the context of the other components required for your application and the environment in which your application will run, expect to find defects in production that could have been avoided easily and cost-effectively.

“We did some research recently [about] release management and what we found is people are more concerned about quality than they are time to market,” said Theresa Lanowitz, founder of analyst firm voke. “This is the first time we’ve seen the switch.”

In the voke 2015 Service Virtualization Snapshot Report, most of the participants said that dependencies were delaying releases. Eighty-one percent said dependencies slowed their ability to develop software, reproduce a defect or fix a defect. Eighty-four percent said dependencies negatively affected QA's ability to begin testing, start a new test cycle, test a required platform or verify a defect.

Such delays can lead to quality issues if elements of testing are skipped to save time or if testing is executed inadequately.

“If a development team is dependent on a component yet to be built, they’re not going to test it,” said Marc Brown, CMO at Parasoft.

Service virtualization solves that issue and many others.

What about mocks and stubs?
In the absence of service virtualization, developers can create mocks and stubs to simulate what will likely happen in production, but the tactics don’t always yield accurate results. As the sophistication of components and interactions increases, the accuracy of what’s being emulated can decrease and it becomes increasingly expensive for the team to create and maintain the mocks and stubs.

“Mocks and stubs are one way to deal with some of the basic elements, but it’s not going to scale. It creates more overhead and potentially more risk for teams,” said Parasoft’s Brown. “You’re not going to be able to do certain things that you could do with service virtualization.”

Harsh Upreti, product marketing manager at SmartBear, said the main reason his customers want service virtualization is to move beyond basic mocking.

"What happens is you have a lot of dependencies on other teams, other products and their APIs," he said. "Some of the APIs may not be relevant because they are still under development, or they're a little bit costly because maybe you're hitting a Google Map that costs you $50 for every 1,000 calls."
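One way to picture the savings on a metered API like that is a record-and-replay stub, sketched here as a toy version of the record-and-replay mode many service virtualization tools offer; `expensive_geocode` and its payload are made up for illustration:

```python
class RecordReplayStub:
    """Record a metered dependency's responses once, then replay for free.

    ``live_call`` stands in for the real, pay-per-call API.
    """

    def __init__(self, live_call):
        self.live_call = live_call
        self.recorded = {}
        self.live_hits = 0  # how many billable calls actually happened

    def __call__(self, request):
        if request not in self.recorded:
            self.live_hits += 1
            self.recorded[request] = self.live_call(request)
        return self.recorded[request]

# Made-up stand-in for a geocoding API billed per call.
def expensive_geocode(address):
    return {"address": address, "lat": 40.7128, "lng": -74.0060}

stub = RecordReplayStub(expensive_geocode)
for _ in range(1000):
    stub("New York, NY")
# stub.live_hits == 1: one billable call instead of a thousand
```

Real tools persist the recordings and match requests more loosely, but the economics are the same: the metered service is hit once, not on every test run.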

The benefits of service virtualization increase when development and testing are using it to access the same systems. Specifically, developers can prevent more defects in the first place, and QA can perform end-to-end testing.

voke's survey found that dependencies were negatively impacting software release cycles and quality. On average, respondents had 53 dependencies. However, 67 percent reported unrestricted access to only 10 or fewer dependencies.

“The reason why you need service virtualization is that it completely cuts dependencies across the board,” said Aruna Ravichandran, VP of DevOps Product and Solutions Marketing at CA Technologies. “Developers no longer have to wait for systems to be available because each of those back-end calls can be automated.”

Get access to more resources
Service virtualization enables developers and testers to test against resources that are unavailable, rarely available or incomplete. For example, access to a mainframe may only be possible during certain hours. Service virtualization lets developers and testers sidestep those constraints.

“What it enables you to do is run a complete end-to-end test at any time throughout any aspect of your software lifecycle so a developer can say, ‘Let’s see what this looks like end-to-end if we had all these things,’ ” said voke’s Lanowitz. “What does it look like for performance, functionality and anything else we’re trying to test against, so the ability to access components, services, systems, architectures, sensors, mainframes, databases and the list goes on.”

Even if resources are available, time and cost can get in the way. For example, if a developer is building an application that requires connections to an ERP system and a credit card system, the developer has to work with IT to make sure the systems are properly provisioned and that testing can be done with the credit card system. Testing that involves third-party systems can cost money, whether it's testing fees or the expense of setting up a real-world test environment.

Still, teams trying to cut costs have been known to adopt service virtualization, cut it in an attempt to save money, and then readopt it once the cost of service virtualization proved to be outweighed by the economic and time-saving benefits it provides.

Blind faith is dangerous
Developers' testing responsibilities have continued to grow as more types of testing have "shifted left." Meanwhile, many commercial component providers have gone out of their way to deliver stable components so developers can use them with high levels of confidence. Still, the reliability of a component doesn't depend only on the component itself.

“A component may work fine independently, but what if they’re not tested together?” said voke’s Lanowitz. “What if you have Component[s] A, B and C and Component A has been tested 100%, Component B has been tested 80% and Component C has been tested 80%, but when they’re combined they don’t work together?”

Using service virtualization, developers can emulate such conditions so they can better understand how a component would actually behave in production.

“Many components provided by the open-source community or third parties can have security, performance or load-related issues. Just look at the number of systems that have gone down and created some sort of cost,” said Parasoft’s Brown. “There’s a business cost or liability or a brand-tarnishing issue. I wouldn’t trust anybody right now.”

In the absence of service virtualization, production data also may be impacted in some unintended way. Brown said Parasoft has seen some issues in the banking industry where people were testing against live production data and some of the production data made it to development. The data also found its way to other areas, which meant that customers’ credit card numbers weren’t actually secure.

Security is a very real issue and one that continues to become more important every day. Components built in the past may have been built at a time when security threats were not as pervasive, severe or varied as they are today. Although people want to trust the components they use and avoid coding something they can get from a commercial vendor or the open-source community, there’s no substitute for testing. Hackers continue to devise more sophisticated ways to compromise software.

“If I adopt a component, I really need to make sure that I’ve got some reusable assets that can help me validate those components fully so I can have the level of confidence I need without slowing things down,” said Brown.

Accelerate delivery without sacrificing quality
Fast access to virtual resources is better than slow access or no access to actual resources. With service virtualization, development and testing teams can work in parallel, which saves precious time.

“Our customers tell us they used to wait almost a third of the time for the development teams to get APIs [to testing],” said SmartBear’s Upreti. “Now they’re available immediately so [the testing team] doesn’t have to follow up with [the development team]. They work faster and there are better relationships between team members. It’s creating better conditions to work in software development teams.”

Vodafone New Zealand, a Parasoft customer, found it harder to deliver reliable software due to increasing customer expectations and software complexity. Part of the problem was the company’s acquisitions of other businesses, which resulted in more systems and dependencies that further complicated software updates.

To ensure new functionality operated properly and didn't damage existing functionality, development teams needed to test their work and third-party components in realistic test environments, which was too costly and time-consuming to do using actual systems.

AutoTrader mimics reality, saves money
AutoTrader, one of CA’s customers, was able to test across devices and avoid $300,000 in test hardware and software costs. Its website, AutoTrader.com, is used by more than 18 million people per month who are researching, selling and buying cars. A decade ago, the company was releasing just four web services per year. Now the company is under pressure to deliver a release a week. Meanwhile, the number of devices and versions of devices and operating systems customers are using has grown, complicating testing.

“When I talked to them about their application strategy, one of the key things they shared with us [was the desire] to provide a seamless service across devices,” said Ravichandran. “Service virtualization gave them the ability to test new features, apps, and third-party components across multiple devices.”

AutoTrader was also able to reduce software defects by nearly 25 percent and it reduced testing time by 99 percent.

Generally speaking, service virtualization is a good way to reproduce and reduce defects.

"One of the biggest problems is that something will work fine on a developer's machine, but then it gets into production or test and there's a problem. The defect can't be reproduced," said voke's Lanowitz. "With service virtualization, you have access to that production-like environment so you can accurately and realistically reproduce the defects, and you can do economical testing of realistic behavior, such as performance, which is one of those non-functional requirements we overlook."

Using service virtualization, software teams can reduce the number of defects pre-production and in production while increasing test coverage and reducing testing cycle time and release cycle time.

“Ideally, you want to get to the point where when it comes time to check in your source code, you’re checking in virtualized assets with it,” said Lanowitz.

The IoT will drive more demand
The IoT is giving rise to even more complex ecosystems that need to be tested and because they’re so complex, it’s impractical if not impossible to test all the possible scenarios without using service virtualization.

“Service virtualization allows you to virtualize components in the world of system of systems, which is critical,” said Parasoft’s Brown. “You can virtualize an embedded device, services, sensors and [outside] components.”

Beyond that, service virtualization allows developers to contemplate abnormal conditions that wouldn’t otherwise be apparent without access to the actual physical systems. Because so many things can go wrong in an IoT or IIoT scenario, it’s critical to understand normal and abnormal behavior, such as what effect different types of loads have.
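Simulating abnormal conditions can be as simple as wrapping a virtual service in a fault injector; this sketch (names and error model invented for illustration) fails a fraction of calls using a seeded random generator so the "chaos" stays repeatable:

```python
import random

def with_faults(virtual_service, error_rate=0.25, seed=7):
    """Wrap a virtual service so a fraction of calls fail abnormally.

    The seeded RNG keeps the injected failures repeatable from one
    test run to the next.
    """
    rng = random.Random(seed)

    def call(request):
        if rng.random() < error_rate:
            return {"status": 503, "body": None}  # simulated outage
        return {"status": 200, "body": virtual_service(request)}

    return call

flaky = with_faults(lambda req: {"echo": req})
statuses = [flaky("ping")["status"] for _ in range(100)]
# statuses mixes 200 responses with roughly a quarter 503s
```

The same wrapper idea extends to injected latency or malformed payloads, letting a team rehearse conditions that would be hard to provoke on physical devices.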

“As we move into the Internet of Things, if you’re not using service virtualization, you’re not going to keep up with everything we need to do,” said voke’s Lanowitz. “You have to be constantly testing, making sure things are performing. You need to make sure you have the availability and everything you need to test what’s going on inside that thing.”

Where to start?
Some companies haven’t adopted service virtualization yet because they don’t know where to start – development or QA?  On the other hand, that may not be the right way to frame the problem.

“I always recommend starting with those fee-based systems you have to pay to access for testing or start with a small project where you have a good rapport between your developers and your testers because your testers are going to benefit from service virtualization,” said voke’s Lanowitz. “There are a few things you can do. You can say anything we’re using in the enterprise, anything in our core logic we should use virtualized assets for.”

In one case, service virtualization worked so well that virtual assets were accidentally deployed instead of real assets. However, the problem was found and fixed immediately, Lanowitz said.

Component testing is just the beginning
Today's developers need on-demand test environments for continuous testing and on-demand testing. Already, service virtualization has become a foundational element for Agile teams and DevOps teams that need continuous testing capabilities.

In line with that, Parasoft’s Brown expects more SaaS vendors to create test components and perhaps a reusable virtual service that goes along with them.

“We’d love to power people developing software components because it will make their applications better, high quality and less prone to security exploits,” he said. “At the same time, they might be able to differentiate their own products by shipping a component or virtual service that goes hand-in-hand with it that people can test against.”

Component testing is just one of many things service virtualization enables. In the voke survey, participants were asked what they were virtualizing. Participants said they were virtualizing web services, APIs, mobile platforms, embedded systems, IoT-types of sensors and components.

voke views service virtualization as a subset of lifecycle virtualization, which includes service virtualization and virtual or cloud-based lab technology so the environment is as close to a production environment as possible. A third element is test data virtualization that can be shared across a software supply team so companies are not impacting the safety and security of customers by sharing real-life production data and they can avoid shipping terabyte-sized files across the network to teams that may need production data for testing. Network virtualization is also included in the mix so teams can simulate a network and different use cases, such as what happens to a banking transaction if a user goes into a subway. The final element is defect virtualization.

“We’re always going to have defects and we either discover those defects in pre-production or we discover them in production. We need a way to know what defects are in our source code or legacy source code,” said Lanowitz. “Using defect virtualization software in the background, you can understand the point of application failure and where the defect is so you can fix it.”

Meanwhile, current users of service virtualization should endeavor to drive more value from solutions by ensuring that virtual assets are available throughout the software lifecycle, which will result in additional time savings and costs.

Using service virtualization can give you more confidence in the components you’re using in your software and you’ll be more confident about the quality and stability of the software you’re building.

Building up service virtualization https://sdtimes.com/agile/building-up-service-virtualization/ Thu, 03 Sep 2015 13:00:49 +0000 https://sdtimes.com/?p=14571 Service virtualization has gotten the short shrift over the course of its lengthy history. Whether you chart its inception in 2002 with the release of Parasoft’s Stub Server, or in 2007 when CA took up the banner and market around the term, the entire concept has yet to even take on the status of buzzword. … continue reading

The post Building up service virtualization appeared first on SD Times.

Service virtualization has gotten short shrift over the course of its lengthy history. Whether you chart its inception in 2002 with the release of Parasoft's Stub Server, or in 2007 when CA took up the banner and built a market around the term, the entire concept has yet to even take on the status of buzzword.

That could be a good thing, however, as buzzwords can burn the ears of any manager distributing his or her budget for the year on new tooling for the team. Rather, service virtualization has remained a somewhat unknown but fairly reliable path toward saving developers time and money.

Theresa Lanowitz, founder of research firm Voke, said that service virtualization is proven to bring ROI to development managers. “We know the return on investment is tremendous. It really enhances that collaboration. Everything we’ve been hearing for the last 10 years is about collaboration between QA and development. Service virtualization takes those barriers down and lets the teams be completely aligned,” she said.

But what is service virtualization, exactly? Essentially, it manifests as a server within your test environment that can replicate the streams of information that make up the various services used in applications. In practice, this means simulating third-party services, internally hosted services, and even broken services for the purpose of testing against real-world traffic scenarios.
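As a rough sketch of the idea, a virtual service can be as simple as an HTTP server that returns canned responses in place of the real system. The endpoint and payload below are invented for illustration; commercial tools record and replay real traffic rather than hard-coding responses, but the principle is the same:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned responses standing in for a real downstream service.
# The /accounts/42 path and payload are hypothetical, not a real API.
CANNED = {
    "/accounts/42": {"id": 42, "status": "active", "balance": 1250.00},
}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        # Keep test runs quiet; the default handler logs every request.
        pass

def start_stub(port=0):
    """Start the stub on an ephemeral port; returns the running server."""
    server = HTTPServer(("127.0.0.1", port), VirtualService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

An application under test would simply be pointed at the stub's address instead of the real service's, so the same test suite runs whether the real dependency is up, down, or not yet built.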

Why simulate these services? Robert Wagner, product owner for orchestrated service virtualization at Tricentis, said that the average enterprise is filled with services. “You lose a lot of time testing when you have complex business processes. On average, there are about 30 services in bigger companies.”

With at least 30 services to test against, it just makes sense to automate simulating those streams of data rather than trying to maintain separate codebases for testing versions of various services.

That being said, moving to a testing plan that includes service virtualization is not something that can be done overnight. There are many ways to get started, but ultimately the way to succeed with service virtualization is to treat it as another process in your development life cycle.

Wayne Ariola, chief strategy officer for Parasoft, said that traditional IT is “used to adopting tools in an ad hoc manner, but service virtualization requires a process collaboration. It’s not magic: You have to put the time into it to get the value out of it.”

Once developers have adopted the practice, however, Ariola said they are able to “find bugs in [their] development phase, where everyone is developing their own components isolated from the others.”

Getting started
Building service virtualization into your software development life cycle isn’t nearly as difficult as spreading the capability to an entire organization, thankfully. Lanowitz suggested that the endgame can be intimidating for large enterprises, but the effort is worth it. “There are many organizations that say, ‘I am not ready for this type of thing.’ Ideally and ultimately, what you want is that for every piece of source code checked in, you want that virtualized asset to go with it,” she said.

Lanowitz suggested starting out small. “An easy way to start is by pinpointing what types of base components would benefit from virtualization. You could say, ‘For anything that is fee-based, we’re going to use service virtualization. What types of third-party assets do we use that we don’t own?’ Virtualize those third-party elements.”

Of course, the services don’t have to be external to warrant virtualization. Lanowitz said that an enterprise could also start out by virtualizing its core services—those that are used frequently across the organization. The more widely used the service, the more likely all the corners of the organization will come forward to take advantage of the virtualized version to test against.

Another way to get started is along your supply chain, said Lanowitz. “You could say, ‘We’re going to start with one project and work across our software supply chain and require everyone in the supply chain use service virtualization.’ ”

Stefana Muller, project-management leader at CA Technologies, said that starting out with service virtualization doesn’t have to mean testing it out on smaller projects. She asked, “What is your big transformational project? Find one project you can start with that can show you return on investment quickly. It will prove itself there. Customers are dealing with these constraints in other ways: by building throwaway code, wasting time waiting, and spending money to get things done quickly. The ways we help them achieve the benefit of service virtualization is we find the benefit that will change their business. Once you do that with one project, it’s very easy to expand to others.”

Indeed, the benefits of service virtualization are best felt when the practice is spread to an entire organization. This is because most of those services being virtualized are used by multiple applications, and thus virtualizing them can bring time savings to teams across the organization. But this can lead to complexity as your organization learns how to properly roll out service virtualization as a service itself.

Muller advocated for the creation of a center of excellence within the organization to help push the process through to the edges of the enterprise. “Once you get to a maturity curve, with four or five projects using service virtualization, you’re probably going to want to have a center of excellence so you can share virtual services among teams, rather than building one for each and every one. We sometimes use the term ‘Center of Competency,’ as the center learns how to derive value from service virtualization,” she said.

Whose money?
Perhaps the biggest impediment to service virtualization uptake in the enterprise is that it falls into one of those nebulous gray areas of budgeting. The QA team, the Ops team and the development teams all have their own budgets, yet service virtualization could fall into any of their laps as a responsibility.

Parasoft’s Ariola has his own opinion as to why this is. While he doesn’t speak for Parasoft on this topic, he is of the opinion that “There is no central entity within large development organizations who own the concept of quality. You have a center of testing, but those are usually tool-oriented. There’s this idea that quality is shared, which is great, but nobody owns the definition. If you start asking about non-functional requirements, it’s blown apart across so many different groups [that] it’s not necessarily true.”

Ariola partially blamed agile for this erosion of quality control in the enterprise. “Agile, although valuable, has blown apart the concept of quality because it focuses the team on the user stories in the timeframe they are due, versus thinking more about the programmatic quality it needs to hit as it goes toward production.”

To that end, said Ariola, service virtualization can help spread quality across the development life cycle by driving bug detection into earlier portions of the project. Rather than finding service integration bugs during systems integrations phases, they are found during the standard development process, he said.

Tricentis’ Wagner agrees that finding the right budget to pay for service virtualization has been tricky. “When we got started, the reason that we had problems was because we were focused mainly on test teams. It took a while until companies realized they could save a lot of money with service virtualization,” he said.

This was because the test teams typically relied on the Ops teams to build their environments. Though Tricentis was trying to sell a tool useful to QA, as a server product it was often Ops that showed up at the table as the buyer.

Once these companies realized that service virtualization was more appropriately categorized under the testing budget, they also realized they could replace their test labs, said Wagner.

He said that, compared to the cost of a test lab, service virtualization can offer vast savings. “It’s nothing compared to service virtualization. It’s much cheaper and much more flexible, and you can also do negative testing with service virtualization. You can go to your guys running this test lab and ask them to deploy a broken service so you can test negative scenarios,” he said.
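A sketch of what "deploying a broken service" can look like in stub form appears below. The endpoints and failure modes are invented for illustration; the point is that failure becomes a configuration switch rather than a request to the test-lab team:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen     # used to exercise the stub
from urllib.error import HTTPError

class BrokenService(BaseHTTPRequestHandler):
    # Hypothetical failure modes: "error" returns HTTP 500,
    # "garbage" returns a 200 with a malformed JSON body.
    failure_mode = "error"

    def do_GET(self):
        if self.failure_mode == "error":
            self.send_response(500)
            self.end_headers()
            self.wfile.write(b"internal error")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(b"{not valid json")

    def log_message(self, *args):
        pass  # silence per-request logging

def start_broken(mode="error", port=0):
    """Start a deliberately faulty stub for negative-scenario testing."""
    BrokenService.failure_mode = mode
    server = HTTPServer(("127.0.0.1", port), BrokenService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A negative test then points the client under test at this stub and asserts that it degrades gracefully — retries, falls back, or surfaces a clean error — instead of crashing.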

Lines of control
Service virtualization can also instigate a drive for service orchestration in the enterprise. Red Hat's OpenShift platform includes orchestration through Kubernetes, and thus can handle the deployment duties many service virtualization efforts require.

Joe Fernandes, OpenShift product director at Red Hat, said that OpenShift’s “latest incarnation, version 3, is completely rebuilt around Docker and Kubernetes. Docker is the container runtime and packaging format, and Kubernetes is the orchestration engine for managing the services and for determining where they should run and how they should run.”

Wagner said that “Orchestration, in modern applications, is really necessary because you have a certain business flow. This can’t come before that, and so on. This business flow needs to go off of a specific description of a business flow. Tricentis OSV can model business scenarios running on the backend over different technologies. OSV proves that the flow is in the right order and distributes the messages to the system where it’s intended to be sent. One difference we have [from] others is we model these business flows. You can run these over multiple systems and mimic stateful behavior.”
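To make the "stateful behavior" point concrete, here is a toy sketch — the message types and order flow are invented, not OSV's actual model — of a simulated backend that answers differently depending on where the caller is in the business flow:

```python
# A minimal stateful virtual service: messages arriving out of the
# expected business-flow order get an error, mimicking how a real
# backend would reject them. All names here are illustrative.
class StatefulVirtualService:
    def __init__(self):
        self.orders = {}  # order_id -> current flow state

    def handle(self, message):
        kind, order_id = message["type"], message["order_id"]
        if kind == "create":
            self.orders[order_id] = "created"
            return {"order_id": order_id, "status": "created"}
        if kind == "ship":
            if self.orders.get(order_id) != "created":
                # Enforce flow order: you cannot ship before creating.
                return {"order_id": order_id, "error": "out-of-order message"}
            self.orders[order_id] = "shipped"
            return {"order_id": order_id, "status": "shipped"}
        return {"error": f"unknown message type {kind!r}"}
```

Because the stub carries state between calls, a test can walk an entire create-then-ship scenario against it, including the negative case where steps arrive in the wrong order.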

The end goal for service virtualization is, as with most tools and practices in software development, to save money and time for everyone involved. "Once you start crossing partners or groups, it becomes really valuable," said Ariola.

“If you’re developing your system, and everyone is dependent upon system B, having a group of people accessing a simulated instance of system B and its instance is really valuable. They’re all testing against a set of assumptions, so that level of consistency allows for that level of acceleration. If you grow the breadth of your test suite, it allows you to test more in an end-to-end fashion.”

Bringing service virtualization into an enterprise may be intimidating, but once you get going, Lanowitz said it becomes a comfortable part of the development life cycle. “It’s not that difficult. Once you bring service virtualization into your environment, you can very quickly replicate those environments. Software vendors will say ‘It takes this long to create a virtual service,’ and those numbers are accurate,” she said.

Users of service virtualization, said Lanowitz, “All say once they use it in their workflow, they don’t even think about it. You’re able to test more, make changes more easily, and get something you might not think is ready or available yet into the testing cycle. This takes down the whole idea you can never do anything until you have everything, and you never have everything until you’re ready to deploy. Service virtualization gives you access to those things that are unavailable or incomplete.”

Lanowitz sees a bright future ahead for service virtualization. She said that she "hopes it's going to continue to spread. We've done in-depth research on this in 2012 and 2015. We saw the adoption rate increase, and as we move to the cloud, I would expect service virtualization would be part and parcel with a larger tool set you'd use, like release automation. We'll see it integrated with development and test platforms. You might see it integrated with other tools along the way."

SD Times March Developer Madness: A Champion is Crowned! https://sdtimes.com/apis/sd-times-march-developer-madness-champion-service-virtualization/ Tue, 07 Apr 2015 17:30:37 +0000 https://sdtimes.com/?p=11353

The post SD Times March Developer Madness: A Champion is Crowned! appeared first on SD Times.

The championship game is in the books, and the winner of SD Times March Developer Madness is #4 seed Service Virtualization! After a tournament full of surprises and upsets, with all eight top seeds going down, Service Virtualization cruised through the Final 4 with convincing wins over Cinderella #8 seed NoSQL and, in the finals, #3 seed APIs.

The final tally of 219 championship votes was 75% in favor of Service Virtualization!


Click here to see a larger, more detailed version of the final SD Times March Developer Madness bracket, and how all the games and rounds shook out from beginning to end.

March Developer Madness was a fun way for us to gauge our readership’s interest and fascination with all the established and emerging software development technologies out there, from languages and platforms to development philosophies and methodologies. Thank you to all our readers who participated, voted and engaged with us about the event via social media.

Did your favorite seed or preferred technology lose somewhere along the way? Let us know! Tell us what you’re passionate about; what you want to read more about, whether in comments, emails or tweets. SD Times is always listening.
