test Archives - SD Times (https://sdtimes.com/tag/test/)

How service virtualization supports cloud computing: Key use cases
https://sdtimes.com/test/how-service-virtualization-supports-cloud-computing-key-use-cases/ (Tue, 01 Nov 2022)

(First of two parts)

Several weeks ago, a customer of the Broadcom Service Virtualization solution posed the following question: “Now that we’re moving to the cloud, do we still need Service Virtualization?” 

The question struck me as odd. My sense is that the confusion stems from a misperception: because cloud environments can be spun up quickly, people assume that test environment bottlenecks are easily addressed and that service virtualization capabilities are therefore rendered unnecessary. That is not the case at all. Being able to spin up infrastructure quickly does not address what must be established within those environments to make them useful for the desired testing efforts. 

In fact, all the use cases for the Service Virtualization solution are just as relevant in the cloud as they are in traditional on-premises-based systems. Following are a few key examples of these use cases: 

  1. Simplification of test environments by simulating dependent end points   
  2. Support for early, shift-left testing of application components in isolation 
  3. Support for performance and reliability engineering 
  4. Support for integration testing with complex back-ends (like mainframes) or third-party systems
  5. Simplification of test data management 
  6. Support for training environments
  7. Support for chaos and negative testing 

All of these use cases are documented in detail here.  

More pertinent still, Service Virtualization addresses many additional use cases that are unique to cloud-based systems. 

Fundamentally, Service Virtualization and cloud capabilities complement each other. Combined, Service Virtualization and cloud services deliver true application development and delivery agility that would not be possible with only one of these technologies. 

Using virtual services deployed to an ephemeral test environment in the cloud makes the setup of the environment fast, lightweight, and scalable, especially compared to setting up, say, an entire SAP implementation in that same ephemeral environment. 

Let’s examine some key ways to use Service Virtualization for cloud computing. 

Service Virtualization Use Cases for Cloud Migration 

Cloud migration typically involves re-hosting, re-platforming, re-factoring, or re-architecting existing systems. Regardless of the type of migration, Service Virtualization plays a key role in functional, performance, and integration testing of migrated applications—and the use cases are the same as those for on-premises applications. 

However, there are a couple of special use cases that stand out for Service Virtualization’s support for cloud migration: 

1. Early Pre-Migration Performance Verification and Proactive Performance Engineering 

In most cases, migrating applications to the cloud will result in performance changes, typically due to differences in application distribution and network characteristics. For example, various application components may reside in different parts of a hybrid cloud implementation, or performance latencies may be introduced by the use of distributed cloud systems. 

With Service Virtualization, we can easily simulate the performance of all the different application components, including their different response characteristics and latencies. Consequently, we can understand the performance impact, both overall and at the component level, before the migration is initiated.  
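
To make this concrete, a virtual service that simulates a dependency's response characteristics can be as small as the sketch below. This is a minimal, hypothetical Python/Flask stand-in (not any particular vendor's tooling); the endpoint path, payload, and latency range are invented for illustration.

```python
import random
import time

from flask import Flask, jsonify

app = Flask(__name__)

# Assumed latency profile (in seconds) for the component being simulated.
# In practice these figures would come from observed or projected response
# characteristics of the real dependency in its post-migration location.
LATENCY_MIN, LATENCY_MAX = 0.120, 0.450

@app.route("/inventory/<item_id>")
def inventory(item_id):
    # Simulate the network and processing latency of the real service.
    time.sleep(random.uniform(LATENCY_MIN, LATENCY_MAX))
    # Return a canned response shaped like the real payload.
    return jsonify({"itemId": item_id, "quantity": 42, "warehouse": "EU-1"})

if __name__ == "__main__":
    app.run(port=8080)
```

Pointing the application under test at a stand-in like this lets performance tests exercise projected response times before any component is actually migrated.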

This allows us to focus on appropriate proactive performance engineering to ensure that performance goals can be met post migration.  

In addition, Service Virtualization plays a key role in performance testing during and after the migration, which are common, well-understood use cases. 

2. Easier Hybrid Test Environment Management for Testing During Migration 

This is an extension to the common use case of Service Virtualization, which is focused on simplifying testing environments. 

However, during application migration this testing becomes more crucial, given the mix of environments involved. Customers typically migrate their applications or workloads to the cloud incrementally, rather than all at once. This means that test environments during migration are much more complicated to set up and manage, because tests may span multiple environments: cloud for migrated applications, and on-premises for applications awaiting migration. In some cases, specific application components (such as those residing on mainframes) may not be migrated at all. 

Many customers are impeded from early migration testing due to the complexities of setting up test environments across evolving hybrid systems. 

For example, applications that are being migrated to the cloud may have dependencies on other applications in the legacy environment. Testing of such applications requires access to test environments for applications in the legacy environment, which may be difficult to orchestrate using continuous integration/continuous delivery (CI/CD) tools in the cloud. By using Service Virtualization, it is much easier to manage and provision virtual services that represent legacy applications, while having them run in the local cloud testing environment of the migrated application. 

On the other hand, prior to migration, applications running in legacy environments will have dependencies on applications that have been migrated to the cloud. In these cases, teams may not know how to set up access to the applications running in cloud environments. In many cases, there are security challenges in enabling such access. For example, legacy applications may not have been re-wired for the enhanced security protocols that apply to the cloud applications. 

By using Service Virtualization, teams can provision virtual services that represent the migrated applications within the bounds of the legacy environments themselves, or in secure testing sandboxes on the cloud. 

In addition, Service Virtualization plays a key role in parallel migrations, that is, when multiple applications that are dependent on each other are being migrated at the same time. This is an extension of the key principle of agile parallel development and testing, which is a well-known use case for Service Virtualization.

3. Better Support for Application Refactoring and Re-Architecting During Migration 

Organizations employ various application re-factoring techniques as part of their cloud migration. These commonly include re-engineering to leverage microservices architectures and container-based packaging, which are both key approaches for cloud-native applications. 

Regardless of the technique used, all these refactoring efforts involve making changes to existing applications. Given that, these modifications require extensive testing. All the traditional use cases of Service Virtualization apply to these testing efforts. 

For example, the strangler pattern is a popular re-factoring technique that is used to decompose a monolithic application into a microservices architecture that is more scalable and better suited to the cloud. In this scenario, testing approaches need to change dramatically to leverage distributed computing concepts in general and microservices testing in particular. Service Virtualization is key to enabling all kinds of microservices testing. We will address in detail how Service Virtualization supports the needs of such cloud-native applications in section IV below.

4. Alleviate Test Data Management Challenges During Migration 

In all of the above scenarios, the use of Service Virtualization also helps to greatly alleviate test data management (TDM) problems. These problems are complex in themselves, but they are compounded during migrations. In fact, data migration is one of the most complicated and time-consuming processes during cloud migration, which may make it difficult to create and provision test data during the testing process. 

For example, data that was once easy to access across applications in a legacy environment may no longer be available to the migrated applications (or vice-versa) due to the partitioning of data storage. Also, the mechanism for synchronizing data across data stores may itself have changed. This often requires additional cumbersome and laborious TDM work to set up test data for integration testing—data that may eventually be thrown away post migration. With Service Virtualization, you can simulate components and use synthetic test data generation in different parts of the cloud. This is a much faster and easier way to address TDM problems. Teams also often use data virtualization in conjunction with Service Virtualization to address TDM requirements.  
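
As a simple illustration of the synthetic test data idea, the sketch below assumes the Python Faker library is available; the record shape is hypothetical.

```python
from faker import Faker

fake = Faker()

def synthetic_customer():
    # A realistic-looking but entirely synthetic record, avoiding any
    # dependency on production data that may be mid-migration.
    return {
        "id": fake.uuid4(),
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
    }

# Seed a virtual service or test fixture with as many records as needed.
test_customers = [synthetic_customer() for _ in range(100)]
```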

Service Virtualization Use Cases for Hybrid Cloud Computing 

Once applications are migrated to the cloud, all of the classic use cases for Service Virtualization continue to apply. 

In this section, we will discuss some of the key use cases for supporting hybrid cloud computing. 

1. Support for Hybrid Cloud Application Testing and Test Environments 

Post migration, many enterprises will operate hybrid systems based on a mix of on-premises applications in private clouds (such as those running on mainframes), different public cloud systems (including AWS, Azure, and Google Cloud Platform), and on various SaaS provider environments (such as Salesforce). See a simplified view in the figure below. 

[Figure: simplified view of a hybrid environment, with applications spread across a private cloud, public clouds, and SaaS providers]

Setting up test environments for these hybrid systems will continue to be a challenge. Establishing environments for integration testing across multiple clouds can be particularly difficult. 

Service Virtualization clearly helps to virtualize these dependencies, but more importantly, it makes virtual services easily available to developers and testers, where and when they need them. 

For example, consider the figure above. Application A is hosted on a private cloud, but dependent on other applications, including E, which is running in a SaaS environment, and J, which is running in a public cloud. Developers and testers for application A depend on virtual services created for E and J. For hybrid cloud environments, we also need to address where the virtual service will be hosted for different test types, and how they will be orchestrated across the different stages of the CI/CD pipeline. 

See figure below.

[Figure: virtual services for E and J hosted at different points across the stages of the CI/CD pipeline]

Generally speaking, during the CI process, developers and testers would like to have lightweight synthetic virtual services for E and J, and to have them created and hosted on the same cloud as A. This minimizes the overhead involved in multi-cloud orchestration. 

However, as we move from left to right in the CD lifecycle, we want the virtual services for E and J not only to become progressively more realistic, but also to be hosted closer to the remote environments where the “real” dependent applications run. These services would then need to be orchestrated across a multi-cloud CI/CD system. Service Virtualization frameworks allow this by packaging virtual services into containers or virtual machines (VMs) appropriate for the environment in which they need to run. 

Note that it is entirely possible for application teams to choose to host the virtual services for the CD lifecycle on the same host cloud as app A. Service Virtualization frameworks would allow that by mimicking the network latencies that arise from multi-cloud interactions. 

The key point is to emphasize that the use of Service Virtualization not only simplifies test environment management across clouds, but also provides the flexibility to deploy the virtual service where and when needed. 

2. Support for Agile Test Environments in Cloud Pipelines 

In the introduction, we discussed how Service Virtualization complements cloud capabilities. While cloud services make it faster and easier to provision and set up on-demand environments, the use of Service Virtualization complements that agility. With the solution, teams can quickly deploy useful application assets, such as virtual services, into their environments. 

For example, suppose our application under test has a dependency on a complex application like SAP, for which we need to set up a test instance of the app. Provisioning a new test environment in the cloud may take only a few seconds, but deploying and configuring a test installation of a complex application like SAP into that environment would take a long time, impeding the team’s ability to test quickly. In addition, teams would need to set up test data for the application, which can be complex and resource intensive. By comparison, deploying a lightweight virtual service that simulates a complex app like SAP takes no time at all, thereby minimizing the testing impediments associated with environment setup.   

3. Support for Scalable Test Environments in Cloud Pipelines

In cloud environments, virtual service environments (VSEs) can be deployed as containers into Kubernetes clusters. This allows test environments to scale automatically based on testing demand by expanding the number of virtual service instances. This is useful for performance and load testing, cases in which the load level is progressively scaled up. In response, the test environment hosting the virtual services can also automatically scale up to ensure consistent performance response. This can also help the virtual service to mimic the behavior of a real automatically scaling application. 
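
As a sketch of how such scaling might be configured, the snippet below uses the official Kubernetes Python client to attach a horizontal pod autoscaler to a deployment of virtual services. The deployment name, namespace, and thresholds are assumptions; an equivalent YAML manifest would accomplish the same thing.

```python
from kubernetes import client, config

config.load_kube_config()

# Assumed names: a Deployment called "vse" in a "testing" namespace.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="vse-autoscaler"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="vse"
        ),
        min_replicas=1,   # keep the environment small when idle
        max_replicas=10,  # grow as load tests ramp up traffic
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="testing", body=hpa
)
```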

Sometimes, it is difficult to size a performance testing environment for an application so that it appropriately mimics production. Automatically scaling test environments can make this easier. For more details on this, please refer to my previous blog on Continuous Performance Testing of Microservices, which discusses how to do scaled component testing.

4. Support for Cloud Cost Reduction 

Many studies (such as one done by Cloud4C) have indicated that enterprises often over-provision cloud infrastructure and a significant proportion (about 30%) of cloud spending is wasted. This is due to various reasons, including the ease of environment provisioning, idle resources, oversizing, and lack of oversight. 

While production environments are more closely managed and monitored, this problem is seen quite often in test and other pre-production environments, which developers and teams are empowered to spin up to promote agility. Most often, these environments are over-provisioned (sized larger than they need to be), contain data that is no longer useful after a certain time (such as aged test data, obsolete builds, or test logs), and are not properly cleaned up after use—developers and testers love to move quickly on to the next item on their backlog!

Use of Service Virtualization can help to alleviate some of this waste. As discussed above, replacing real application instances with virtual services helps to reduce the size of the test environment significantly. Compared to complex applications, virtual services are also easier and faster to deploy and undeploy, making it easier for pipeline engineers to automate cleanup in their CI/CD pipeline scripts.  
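
A pipeline cleanup stage can be as small as the sketch below. The management endpoint and service names are hypothetical, since the actual undeploy call depends on the virtualization product in use; the pattern of tearing down virtual services in a cleanup step is the point.

```python
import requests

# Hypothetical management API of a virtual service environment (VSE).
VSE_API = "https://vse.internal.example.com/api/v1"

def undeploy_virtual_services(service_names):
    """Remove virtual services after a pipeline run so the ephemeral
    test environment can be reclaimed instead of sitting idle."""
    for name in service_names:
        resp = requests.delete(f"{VSE_API}/services/{name}", timeout=30)
        resp.raise_for_status()

# Typically invoked from a CI/CD post-build or cleanup stage.
undeploy_virtual_services(["sap-stub", "payments-stub"])
```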

In many cases, virtual service instances may be shared between multiple applications that are dependent on the same end point. Automatically scaling VSEs can also help to limit the initial size of test environments. 

Finally, the VSEs to which virtual services are deployed can be actively monitored to track usage and to de-provision them when they are no longer in use. 

(Continue on to Part 2)

The importance of tool integration for QA teams
https://sdtimes.com/test/the-importance-of-tool-integration-for-qa-teams/ (Thu, 06 Oct 2022)

Everybody cares about software quality (or they ought to, at least), but it’s easier said than done. Lots of factors can cause software to fail, from tools and systems not integrating well to people not communicating well.

According to ConnectALL, improving value stream flow can help with these communication breakdowns, tool integration can improve the quality assurance function, and integrating test management tools with other tools can help provide higher-quality test coverage. 

In a recent SD Times Live! event, Lance Knight, president and COO at ConnectALL, and Johnathan McGowan, principal solutions architect at ConnectALL, shared six ways that tool integration can improve test management processes and QA. 

“It’s a very complex area, right? There’s a lot going on here in the testing realm, and different teams are doing different kinds of tests. Your developers are doing those unit tests, your QA team is doing manual, automated, and regression, and then your security folks are doing something else. And they’ve all each got their own little places that they’re doing all of that in,” said McGowan.

This article first appeared on VSM Times. To read the full article, visit the original post here.

How these solution providers support automated testing
https://sdtimes.com/test/how-these-solution-providers-support-automated-testing/ (Fri, 01 Apr 2022)

We asked these tool providers to share more information on how their solutions help companies with automated testing. Their responses are below.


Matt Klassen, CMO, Parasoft

Quality continues to be the primary metric for measuring the success of software deliveries. With the continued pressure to release software faster and with fewer defects, it’s not just about speed — it’s about delivering quality at speed. 

Managers must ask themselves if they are confident in the quality of the applications being delivered by their teams. Continuous quality is a must for every organization to efficiently reduce the risk of costly operational outages and to accelerate time-to-market.

Parasoft’s automated software testing solutions integrate quality into the software delivery process for early prevention, detection, and remediation of defects. From deep code analysis for security and reliability, through unit, API, and UI test automation, to performance testing and service virtualization, Parasoft helps you build quality into your software development process.

Parasoft leverages our deep understanding of DevOps to develop AI-enhanced technologies and strategies that solve complex software problems. Our testing solutions reduce the time, effort, and cost of delivering secure, reliable, and compliant software.

With over 30 years of making testing easier for our customers, we have the innovation you need and the experience you trust. Our extensive continuous quality solution spans every testing need and enables you to deliver with confidence. If you want to improve your software quality while achieving your business goals, partner with Parasoft.

RELATED CONTENT:
Targeting a key to automated testing
A guide to automated testing tools

Jonathon Wright, chief technology evangelist, test automation at Keysight 

Artificial Intelligence (AI) makes the process of designing, developing, and deploying software faster, better and cheaper. AI-powered tools enable project managers, business analysts, software coders and testers to be more productive and effective, allowing them to produce higher-quality software faster and at a lower cost.

At Keysight, our Eggplant intelligent automation platform allows citizen developers to easily use our no-code solution that draws on AI, machine learning, deep learning and analytics to automate test execution across the entire testing process. It empowers and enables domain experts to be automation engineers. The AI and ML take on scriptwriting and maintenance as a machine can create and execute thousands of tests in minutes, unlike a human tester. 

Keysight’s intelligent automation platform is a completely non-invasive testing tool, ensuring comprehensive test coverage without ever touching the source code or installing anything on the system-under-test. The technology sits outside of the application and reports on performance issues, bugs and other errors without the need to understand the underlying technology stack. This is critical for regulated industries such as healthcare, government and defense.

AI-powered automation can test any technology on any device, operating system or browser at any layer, from the UI to APIs to the database. This includes everything from the most modern, highly dynamic website to legacy back-office systems to point of sale, as well as command and control systems.

The overarching goal of Keysight’s intelligent automation is to understand how customer experiences and business outcomes are affected by the behavior of the application or software. More than this, though, it is about identifying opportunities for improvements and predicting the business impact of those changes.

Gev Hovsepyan, head of product, mabl

Software development teams are realizing that automated testing is key to accelerating product velocity and reaching the full potential of DevOps. When fully integrated into a company’s development pipeline, testing becomes an early alert system for short-term defects as well as long-term performance issues that could hurt the user experience. The key to realizing this potential: simple test creation and rich, accessible reporting features. 

Mabl is low-code, intelligent test software that allows everyone, regardless of coding experience, to create automated tests spanning web UIs, APIs, and mobile browsers with 80% less effort. Using machine learning and AI, features like auto-healing and Intelligent Wait help teams create more reliable tests and reduce overall test maintenance. Results from every test are tracked within mabl’s comprehensive suite of reporting features, making it easy to understand product quality trends. With test creation simplified and quality data at their fingertips, everyone can focus on resolving defects quickly and improving product quality. 

Mabl also includes native integrations with tools like Microsoft Teams, Slack, and Jira, so that testing information can be seamlessly integrated into workflows and everyone can benefit from mabl’s rich diagnostic data. These reporting features include immediate test results as well as long-term product trends so that quality engineering teams can support faster bug resolution and monitor their product’s overall performance and functionality. This allows software development teams to shift from reacting to failed tests and customer complaints to proactively managing product quality, enabling them to spend more time improving the customer experience.

A guide to automated testing tools
https://sdtimes.com/test/a-guide-to-automated-testing-tools-3/ (Fri, 01 Apr 2022)

The following is a listing of automated testing tool providers, along with a brief description of their offerings. 


Keysight Technologies Eggplant Digital Automation Intelligence (DAI) platform is the first AI-driven test automation solution with unique capabilities that make the testing process faster and easier. With DAI, you can automate 95% of activities, including test-case design, test execution, and results analysis. This enables teams to rapidly accelerate testing, improve the quality of software and integrate with DevOps at speed. The intelligent automation reduces time to market and ensures a consistent experience across all devices.

mabl is the enterprise SaaS leader of intelligent, low-code test automation that empowers high-velocity software teams to embed automated end-to-end tests into the entire development lifecycle. Customer-centric brands rely on mabl’s unified platform for creating, managing, and running automated tests that result in faster delivery of high-quality, business critical applications. Learn more at https://www.mabl.com; follow @mablhq on Twitter and @mabl on LinkedIn.

Parasoft: Parasoft helps organizations continuously deliver quality software with its market-proven automated software testing solutions. Parasoft’s AI-enhanced technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software with everything from deep code analysis and unit testing to web UI and API testing, plus service virtualization and merged code coverage. Bringing all this together, Parasoft’s award-winning reporting and analytics dashboard delivers a centralized view of application quality, enabling organizations to deliver with confidence.

RELATED CONTENT:
Targeting a key to automated testing
How these solution providers support automated testing

Appvance is the inventor of AI-driven autonomous testing, which is revolutionizing the $120B software QA industry. The company’s patented platform, Appvance IQ, can generate its own tests, surfacing critical bugs in minutes with limited human involvement in web and mobile applications. 

Applitools: Applitools is built to test all the elements that appear on a screen with just one line of code. Using Visual AI, you can automatically verify that your web or mobile app functions and appears correctly across all devices, all browsers and all screen sizes. Applitools automatically validates the look and feel and user experience of your apps and sites. 

Digital.ai Continuous Testing (formerly Experitest) enables organizations to reduce risk and provide their customers satisfying, error-free experiences — across all devices and browsers. Digital.ai Continuous Testing provides expansive test coverage across 2,000+ real mobile devices and web browsers, and seamlessly integrates with best-in-class tools throughout the DevOps/DevSecOps pipeline.

HPE Software’s automated testing solutions simplify software testing within fast-moving agile teams and for continuous integration scenarios. Integrated with DevOps tools and ALM solutions, HPE automated testing solutions keep quality at the center of today’s modern applications and hybrid infrastructures. 

IBM: Quality is essential and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery life cycle. IBM has a market leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more. 

Micro Focus: Accelerate test automation with one intelligent functional testing tool for web, mobile, API and enterprise apps. AI-powered intelligent test automation reduces functional test creation time and maintenance while boosting test coverage and resiliency. 

Mobile Labs (acquired by Kobiton): Its patented GigaFox is offered on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

NowSecure identifies the broadest array of security threats, compliance gaps and privacy issues in custom-developed, commercial, and business-critical mobile apps. NowSecure customers can choose automated software on-premises or in the cloud, expert professional penetration testing and managed services, or a combination of all as needed.

Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. 

Perfecto: Users can pair their favorite frameworks with Perfecto to automate advanced testing capabilities, like GPS, device conditions, audio injection, and more. With full integration into the CI/CD pipeline, continuous testing improves efficiencies across all of DevOps.  

ProdPerfect: ProdPerfect is an autonomous, end-to-end (E2E) regression testing solution that continuously identifies, builds and evolves E2E test suites via data-driven, machine-led analysis of live user behavior data. It addresses critical test coverage gaps, eliminates long test suite runtimes and costly bugs in production, and removes the QA burden that consumes massive engineering resources. 

Progress: Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production. 

Sauce Labs provides the world’s largest cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium.

SmartBear tools are built to streamline your process while seamlessly working with your existing products. Whether it’s TestComplete, Swagger, Cucumber, ReadyAPI, Zephyr, or one of our other tools, we span test automation, API life cycle, collaboration, performance testing, test management, and more. 

Synopsys: A powerful and highly configurable test automation flow provides seamless integration of all Synopsys TestMAX capabilities. Early validation of complex DFT logic is supported through full RTL integration while maintaining physical, timing and power awareness through direct links into the Synopsys Fusion Design Platform.

SOASTA’s Digital Performance Management (DPM) Platform enables measurement, testing and improvement of digital performance. It includes five technologies: TouchTest mobile functional test automation; mPulse real user monitoring (RUM); the CloudTest platform for continuous load testing; Digital Operation Center (DOC) for a unified view of contextual intelligence accessible from any device; and Data Science Workbench, simplifying analysis of current and historical web and mobile user performance data. 

Testmo: Tracking, reporting and monitoring test automation results become more important as teams invest in and scale their automation suites. The new unified test management tool Testmo was designed to manage automated, manual and exploratory testing all in one platform. To accomplish this, it also directly integrates with popular issue, DevOps and CI tools such as GitHub, GitLab and Jira. It supports submitting and collecting results from any automation tool and platform.

testRigor supports “plain English” language that allows users to describe how to find elements on the screen and what to do with those elements from the end user’s perspective. testRigor also helps teams deploy its analytics library in production, which makes systems automatically produce tests reflecting the most frequently used end-to-end flows from production. 

Tricentis Tosca, the #1 continuous test automation platform, accelerates testing with a script-less, AI-based, no-code approach for end-to-end test automation. With support for 160+ technologies and enterprise applications, Tosca provides resilient test automation for any use case. 

Targeting a key to automated testing
https://sdtimes.com/test/targeting-a-key-to-automated-testing/ (Fri, 01 Apr 2022)

Getting one’s hands on automated tests for the first time is like being given the keys to a Ferrari. And YouTube is chock-full of videos on what happens when someone gets too comfortable too soon in a Ferrari.

Automated tests are fast, but only in the direction you point them. And having a lot of them can easily cause a traffic jam, so it’s important to first make sure that they are applied in the right areas and in the right way. 

“What I want to achieve is not more and more tests. What I actually want is as few tests as I possibly can because that will minimize the maintenance effort, and still get the kind of risk coverage that I’m looking for,” said Gartner senior director Joachim Herschmann, who is on the App Design and Development team. 

RELATED CONTENT:
How these solution providers support automated testing
A guide to automated testing tools

To get started with automated testing, organizations need to first look at where their tests will deliver the most value to avoid test sprawl and to prevent high maintenance costs. 

“The warm, fuzzy feeling that you’ve got a thousand automated tests per week doesn’t really tell you anything from a risk perspective with risk-based testing,” said Arthur Hicken, the chief evangelist at Parasoft. “So I think this kind of approach to doing value-driven automation as to what’s got the most value and what kind of confidence we need, what kind of coverage we need is important.”

Organizations need to factor in what it costs to create a test and what it costs to maintain a test because often the maintenance winds up costing a lot more than the creation. 

One must also factor in what it costs to execute a test in terms of time. With big-bang releases a couple of times a year, creating tests is not such a big issue, but if a company is used to rolling out weekly updates, as with mobile apps, it’s critical to be able to narrow and focus the automation on exactly the right set of tests. 

With a value-driven test automation strategy, organizations can identify full-stack tests that only cover backend business logic and that can be tested more efficiently through API-level integration (or even unit) tests. They can also identify bottlenecks with dependencies that can be virtualized for more efficient testing and automation, according to Broadcom in a blog post.
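
One way to make “value-driven” concrete is to score candidate tests by the risk they cover relative to what they cost to maintain, then automate from the top of the list. The scoring model below is an illustrative sketch, not a prescribed formula:

```python
# Illustrative risk/cost scoring for deciding what to automate first.
# Each entry: name, business risk (1-5), failure likelihood (1-5),
# and maintenance cost (1-5); all weights are invented for the example.
tests = [
    ("checkout end-to-end", 5, 4, 4),
    ("profile page layout", 2, 2, 5),
    ("pricing calculation API", 5, 3, 1),
]

def value_score(risk, likelihood, cost):
    # Higher risk coverage per unit of maintenance cost wins.
    return (risk * likelihood) / cost

ranked = sorted(tests, key=lambda t: value_score(*t[1:]), reverse=True)
for name, *factors in ranked:
    print(f"{value_score(*factors):5.2f}  {name}")
```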

The testers might decide not to automate some tests that they thought were ideal for automation, because having them performed by testers turns out to be more efficient.

Test at the API level

One way to tackle the complexity that comes with automated testing is to test at the API level rather than the UI, according to Hicken.

UI testing, which ensures that an application is behaving the right way from the user perspective, is notoriously brittle. 

“[UI testing] is certainly the easiest way to get started in the sense that it’s easy to look at a UI and understand what you need to do like start poking things, but at some point, that becomes very hard to continue,” Hicken said. “It’s hard to make boundary cases happen or to simulate error conditions. Also, fundamentally UI testing is the hardest to debug, because you have too much context and it’s the most brittle to maintain.” 

Meanwhile at the unit level, the automated tests are pretty fast to execute and create and are easy to understand and maintain. After unit testing, one can add the simplest functional tests that they have and then go and backfill with the UI. Now, they can make sure that actual business cases and user stories occur and they can implement these tests against the business logic to get the proper blend of testing, Hicken explained.
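
For comparison, an API-level test of a business rule is typically a few lines and far easier to debug than its UI equivalent. Here is a minimal sketch using pytest and the requests library against a hypothetical pricing endpoint; the URL, payload, and discount rule are assumptions:

```python
import requests

BASE_URL = "http://localhost:8080"  # assumed local test deployment

def test_discount_applied_to_bulk_orders():
    # Exercise the business logic directly, below the UI.
    resp = requests.post(
        f"{BASE_URL}/api/quotes",
        json={"sku": "WIDGET-1", "quantity": 100},
        timeout=10,
    )
    assert resp.status_code == 200
    # Boundary cases like bulk discounts are easy to drive at this level.
    assert resp.json()["discountPercent"] == 10
```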

“It’s not really that top-down approach of ‘if I see a system, automate that system.’ It’s actually now a bottom-up focus, in which people are approaching automation at an enterprise scale and asking what’s the blueprint or pattern that we’re trying to follow,” said Jonathon Wright, the chief technology evangelist of test automation at Keysight. “It’s incredibly complex states and the devil’s in the details…they’re asking how do you test those things with realistic data rather than a happy path?” 

Wright explained that happy path testing just won’t cut it anymore because people are testing systems upstream and downstream with all the same kind of data and it all works out in the happy path kind of scenario. Even when people are doing contract testing where each one of the systems is tested end-to-end from an API perspective, people are just using one user with one account with one something and then, of course, it works. But this methodology misses the point, according to Wright. 

“Because people are testing in isolation, they’re also testing their shim or stub or their service virtualization component using Wireshark, so that they’re not actually testing against the real API. So they exclude a lot of things by just locking them out,” Wright added. 

Focus on real-user interactions

A good way to set up automated tests is to focus on how real users are interacting with the systems and how those systems actually behave in use. 

“It’s quite scary, because obviously it’s our perception of what the system does versus what the system is actually doing in the live environment and how the customers are using it. You kind of assume that they’re going to use it in a particular way, when actually the behavior will change. And that will change weekly and monthly,” Wright said. 

That’s why testers can set up a digital twin of the system as it currently is, and then overlay that with what they thought the system was. 

“There’s a different type of behavior mapping; it’s learning from the right hand side this kind of shift right to inform the shift left blueprint model of the system which I think actually helps accelerate everything because you don’t need to create an activity,” Wright added. “You can create it all from real users. You just take their exact journey and then within a matter of minutes, we can actually generate all the automation artifacts with it.”
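
Conceptually, that generation step is a transformation from recorded user events into executable test steps. The sketch below is purely illustrative; the event format and the Selenium-style steps it emits are invented for the example.

```python
# Hypothetical recorded journey: ordered events captured from a real session.
journey = [
    {"action": "visit", "target": "/login"},
    {"action": "type", "target": "#email", "value": "user@example.com"},
    {"action": "click", "target": "#submit"},
]

def to_test_steps(events):
    """Translate captured user events into executable test steps."""
    templates = {
        "visit": "driver.get(BASE_URL + '{target}')",
        "type": "driver.find_element(By.CSS_SELECTOR, '{target}').send_keys('{value}')",
        "click": "driver.find_element(By.CSS_SELECTOR, '{target}').click()",
    }
    return [templates[e["action"]].format(**e) for e in events]

print("\n".join(to_test_steps(journey)))
```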

Teams must then slice the user journeys into smaller, more meaningful pieces and automate against those smaller journeys without going too deep. It’s important that they can automate every click and not merge too many user journeys together in a single test, which results in tests that are hundreds of steps long, according to Gev Hovsepyan, the head of product at mabl. 

That initial setup of the environment proves to be an interesting discussion between quality engineers and software engineers and in the organization as a whole. “I think that initial configuration, especially when onboarding the test automation platform, becomes an important discussion point, because the way you set it up, is going to define how scalable that approach is,” Hovsepyan said. 

The role of service virtualization

The key to unlocking continuous testing is having an available, stable, and controllable test environment. Service virtualization makes it possible to simulate a wide range of constraints in test environments, whether due to unavailability or uncontrollable dependencies. 

The behaviors of various components are mimicked with mock responses that can provide an environment almost identical to a live setting. 

“Service virtualization is an automation tester’s best friend. It can help to resolve roadblocks and allow teams to focus on the tests themselves instead of worrying about whether or not they can get access to a certain environment or third-party service,” Amit Bhoraniya, the technical lead at Infostretch, wrote in a blog post.
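
In everyday test code, the same idea shows up as stubbed HTTP responses. Here is a minimal sketch using the responses library with pytest; the third-party URL and payload are assumptions:

```python
import requests
import responses

@responses.activate
def test_shipping_quote_uses_carrier_rate():
    # Mimic an unavailable third-party carrier API with a canned response.
    responses.add(
        responses.GET,
        "https://api.carrier.example.com/rates/94107",
        json={"rate": 7.25, "currency": "USD"},
        status=200,
    )
    # The code under test can now run without the real dependency.
    quote = requests.get(
        "https://api.carrier.example.com/rates/94107", timeout=5
    ).json()
    assert quote["rate"] == 7.25
```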

Organizations can also prevent having too many automated tests by having a unified platform and by ensuring quality earlier on in the pipeline. 

Companies are looking for an approach that not only helps them with functional testing, but also with non-functional testing and with scaling across different teams on a single platform, giving them visibility into product quality across teams and testing domains, according to mabl’s Hovsepyan. 

A unified approach helps because the responsibilities for testing and quality assurance are often shared within an organization, and that varies based on their DevOps maturity. 

At organizations that are more mature in their DevOps adoption, there is often a center of excellence for quality engineering, which deploys the practices; then everyone in the organization participates in assuring quality, including engineers and developers.

Organizations that are still early in or midway through their DevOps adoption keep a significant amount of ownership of quality assurance and quality automation at the team level. These teams have added quality engineers, who are responsible for ensuring quality through automation as well as manual testing. 

This collaborative approach to test automation can help ensure that developers and testers both know how these tests should be created and maintained. 

“Test automation is one of those things that when it’s done right it’s a huge enabler and can really give your business a boost,” Hicken said. “And when it’s done wrong, it’s an absolute nightmare.”

AI can help with test creation and maintenance

The introduction of AI and ML assistance into automated testing makes it easier to shift quality left by enabling earlier defect remediation and reducing delivery risk. 

By collecting and incorporating test data, machine learning can effectively update and interpret certain software metrics that show the state of the application under test. Machine learning can also quickly gather information from large amounts of data and point developers or testers right to the performance problem. 
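
At its simplest, that kind of analysis amounts to flagging statistical outliers in a test metric. The toy sketch below flags unusual response times across recent runs; production tools use far richer models, and the numbers here are invented:

```python
from statistics import mean, stdev

# Response times (ms) collected across recent test runs (sample data).
samples = [212, 198, 205, 220, 201, 199, 208, 612, 203, 210]

def outliers(values, threshold=2.0):
    mu, sigma = mean(values), stdev(values)
    # Flag values more than `threshold` standard deviations from the mean.
    return [v for v in values if abs(v - mu) > threshold * sigma]

print(outliers(samples))  # the 612 ms run stands out for investigation
```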

AI is also excellent at finding those one-in-a-million anomalies which testers might just not catch, according to Jonathon Wright, chief technology evangelist at testing company Keysight. 

In the blog “What is Artificial Intelligence in Software Testing?,” Igor Kirilenko, Parasoft’s VP of Development, explains that these AI capabilities “can review the current state of test status, recent code changes, code coverage, and other metrics, decide which tests to run, and then run them,” while machine learning (ML) “can augment the AI by applying algorithms that allow the tool to improve automatically by collecting the copious amounts of data produced by testing.”

By 2025, 70% of enterprises will have implemented an active use of AI-augmented testing, up from 5% in 2021, according to Gartner’s “Market Guide for AI-Augmented Software Testing Tools.” Also by 2025, organizations that ignore the opportunity to utilize AI-augmented testing will spend twice as much effort on testing and defect remediation compared with their competitors that take advantage of AI. 

AI-augmented software testing tools can provide capabilities for test case and test data generation, test suite optimization and coverage detection, test efficacy and robustness, and much more. 

“AI can change the game here, because even in the decades that we’ve had test automation tools, there’s very little that it offered you regarding any guidance like how do I determine the test cases that I need?” Herschmann said. 

Disrupting the economics of software testing through AI
https://sdtimes.com/test/disrupting-the-economics-of-software-testing-through-ai/ (Fri, 14 Jan 2022)

EMA (Enterprise Management Associates) recently released a report titled “Disrupting the Economics of Software Testing Through AI.” In this report, author Torsten Volk, managing research director at EMA, discusses the reasons why traditional approaches to software quality cannot scale to meet the needs of modern software delivery. He highlights five key categories of AI and six critical pain points of test automation that AI addresses. 

We sat down with Torsten and talked about the report and his insights into the impact that AI is having on software testing:

Q: What’s wrong with the current state of testing? Why do we need AI?

Organizations reliant upon traditional testing tools and techniques cannot scale to today’s digital demands and are quickly falling behind their competitors. Given increasing application complexity and time-to-market pressure from the business, it is difficult for software delivery teams to keep up. There is a growing need to optimize the process with AI, to root out mundane and repetitive tasks and to rein in costs of quality that have gotten out of control.

Q: How can AI help and with what?

There are five key capabilities where AI can help: smart crawling/Natural Language Processing (NLP)-driven test creation, self-healing, coverage detection, anomaly detection, and visual inspection. The report highlights six critical pain points where these capabilities can help: false positives, test maintenance, inefficient feedback loops, rising application complexity, device sprawl, and tool chain complexity.

Leading organizations have already adopted some level of self-healing and AI-driven test creation, but by far the most impactful capability is Visual Inspection (or Visual AI), which provides complete and accurate coverage of the user experience. It is able to learn and adapt to new situations without the need to write and maintain code-based rules. 

Q: Are people adopting AI?

Yes, AI adoption is on the rise for many reasons, but for me, it’s not that people are not adopting AI – they’re adopting the technical capabilities that are based on AI. For example, people want the ability to do NLP-based test automation for a specific use case. People are more interested in the ROI gained from the speed and scalability of leveraging AI in the development process, and not necessarily how the sausage is being made.

Q: How does the role of the developer / tester change with the implementation of AI?

When you look at test automation, developers and testers need to decide what belongs under test automation and how it is categorized, for example. Then all you need to do is set the framework for the AI to operate in and provide it with feedback to continuously enhance its performance over time.

Once this happens, developers and testers are freed up to do more creative, interesting and valuable work by eliminating the toil of mundane or repetitive work – the work that isn’t valuable in and of itself but has to be done correctly every time. 

For example, reviewing thousands of webpage renderings. Some of them have little differences, but they don’t matter. If I can have the machine filter out all of the ones that don’t matter and just highlight the few that may or may not be a defect, I’ve now cut my work down from thousands to a very small handful. 

Auto-classification is a great example of being able to reduce your work. If you’re reducing repetitive work, it means you don’t miss things. If I’m looking at what appears to be the same page each time, I might miss something. But if I can have the AI tell me that this one page is slightly different from the others I’ve been looking at, and why, it eliminates repetitive, mundane tasks and reduces the possibility of error-prone outcomes.
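
A stripped-down version of that filtering step, comparing each rendering against a baseline and surfacing only meaningful differences, might look like the sketch below using the Pillow imaging library. The per-pixel tolerance is an assumption; real visual AI is far more sophisticated than a pixel diff.

```python
from PIL import Image, ImageChops

def differs_meaningfully(baseline_path, candidate_path, tolerance=12):
    """Return True only if the candidate rendering differs from the
    baseline by more than a small per-pixel tolerance."""
    baseline = Image.open(baseline_path).convert("RGB")
    candidate = Image.open(candidate_path).convert("RGB")
    if baseline.size != candidate.size:
        return True
    diff = ImageChops.difference(baseline, candidate)
    # getextrema() yields a (min, max) pair per channel; ignore tiny
    # deltas from anti-aliasing, flag anything larger for human review.
    return any(mx > tolerance for _, mx in diff.getextrema())
```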

Q: Do I need to hire AI experts or develop an internal AI practice?

The short answer is no. There are lots of vendor solutions available that give you the ability to take advantage of the AI, machine learning and training data already in place.

If you want to implement AI yourself, then you actually need people with two sets of domain knowledge: first, the domain in which you want to apply AI, and second, a deep understanding of what is possible with AI and how you can chain those capabilities together. Oftentimes, that is too expensive and too rare.

If your core deliverable is not the AI itself but the ROI that the AI can deliver, then it’s much better to find a tool or service that can do it for you and allow you to focus on your domain expertise. This will make life much easier, because there will be a lot more people in a company who understand that domain and just a small handful who understand AI.

Q: You talk about the Visual Inspection capability being the highest impact – how does that help?

Training deep learning models to inspect an application through the eyes of the end user is critical to removing a lot of the mundane repetitive tasks that cause humans to be inefficient. 

Smart crawling, self-healing, anomaly detection, and coverage detection are each point solutions that help organizations lower their risk of blind spots while decreasing human workload. But visual inspection goes even further by aiming to understand application workflows and business requirements.

Q: Where should I start today? Can I integrate AI into my existing Test Automation practice?

Yes. Applitools Visual AI is one example of AI that can be integrated into an existing test automation practice.

Q: What’s the future state?

Autonomous testing is the vision for the future, but we have to ask ourselves: why don’t we have an autonomous car yet? It’s because today we’re still chaining together models and models of models. Ultimately, where we’re striving to get to is AI taking care of all of the tactical and repetitive decisions, with humans thinking more strategically at the end of the process, where they are more valuable from a business-focused perspective.

Thanks to Torsten for spending the time with us. If you are interested in reading the full report, visit http://applitools.info/sdtimes.

How transformation works in practice
https://sdtimes.com/ab-testing/how-transformation-works-in-practice/ (Tue, 10 Aug 2021)

Transformations take time. People think you can bring in a tech transformation coach, change everybody’s job title, and get them on a quick “sheep dip” of a certified scrum team member. But in organizations, transformation happens incrementally. The good news is that the benefits of transformation can start to be delivered straight away.

The goal is to find a way of producing the software that’s needed at an affordable cost with high enough quality that it stays useful over time. This is where Behavior-driven development (BDD) and automated testing and quality practices come in.

Go Fast, Start Slow

One challenge to achieving quality at speed is that people want to dive into a project without knowing what they need to do. Discovery, the first practice of BDD, helps you work in a more effective way and focus on the most important aspects of your project. Discovery ensures that we don’t start doing something and then say, “Oh, I didn’t think about that,” or, “I misunderstood what I was being asked to do, so I need to throw away what I just did and start again.”

Discovery builds on the Agile techniques of deferring detailed requirements planning until the last responsible moment. Essentially, Discovery lets us slice our user stories into the smallest practical increments, and then study those in detail to figure out how much work each of them requires. We then prioritize, which means we might only work on a few of them.

Discovery Accelerates Quality

Someone might object to this process because if you take the time to break everything down before you start, that doesn’t seem speedy enough. But in reality, you work in a far more efficient manner after completing these steps. You begin the project by cutting out the stuff you don’t need to do, before you waste any time doing it. By prioritizing the most important features and reaching a shared understanding of them, you maximize the amount of work you don’t need to spend time on. 

So, you do less work. More importantly, you do the right work.

Customers often come to BDD because they’re looking to automate their testing so they can release more quickly and with higher quality. But automation alone yields no return on investment. In fact, it carries a cost, because that level of automation is time-consuming to write, hard to debug, and costly to maintain. 

With BDD, on the other hand, you start with Discovery, which means you get to a shared understanding. You prioritize just what you need and no more. You formulate it in business language, so it has meaning to everyone who understands the business domain, and the automation gives you the opportunity to increase throughput. By applying BDD within an Agile context, you get efficiency, throughput, and quality.

Achieve Quality in Spite of Risk

When delivering software at speed it’s not enough to develop the required software. You need the confidence that the quality level is compatible with your risk appetite. This is where the secondary output of BDD comes in: the automated tests.

Different businesses have different risk profiles. It’s not a disaster if a pizza order goes missing, but if your business is subject to governmental health regulations, getting a small thing wrong means people might die. We need to understand our risk profile and make sure we’ve got processes in place to ensure the software we deliver matches that risk profile. Automated testing can be an important part of that. BDD gives you automated acceptance tests that verify the software behaves correctly and delivers the functionality required by the business.
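
To give a flavor of what such an automated acceptance test looks like, here is a minimal sketch using Gherkin with pytest-bdd, one BDD framework among several; the pizza scenario and step bindings are invented for illustration.

```python
# orders.feature (plain business language, readable by non-programmers):
#   Feature: Pizza ordering
#     Scenario: A lost order is flagged for follow-up
#       Given an order placed 45 minutes ago
#       When the order has no delivery confirmation
#       Then it appears in the follow-up queue

from pytest_bdd import scenario, given, when, then

@scenario("orders.feature", "A lost order is flagged for follow-up")
def test_lost_order():
    pass

@given("an order placed 45 minutes ago", target_fixture="order")
def order_placed():
    return {"id": 123, "age_minutes": 45, "confirmed": False}

@when("the order has no delivery confirmation")
def no_confirmation(order):
    assert not order["confirmed"]

@then("it appears in the follow-up queue")
def in_follow_up_queue(order):
    # In a real suite this step would query the application under test.
    assert order["age_minutes"] > 30 and not order["confirmed"]
```

Because each step is bound to plain business language, the same scenario doubles as living documentation for the pizza-ordering example above.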

Start With Documentation

Everyone who’s ever put together a piece of flat pack furniture, or bought electronic goods off the internet, knows that the instructions often don’t appear to relate to the device that you’ve been shipped. Anyone who works in software has dealt with documentation that is clearly incorrect. It may have been correct once, but it’s not correct anymore. 

With BDD, because you’re specifying requirements in business language, those specifications are the documentation. Tools can automate that documentation, so you immediately see when the documentation and the system implementation diverge. They may diverge because someone has introduced a defect, or because the documentation is out of date. Whatever the reason, you’re notified automatically. Then you can act, rather than having to proactively schedule time every week, or every release, to review the documentation and work out whether it’s still correct.
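
Continuing the hypothetical behave sketch from earlier, this is what that automatic notification looks like in practice: a behavior change makes the executable specification fail on its next run.

# Suppose a release changes the behavior and the API quietly stops
# returning an ETA:
def place_order(customer, item):
    return {"status": "confirmed", "item": item}  # "eta_minutes" dropped

# On the next run, the step behind "Then an estimated delivery time is
# shown" raises an AssertionError: the specification and the system have
# diverged, and you find out automatically rather than during a scheduled
# documentation review.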

End with the Language that Everyone Uses

Industries that deal with external regulators can particularly benefit from BDD, because the specifications are written in business language. Those specifications, directly automated, verify that the software behaves as expected. Running them is also helpful to non-technical people: because everything is written in business language, they can see which scenarios are being checked and the outcome of each one.

The regulatory authorities are thrilled to get this in business language, so they don’t have to go through hideous spreadsheets. There are potential time and cost savings for customers who adopt business language and the tools that support BDD.

The post How transformation works in practice appeared first on SD Times.

Guest View: Use hackathons to validate your product https://sdtimes.com/test/guest-view-use-hackathons-to-validate-your-product/ Fri, 07 May 2021 16:37:14 +0000

You think you have a great product. Your product manager thinks you have a great product. Your developers think they have created a great product. The question is – how do you prove this before you send it out to your alpha and beta testers for real-world feedback? 

To answer that question, we recommend the multistage hackathon approach to ensure product-market fit and usability. Multistage hackathons can start earlier than the “final product” stage, which gets you more useful feedback sooner. And while the “final product” stage is not as well defined in these days of agile development and CI/CD, we’re defining “final product” as something that is generally agreed to be ready for market launch.

Using a series of hackathons makes it easier to verify that you are solving the customer problem you intended to solve. What you think you accomplished in the lab doesn’t always hold up in the real world. Use hackathons to inject a bit of the “real world” into the development process.

You want to have at least three hackathons for three main reasons: 1) You won’t catch everyone in a given day. 2) You won’t catch everything in a given day. 3) You need time to iterate and incorporate feedback. 

Individual preparation

Hackathon #1 needs to focus on the use-case level. For example, you want someone to test a car by driving to a specific location. During hackathon #1, you give them GPS and detailed instructions.

For Hackathon #2, the task is the same, but instead of GPS and instructions, you give them a road atlas and some verbal directions. Hackathon #2 is more of a guided, end-to-end test.

Hackathon #3 is a true, open-ended usability test. Hand them the car keys and tell them to get to the destination. The goal of hackathon #3 is to determine whether, without any specific guidance, the user can easily achieve the objective using the product. This allows them to spend more time exploring and comprehensively stress-testing the application.

Tasks for all hackathons

The hackathon management team needs real-time visibility into what people are doing, either by recording the sessions or, once in-person hackathons come back, via “feet on the ground.” The managers should anticipate and prepare for questions related to the hackathon tasks, but should also hold back guidance so they don’t interfere with the process they are trying to test.

For all hackathons, prepare a way to measure results. Results come in two flavors: supervised and unsupervised metrics. Unsupervised metrics are basic system metrics, such as request latency and error rates. Supervised metrics include data collected from the participants as well as more qualitative feedback: time to complete each step, individual videos of use-case execution, comments, complaints, and exit interviews.
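
As a concrete illustration, here is a minimal sketch of computing the unsupervised system metrics described above from a request log; the log format and the numbers are illustrative assumptions.

import statistics

# Each entry: (latency in milliseconds, HTTP status code)
request_log = [(120, 200), (95, 200), (340, 500), (110, 200), (205, 404)]

latencies = sorted(latency for latency, _ in request_log)
server_errors = sum(1 for _, status in request_log if status >= 500)

print(f"median latency: {statistics.median(latencies)} ms")
# Nearest-rank approximation of the 95th percentile.
print(f"p95 latency: {latencies[int(0.95 * (len(latencies) - 1))]} ms")
print(f"error rate: {server_errors / len(request_log):.1%}")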

Hackathon #1

The first hackathon should be small. Consider hackathon #1 to be your initial product focus group. You may have the most amazing back-end technology, but that’s pretty useless if no one can leverage it. The focus of the first hackathon should be usability.

The task should provide a “sample” of what the participants should expect to accomplish at the end. Can they get there? Is the product easy to use? Difficult? Was a user able to achieve what the UX manager set out to do?

Hackathon #2

The second hackathon needs to consist of a large crowd, the bigger the better. Again, make it simple by asking them to accomplish a specific task, but more complex than the first one. One goal of the second hackathon is to test performance. If it slows down when only an internal group is using it, degradation of performance will be an even greater issue when it’s being used by the “general” public.
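
Even a crude load check during hackathon #2 can surface that degradation early. Here is a minimal sketch using only the Python standard library; the endpoint URL and the concurrency numbers are illustrative assumptions.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://app.example.com/health"  # hypothetical endpoint

def timed_request(_):
    # Time one request and record whether it succeeded.
    start = time.perf_counter()
    try:
        urlopen(URL, timeout=10).read()
        ok = True
    except Exception:
        ok = False
    return time.perf_counter() - start, ok

# Simulate 200 requests from 50 concurrent participants.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(timed_request, range(200)))

slowest = max(latency for latency, _ in results)
failures = sum(1 for _, ok in results if not ok)
print(f"slowest request: {slowest:.2f}s, failures: {failures}/200")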

Hackathon #3

Outcomes are tested during hackathon #3. Instead of assigning a single task, the hackathon manager needs to provide a series of objectives without specifying in detail what the end products should look like. The results then need to be examined to make sure the teams could accomplish the individual objectives.

Post-hackathon analyses

While the hackathon itself readily yields supervised metrics, the most revealing measurements come after the hackathon is over.

How useful is the product over the long term? While some software is completely unique, with no other options on the market, most applications have alternatives. Once the hackathon is over, the product development team needs to track usage. Did the participants continue to use the product once the hackathon was over? Is it delivering results for them? Or did they use it for the hackathon and never log in again?
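
A simple way to answer those questions is to compute a retention figure from post-hackathon login events. Here is a minimal sketch; the participants, the dates, and the 30-day window are illustrative assumptions.

from datetime import date

hackathon_end = date(2021, 5, 7)  # hypothetical end date

# participant -> dates they logged in after the hackathon
logins = {
    "alice": [date(2021, 5, 10), date(2021, 5, 24)],
    "bob": [],
    "carol": [date(2021, 6, 20)],
}

# Retained: anyone who logged in within 30 days of the hackathon ending.
retained = [user for user, dates in logins.items()
            if any(0 < (d - hackathon_end).days <= 30 for d in dates)]
print(f"30-day retention: {len(retained)}/{len(logins)} participants")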

Each member of the development team is working on a specific task, in a silo, during the product development process. With the hackathon, they get the opportunity to see what their peers have accomplished and get introduced to the big picture, the full end result of their work.

While the hackathons help drive success for individual products, having hackathons as a regular part of the product stress testing reinforces the big picture to the entire team.

The post Guest View: Use hackathons to validate your product appeared first on SD Times.

SmartBear expands support of codeless, automated testing for mobile and ERP applications https://sdtimes.com/test/smartbear-expands-support-of-codeless-automated-testing-for-mobile-and-erp-applications/ Thu, 22 Apr 2021 17:30:33 +0000

SmartBear, a leading provider of software development and quality tools, has integrated TestComplete, its UI test automation tool, with BitBar, its native mobile device cloud. TestComplete users can now create codeless mobile tests and then run them in BitBar across devices. Additionally, TestComplete increases support for testing enterprise applications like Salesforce, Oracle EBS, and SAP. As businesses continue to accelerate digital transformation, this new version of the company’s test automation tool helps ensure that web and mobile apps work as expected across devices and that critical business applications remain available.

“The DevOps motion is truly underway, and testing can no longer be a bottleneck,” said Prashant Mohan, Senior Product Manager at SmartBear. “Whether you are a developer, tester, or business analyst, you need to test, and you need to do it quickly. By scaling tests across several browsers and devices in a matter of clicks or testing complex applications such as SAP and Salesforce, TestComplete provides a complete platform for automated testing of every application type, leading the industry in breadth of capabilities.”

Every company is now an ecommerce company. With the proliferation of web and mobile applications, the TestComplete BitBar integration, along with the existing TestComplete CrossBrowserTesting integration, adds considerable scale and ensures quality across all platforms and devices. Non-technical users and citizen testers can now use TestComplete to access device labs in the cloud, making them more efficient and able to do more with the time they have for testing.

The growth of enterprise applications makes test automation necessary for efficient and timely deployment. With the new BitBar and CrossBrowserTesting integrations, business analysts can ensure that mission-critical business applications like SAP and Salesforce work as expected across all browsers and devices.

TestComplete offers breadth and depth of support, and delivers a seamless experience for testing web, mobile, and desktop applications. For more information on the new release of TestComplete, go to: https://smartbear.com/product/testcomplete/overview/.

The post SmartBear expands support of codeless, automated testing for mobile and ERP applications appeared first on SD Times.

Report: 90% of organizations are implementing test automation https://sdtimes.com/test/report-90-of-organizations-are-implementing-test-automation/ Fri, 09 Apr 2021 16:08:03 +0000

Automation is becoming increasingly tied to the testing process. According to PractiTest’s recently released State of Testing report, 90% of organizations incorporate test automation into their processes.

Ninety-seven percent of respondents said that functional testing automation was important for success, and 96% said test automation patterns, principles, and practices were also critical. 

This automation isn’t necessarily leading to shrinking test teams. In fact, the number of companies that have test teams of 16 or more people grew by 10% in 2020, bringing the total to 34%. According to PractiTest, this indicates that companies are becoming more reliant on their testing teams and are investing in their growth. 

The report also found that while 59% of testing teams are shifting left, about 40% are shifting right, implementing practices such as testing in production or chaos engineering. PractiTest noted that although a large number of companies are shifting right, the practice is trending downward, which is at odds with the increase in chaos engineering.

PractiTest also looked into how COVID-19 affected testers. It found that 71% didn’t report any income changes as a result of the pandemic, but the past year did change how they learn new skills: attendance at online conferences, meetups, and seminars rose from 40.5% to 49.5%, the use of online communities grew from 32.5% to 44%, and the use of formal training dropped by 16%.

“Testing still seems strong and it looks like we are in our way to increasing the value we provide in our teams by perfecting the operations we perform day to day, and also by expanding towards additional areas of the process where the increase of visibility and faster understanding of issues arising can become critical,” PractiTest wrote in the State of Testing report. 

The post Report: 90% of organizations are implementing test automation appeared first on SD Times.
