software testing Archives - SD Times
https://sdtimes.com/tag/software-testing/

Report: 85% of CEOs may not be properly testing software before its release
https://sdtimes.com/software-testing/report-85-of-ceos-may-not-be-properly-testing-software-before-its-release/
Thu, 30 Jun 2022

New research released by the no-code software test automation company Leapwork has revealed that 85% of U.S. CEOs do not see a problem with releasing software that has not been properly tested, so long as it is patch tested later. On top of this, 79% of testers reported that 40% of software is sent to market without sufficient testing being completed.  

Ultimately, this has led to 52% of testers claiming that their teams spend 5 to 10 days per year patching software. 

The report also showed that despite the majority of testers expressing concern that insufficiently tested software is going to market, 94% of CEOs still say that they are confident that their software is tested regularly.

Additionally, 95% of CEOs and 76% of testers surveyed reported concerns about losing their jobs in the wake of a software failure. Both groups also agreed that insufficiently tested software poses a risk to the company as a whole, with 77% of CEOs saying that software failures have harmed their company’s reputation in the last 5 years.

“Our research shows the widespread issues that exist in software testing today. While CEOs and testers understand the consequences of releasing software that hasn’t been tested properly, an alarming number still think it’s acceptable to issue it and prefer to rely on patch testing afterwards to fix any problems,” said Christian Brink Frederiksen, co-founder and CEO at Leapwork. “This often comes down to not thinking there is a viable option and choosing speed over stability – a devil’s dilemma. But what’s more concerning is the disconnect between CEOs and their developer teams, indicating that testing issues are falling under the radar and not being escalated until it’s too late.”

When asked why software was not being properly tested before it was released, 39% of CEOs cited ‘reliance on manual testing’ as the main reason. Many testers, however, blamed a failure to invest in test automation, with only 43% saying that they use some element of automation.

Testers also pointed to a lack of time (34%) and an inability to test all software because of the increased frequency of development (29%).

CEOs and testers both cited a lack of skilled developers as a key issue, at 34% and 42% respectively.

Lastly, more than one third of CEOs said that the ‘underinvestment in testing personnel including continuous professional development’ is the main reason why software is not tested properly.

“We’ve seen the implications of huge software failures in the news, so on the current trajectory, more and more companies will struggle with failures and outages which could cost them a significant amount in financial and reputational damage. Businesses need to urgently consider a different approach and embrace no code test automation systems that don’t require coding skills and free up their skilled teams to focus on the most high-value tasks,” Frederiksen said.

 

Software test automation for the survival of business
https://sdtimes.com/test/software-test-automation-for-the-survival-of-business/
Tue, 06 Jul 2021

In this two-part series, we explore the two sides of testing: automated and manual. In this article, we examine why automated testing should be done. To read the other side of the argument, go here.

In today’s business environment, stakeholders rely on their enterprise applications to work quickly and efficiently, with absolutely no downtime. Anything short of that could result in a slew of business performance issues and ultimately lost revenue. Take the recent incident in which CDN provider Fastly failed to detect a software bug which resulted in massive global outages for government agencies, news outlets and other vital institutions. 

Effective and thorough testing is mission-critical for software development across categories including business software, consumer applications and IoT solutions. But as continuous deployment demands ramp up and companies face an ongoing tech talent shortage, inefficient software testing has become a serious pain point for enterprise developers, and they’ve needed to rely on new technologies to improve the process.

The Benefits of Test Automation

As with many other disciplines, the key to quickly implementing continuous software development and deployment is robust automation. Converting manual tests to automated tests not only reduces the time it takes to test, it also reduces the chance of human error and minimizes the number of defects that escape into production. Just by converting manual testing to automated testing, companies can reduce three to four days of manual testing time to a single eight-hour overnight session, which means testing does not even have to run during peak usage hours.

Automation solutions also allow organizations to test more per cycle in less time by running tests across distributed functional testing infrastructures and in parallel with cross-browser and cross-device mobile testing. Furthermore, if a team lacks mobile devices to test on, it can leverage solutions to enable devices and emulators to be controlled through an enterprise-wide mobile lab manager.
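As a rough illustration of cross-browser execution in parallel, the sketch below parameterizes one Selenium check across several browsers against a remote grid; the grid URL and application URL are placeholders, and a runner such as pytest-xdist (for example, "pytest -n 3") would execute the cases concurrently.

```python
# Minimal sketch: one test parameterized across browsers, run against a remote
# Selenium Grid. The grid URL and application URL are hypothetical placeholders.
import pytest
from selenium import webdriver

GRID_URL = "http://selenium-grid.example.com:4444/wd/hub"  # assumed grid endpoint

@pytest.mark.parametrize("browser", ["chrome", "firefox", "edge"])
def test_homepage_title(browser):
    options = {
        "chrome": webdriver.ChromeOptions,
        "firefox": webdriver.FirefoxOptions,
        "edge": webdriver.EdgeOptions,
    }[browser]()
    driver = webdriver.Remote(command_executor=GRID_URL, options=options)
    try:
        driver.get("https://app-under-test.example.com")  # hypothetical application
        assert "Example App" in driver.title              # assumed page title
    finally:
        driver.quit()
```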

Challenges in Test Automation

Despite all the benefits of automated software testing, many companies are still facing challenges that prevent them from reaping the full benefits of automation. One of those key challenges is managing the complexities of today’s software testing environment, with an increasing pace of releases and proliferation of platforms on which applications need to run (native Android, native iOS, mobile browsers, desktop browsers, etc.). With so many conflicting specifications and platform-specific features, there are many more requirements for automated testing – meaning there are just as many potential pitfalls.

Software releases and application upgrades are also happening at a much quicker pace than in years past. The faster rollout of software releases, while necessary, can break test automation scripts due to fragile, properties-based object identification or, even worse, bitmap-based identification. Because properties vary across platforms, tests must be properly replicated and administered on each platform – which can take immense time and effort.
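To make that fragility concrete, here is a small, illustrative comparison (the page and attribute names are hypothetical) of a layout-dependent XPath locator versus a more stable, attribute-based locator in Selenium:

```python
# Illustrative only: two ways to locate the same login button on a hypothetical page.
from selenium.webdriver.common.by import By

# Brittle: depends on the exact DOM layout and breaks when the page structure shifts.
BRITTLE_LOGIN_BUTTON = (By.XPATH, "/html/body/div[2]/div/form/div[3]/button[1]")

# More robust: keys off a stable, purpose-built attribute that survives redesigns.
ROBUST_LOGIN_BUTTON = (By.CSS_SELECTOR, "[data-testid='login-button']")

def click_login(driver):
    # Prefer the attribute-based locator; the absolute XPath above is the kind of
    # selector that tends to break with every release.
    driver.find_element(*ROBUST_LOGIN_BUTTON).click()
```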

Robust and effective test automation therefore also requires an elevated skill set, especially in today’s complex, multi-ecosystem application environment. Record-and-playback testing, in which a tool records a tester’s interactions and replays them many times over, is no longer sufficient.

With all of these challenges to navigate, including how difficult it can be to find the right talent, how can companies increase release frequency without sacrificing quality and security?

Ensuring Robust Automation with Artificial Intelligence

To meet the high demands of software testing, automation must be coupled with Artificial Intelligence (AI). Truly robust automation must be resilient, and it must not depend on completed product code before it can be created. It must be well-integrated into an organization’s product pipelines, adequately data-driven and in full alignment with the business logic.

Organizations can allow quality assurance teams to begin testing earlier – even in the mock-up phase – through the use of AI-enabled capabilities for the creation of a single script that will automatically execute on multiple platforms, devices and browsers. With AI alone, companies can experience major increases in test design speed as well as significant decreases in maintenance costs.

Furthermore, with the proliferation of low-code/no-code solutions, AI-infused test automation is even more critical for ensuring product quality. Solutions that infuse AI object recognition can enable test automation to be created from mockups, facilitating test automation in the pipeline even before product code has been generated or configured. These systems can provide immediate feedback once products are initially released into their first environments, providing for more resilient, successful software releases.

To remain competitive, all businesses need to be as productive and efficient as possible, and the key to that lies in properly tested, functioning, performant enterprise applications. Cumbersome manual testing is no longer sufficient, and enterprises that continue to rely on it will be caught flat-footed, outperformed and out-innovated. Investing in automation and AI-powered development tools will give enterprises the edge they need to stay ahead of the competition.

The Open Testing Platform
https://sdtimes.com/test/the-open-testing-platform/
Wed, 13 Jan 2021

This is a rather unique time in the evolution of software testing.  Teams worldwide are facing new challenges associated with working from home. Digital transformation initiatives are placing unprecedented pressure on innovation.  Speed is the new currency for software development and testing. The penalty for software failure is at an all-time high as news of outages and end-user frustration go viral on social media. Open-source point tools are good at steering interfaces but are not a complete solution for test automation.

Meanwhile, testers are being asked to do more while reducing costs.

Now is the time to re-think the software testing life cycle with an eye towards more comprehensive automation. Testing organizations need a platform that enables incremental process improvement, and data curated for the purpose of optimizing software testing must be at the center of this solution. Organizations that leverage multiple open-source or proprietary testing tools must consider an Open Testing Platform to keep pace with Agile and enterprise DevOps initiatives.   

What is an Open Testing Platform?
An Open Testing Platform (OTP) is a collaboration hub that helps testers keep pace with change. It transforms observations into action – enabling organizations to inform testers about critical environment and system changes, act upon observations to zero in on precisely what needs to be tested, and automate the acquisition of test data required for effective test coverage.

The most important feature of an Open Testing Platform is that it taps essential information across the application development and delivery ecosystem to effectively test software. Beyond accessing an API, an OTP leverages an organization’s existing infrastructure tools without causing disruption—unlocking valuable data across the infrastructure. An OTP allows any tester (technical or non-technical) to access data, correlate observations and automate action. 

Model in the middle
At the core of an Open Testing Platform is a model. The model is an abstracted representation of the transactions that are strategic to the business. The model can represent new user stories that are in-flight, system transactions that are critical for business continuity, and flows that are pivotal for the end-user experience.

In an OTP, the model is also the centerpiece for collaboration. All tasks and data observations either optimize the value of the model or ensure that the tests generated from the model can execute without interruption.  Since an OTP is focused on the software testing life cycle, we can take advantage of known usage patterns and create workflows to accelerate testing. For example, with a stable model at the core of the testing activity:

  •   The impact of change is visualized and shared across teams
  •   The demand for test data is established by the model and reused for team members
  •   The validation data sets are fit to the logic identified by the model
  •   The prioritization of test runs can dynamically fit the stage of the process for each team, optimizing for vectors such as speed, change, business-risk, maintenance, etc.

Models allow teams to identify critical change impacts quickly and visually. And since models express test logic abstracted from independent applications or services, they also provide context to help testers collaborate across team boundaries.
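Concretely, a ‘model in the middle’ can be pictured as a small directed graph of business transactions from which test paths are derived. The sketch below is a generic, tool-agnostic illustration; the states, transitions and the greedy edge-coverage walk are invented for the example and are not how any particular platform works.

```python
# Toy flow model (states and transitions are hypothetical) plus a simple walk
# that derives test paths covering every transition at least once.
MODEL = {
    "start":      [("open_login", "login_page")],
    "login_page": [("valid_login", "dashboard"), ("invalid_login", "login_page")],
    "dashboard":  [("open_orders", "orders"), ("logout", "start")],
    "orders":     [("logout", "start")],
}

def edge_coverage_paths(model, start="start", max_len=10):
    """Greedy walks from `start` that prefer unvisited transitions, repeated
    until every transition has appeared in at least one generated path."""
    unvisited = {(s, a) for s, edges in model.items() for a, _ in edges}
    while unvisited:
        remaining_before = len(unvisited)
        path, state = [], start
        for _ in range(max_len):
            edges = model.get(state, [])
            if not edges:
                break
            # Prefer an unvisited transition; otherwise reuse one to keep moving.
            fresh = [(a, t) for a, t in edges if (state, a) in unvisited]
            action, target = (fresh or edges)[0]
            unvisited.discard((state, action))
            path.append(action)
            state = target
            if not unvisited:
                break
        yield path
        if len(unvisited) == remaining_before:
            break  # remaining transitions unreachable within max_len steps

for test_path in edge_coverage_paths(MODEL):
    print(" -> ".join(test_path))
```

Each generated path is a candidate test; change impact can then be assessed by marking which transitions a code or requirement change touches.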

Data curated for testing software
Automation must be driven by data. An infrastructure that can access real-time observations as well as reference a historical baseline is required to understand the impact of change. Accessing data within the software testing life cycle does not have to be intrusive or depend on a complex array of proprietary agents deployed across an environment. In an overwhelming majority of use cases, accessing data via an API provides enough depth and detail to achieve significant productivity gains.  Furthermore, accessing data via an API from the current monitoring or management infrastructure systems eliminates the need for additional scripts or code that require maintenance and interfere with overall system performance.
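As a hedged illustration of pulling such data over an API, the sketch below polls a hypothetical monitoring endpoint for services that changed since the last test run; the URL, token and response fields are assumptions rather than any specific vendor’s API.

```python
# Sketch only: pull "what changed" signals from a monitoring/APM REST API.
# The endpoint, auth header and JSON fields are hypothetical placeholders.
import requests

MONITORING_API = "https://monitoring.example.com/api/v1/services"
TOKEN = "REPLACE_ME"  # assumed bearer token

def changed_services(since_iso8601: str) -> list:
    response = requests.get(
        MONITORING_API,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"changed_since": since_iso8601},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed shape: [{"name": "...", "error_rate": 0.02, "last_deploy": "..."}]
    return response.json()

if __name__ == "__main__":
    for service in changed_services("2022-06-01T00:00:00Z"):
        print(f"{service['name']}: error_rate={service['error_rate']}")
```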

 Many of the data points required to optimize the process of testing exist, but they are scattered across an array of monitoring and infrastructure management tools such as Application Performance Monitoring (APM), Version Control, Agile Requirements Management, Test Management, Web Analytics, Defect Management, API Management, etc.

An Open Testing Platform curates data for software testing by applying known patterns and machine learning to expose change. This new learning system turns observations into action to improve the effectiveness of testing and accelerate release cycles. 

Why is an Open Testing Platform required today?
Despite industry leaders trying to position software testing as a value-add, the fact is that an overwhelming majority of organizations identify testing as a cost center. The software testing life cycle is a rich target for automation, since any costs eliminated from testing can be redirected toward more innovative initiatives.

If you look at industry trends in automation for software testing, automating test case development hovers around 30%.  If you assess the level of automation across all facets of the software testing life cycle, then automation averages about 20%.  This low average automation rate highlights that testing still requires a high degree of manual intervention which slows the software testing process and therefore delays software release cycles.

But why have automation rates remained so low for software testing when initiatives like DevOps have focused on accelerating the release cycle? There are four core issues that have impacted automation rates:

  •   Years of outsourcing depleted internal testing skills
  •   Testers had limited access to critical information
  •   Test tools created siloes
  •   Environment changes hampered automation

Outsourcing depleted internal testing skills
The general concept here is that senior managers traded domestic, internal expertise in business and testing processes for offshore labor, reducing Opex. With this practice, known as labor arbitrage, an organization could reduce headcount and shift the responsibility for software testing to an army of outsourced resources trained on the task of software testing. This shift to outsourcing had three main detrimental impacts on software testing: the model promoted manual task execution, the adoption of automation was sidelined, and there was a business-process “brain drain,” or knowledge drain.

With the expansion of Agile and the adoption of enterprise DevOps, organizations must execute the software testing life cycle rapidly and effectively. Organizations will need to consider tightly integrating the software testing life cycle within the development cycle, which will challenge organizations using an offshore model for testing. Teams must also think beyond the simple bottom-up approach to testing and re-invent the software testing life cycle to meet the increasing demands of the business.

Testers had limited access to critical information 
Perhaps the greatest challenge facing individuals responsible for software testing is staying informed about change. This can mean requirements-driven changes to dependent applications or services, changes in usage patterns, or late changes in the release plan that impact the testers’ ability to react within the required timelines.

Interestingly, most of the data required for testers to do their job is available in the monitoring and infrastructure management tools across production and pre-production. However, this information just isn’t aggregated and optimized for the purpose of software testing. Access to APIs and advancements in the ability to manage and analyze big data changes this dynamic in favor of testers. 

Test tools created silos
Although each organization is structurally and culturally unique, the one commonality found among Agile teams is that the practice of testing software has become siloed. The silo is usually constrained to the team, or to a single application that might be built by multiple teams. These constraints create barriers, since tests must execute across componentized and distributed system architectures.

Ubiquitous access to best-of-breed open-source and proprietary tools also contributed to these silos. Point tools became very good at driving automated tests. However, test logic became trapped as scripts across an array of tools. Giving self-governing teams the freedom to adopt a broad array of tools comes at a cost:  a significant degree of redundancy, limited understanding of coverage across silos, and a high amount of test maintenance. 

The good news is that point tools (both open-source and proprietary) have become reliable at driving automation. What’s missing today, however, is an Open Testing Platform that helps drive productivity across teams and their independent testing tools.

Environment changes hampered automation
Remarkably, the automated development of tests hovers at about 30%, but the automated execution of tests is half that rate, at 15%. This means that tests built to be automated are not likely to be executed automatically – manual intervention is still required. Why? It takes more than automated steering of a test for automation to yield results. For an automated test to run automatically, you need:

  •   Access to a test environment
  •   A clean environment, configured specifically for the scope of tests to be executed
  •   Access to compliant test data
  •   Validation assertions synchronized for the test data and logic

 As a result, individuals who are responsible for testing need awareness of broader environment data points located throughout the pre-production environment. Without automating the sub-tasks across the software testing life cycle, test automation will continue to have anemic results.

An Open Testing Platform levels the playing field 
Despite the hampered evolution of test automation, testers and software development engineers in test (SDETs) are being asked to do more than ever before. As systems become more distributed and complex, the challenges associated with testing compound. Yet the same individuals are under pressure to support new applications and new technologies – all while facing a distinct increase in the frequency of application changes and releases. Something has got to change.

An Open Testing Platform gives software testers the information and workflow automation tools to make open-source and proprietary testing point tools more productive in light of constant change.  An OTP provides a layer of abstraction on top of the teams’ point testing tools, optimizing the sub-tasks that are required to generate effective test scripts or no-code tests. This approach gives organizations an amazing degree of flexibility while significantly lowering the cost to construct and maintain tests.

An Open Testing Platform is a critical enabler of both the speed and effectiveness of testing. The OTP follows a prescriptive pattern to help an organization continuously improve the software testing life cycle. This pattern is ‘inform, act and automate.’ An OTP offers immediate value to an organization by giving teams the missing infrastructure to effectively manage change.

The value of an Open Platform

Inform the team as change happens
What delays software testing? Change, specifically late changes that were not promptly communicated to the team responsible for testing. One of the big differentiators for an Open Testing Platform is the ability to observe and correlate a diverse set of data points and inform the team of critical changes as change happens. An OTP automatically analyzes data to alert the team of specific changes that impact the current release cycle.

 Act on observations

Identifying and communicating change is critically important, but an Open Testing Platform has the most impact when testers are triggered to act. In some cases, observed changes can automatically update the test suite, test execution priority or surrounding sub-tasks associated with software testing. Common optimizations such as risk-based or change-based prioritization of test execution can be automatically triggered by the CI/CD pipeline. Other triggers to act are presented within the model-based interface as recommendations based on known software testing algorithms.
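As a rough, vendor-neutral sketch of what change- or risk-based prioritization can look like, the snippet below scores each test by whether it covers recently changed code, how business-critical the covered flow is, and its recent failure rate, then runs the highest scores first. The test metadata and weights are invented for the example.

```python
# Toy change/risk-based prioritization: order tests by a weighted score.
# The metadata fields and weights below are hypothetical.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    covers_changed_code: bool   # e.g., derived from version-control diffs
    business_criticality: int   # 1 (low) .. 5 (high), e.g., taken from the model
    recent_failure_rate: float  # 0.0 .. 1.0, from execution history

def priority(test: TestCase) -> float:
    return (3.0 * test.covers_changed_code
            + 1.5 * test.business_criticality
            + 2.0 * test.recent_failure_rate)

suite = [
    TestCase("checkout_happy_path", True, 5, 0.10),
    TestCase("profile_avatar_upload", False, 2, 0.02),
    TestCase("login_lockout_policy", True, 4, 0.30),
]

# Execute (or hand to the CI/CD pipeline) in descending priority order.
for test in sorted(suite, key=priority, reverse=True):
    print(f"{priority(test):5.2f}  {test.name}")
```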

Automate software testing tasks 
When people speak of “automation” in software testing, they are typically speaking about automating test logic against a UI or API. Of course, the scope of tests that can be automated goes beyond the UI or API, but it is also important to understand that the scope of what can be automated in the software testing life cycle (STLC) goes far beyond the test itself. Automation patterns can be applied to:

  •   Requirements analysis
  •   Test planning
  •   Test data
  •   Environment provisioning
  •   Test prioritization
  •   Test execution
  •   Test execution analysis
  •   Test process optimization

Key business benefits of an Open Testing Platform
By automating functions within the software testing life cycle, or augmenting them with automation, an Open Testing Platform can provide significant business benefits to an organization. For example:

  • Accelerating testing will improve release cycles
  • Bringing together data that had previously been siloed allows more complete insight
  • Increasing the speed and consistency of test execution builds trust in the process
  • Identifying issues early improves capacity
  • Automating repetitive tasks allows teams to focus on higher-value optimization
  • Eliminating mundane work enables humans to focus on higher-order problems, yielding greater productivity and better morale

Software testing tools have evolved to deliver dependable “raw automation,” meaning that the ability to steer an application automatically is sustainable with either open-source or commercial tools. If you look across published industry research, you will find that software testing organizations report test automation rates of (on average) 30%. These same organizations also report that automated test execution averages 16%. This gap between the creation of an automated test and the ability to execute it automatically lies in the many manual tasks required to run the test. Software testing will always be a delay in the release process if organizations cannot close this gap.

Automation is not as easy as applying automated techniques for each of the software testing life cycle sub-processes.  There are really three core challenges that need to be addressed:

  1. Testers need to be informed about changes that impact testing efforts. This requires interrogating the array of monitoring and infrastructure tools and curating the data that impacts testing.
  2. Testers need to be able to act on changes as fast as possible. This means that business rules will automatically augment the model that drives testing – allowing the team to test more effectively.
  3. Testers need to be able to automate the sub-tasks that exist throughout the software testing life cycle. Automation must be flexible enough to accommodate each team’s needs, yet simple enough to allow incremental changes as the environment and infrastructure shift.

Software testing needs to begin its own digital transformation journey. Just as digital transformation initiatives are not tool initiatives, the transformation to sustainable continuous testing will require a shift in mindset.  This is not shift-left.  This is not shift-right. It is really the first step towards Software Quality Governance.  Organizations that leverage multiple open-source or proprietary testing tools must consider an Open Testing Platform to keep pace with Agile and enterprise DevOps initiatives.  

SD Times Open-Source Project of the Week: VHS
https://sdtimes.com/open-source/sd-times-open-source-project-of-the-week-vhs/
Fri, 18 Dec 2020

Performance testing company StormForge has launched a new open-source project designed to improve and advance application performance and optimization test creation. The project, VHS, records live traffic to test performance against “reality instead of just an educated guess,” Noah Abrahams, open source advocate at StormForge, explained in a post.

“VHS started as a project that filled a need related to our performance testing and optimization portfolio, namely, accurate load generation,” Abrahams said. “Our mission as a company is to extend the concept of application performance from being a reactive mindset focused on operations teams, to a proactive, automatic and continuous process that includes and empowers the application developers themselves. Part of that mission is ensuring that developers in the community are not only aware that proactive solutions are available to them, but that they’re able to contribute and help build tomorrow’s application performance solutions.”

According to the company, current methods for recording and replaying app traffic did not provide sufficiently clear or precise results. VHS aims to provide load generation aligned with actual live production to better guarantee performance testing against forecasted traffic.

As part of the community-driven project initiative, StormForge is asking the open-source community to help rename the project in Q1 of 2021. “The name VHS wouldn’t be particularly easy to find in a Google search, anyway, and the acronym is already taken in most places that matter, so the rename will be happening sooner rather than later,” Abrahams wrote. 

App testing: how companies are getting it right — and wrong
https://sdtimes.com/test/app-testing-how-companies-are-getting-it-right-and-wrong/
Fri, 20 Nov 2020

As we enter the fourth quarter of an explosively eventful year, important trends are emerging within the app testing industry – trends that will surely extend into 2021.

The most important is the accelerated pace at which companies are moving to the cloud. The speed-up is being driven by the need to support remote teams that no longer have physical access to in-house device labs due to COVID-19. This move was driven by the pandemic, but it will have benefits that extend beyond the current state of affairs. Remote work is here to stay, and having a test infrastructure in the cloud allows anywhere, anytime access, which can quickly translate into productivity.

A second trend is an increase in the speed at which teams are moving to automate their testing. While manual testing will still play an important role – not everything can be automated – it’s clear that automation is crucial for companies that want to scale the quick release of new versions without compromising quality. 

Speed vs. quality: A false choice
The quality bar has been set very high by industry leaders, and the days of moving fast and breaking things are long gone. In fact, “breaking things” – releasing code that has not been properly tested – can have horrendous consequences. For example, a software error at Knight Capital Group resulted in a $460 million loss that nearly bankrupted the company. Provident Financial Group lost $2.2 billion in market value due to an app failure. These are extreme cases of what can go wrong when companies release buggy code, but untested code hurts many more companies in ways that don’t make the headlines.

Today’s users are unforgiving, and bugs can kill any momentum an app may have. According to one survey, a single negative review drives away 22 percent of prospective customers, and three bad reviews lead to a loss of almost 60 percent. Nonetheless, many companies still feel they need to choose between quality and speed. All too often, quality loses the battle. This can mean rushing the testing teams, or it can mean limiting the scope of testing and ignoring the wide variety of devices used around the world. Either way, the result is unhappy users, negative reviews, poor sales and ultimately poor financial performance. 

There are two best practices that can address the speed vs. quality challenge. The first is automating as much of the testing process as possible. Automation doesn’t replace human judgment. Rather, it frees test engineers from repetitive, time-consuming tasks so they can do a better job. 

A second best practice is breaking down silos and eliminating the “toss-it-over-the-wall” attitude towards testing. Instead of receiving finished code, test engineers should work hand-in-hand with developers in an agile fashion while features are being developed. This ensures that quality is built into the product rather than bolted on as an afterthought. 

The automation scorecard
At BrowserStack, we have classified companies into innovators and late adopters of automation. The results clearly indicate the value of automation. Specifically, innovators:

  • run 6X fewer manual tests
  • run 12X more tests per day
  • produce 40X more builds per day
  • produce each build 9X faster and 5X smaller
  • have failure rates that are 4X lower

To summarize, innovators produce more builds per day, run more tests with more coverage, and have lower failure rates. 

Speed and quality can co-exist. Netflix and Amazon, for example, release code hundreds of times every day without introducing severe bugs. A combination of collaboration and automation is behind that success, and these best practices are available to any company that wants to eliminate developer pain and boost quality output.

Testing tools deliver quality – NOT!
https://sdtimes.com/test/testing-tools-deliver-quality-not/
Thu, 19 Nov 2020

I was recently hired to do an in-depth analysis of the software testing tool marketplace. By the way, there are more tools in the software testing space than in a do-it-yourself home improvement warehouse. Given this opportunity to survey a broad set of software testing tool vendors, it was pretty interesting to look at the promises they make to the market. These promises can be split up into four general categories:

  • We provide better quality
  • We have AI and we are smarter than you
  • We allow you to do things faster
  • We are open-source – give it a go

What struck me most was the very large swath of software testing tool vendors who are selling the idea of delivering or providing “quality.” To put this into a pointed analogy: claiming that a testing tool provides quality is like claiming that COVID testing prevents you from being infected. The fact is that when a testing tool finds a defect, “quality” has already been compromised – just as when you receive a positive COVID test, you are already infected.

Let’s get this next argument out of the way. Yes, testing is critical in the quality process; however, the tool that detects the defect DOES NOT deliver quality. Back to the COVID test analogy: the action of wearing masks and limiting your exposure to the public prevents the spread of the infection. A COVID test can help you make a downstream decision to quarantine in order to stop the spread of infection, or an upstream decision to be more vigilant in wearing a mask or limiting your exposure to high-risk environments. I’m going to drop the COVID example at this point out of sheer exhaustion on the topic.

But let’s continue the analogy with weight loss – a very popular topic as we approach the holidays. Software testing is like a scale: it can give you an assessment of your weight. Software delivery is like the pair of pants you want to wear over the holidays. Weighing yourself is a pretty good indicator of your chances of fitting into that pair of pants at a particular point in time.

Using the body weight analogy is interesting because a single scale might not give you all the information you need, and you might have the option to wear a different pair of pants.  Let me unpack this a bit.  

The scale(s)
We cannot rely on a single measurement nor a single instance of that measurement to make an assessment of the quality of an application.  In fact, it requires the confluence of many measurements both quantitative and qualitative to assess the quality of software at any particular point in time. At a very high level there are really only three types of software defects:

  • Bad Code
    • The code is poorly written
    • The code does not implement the user story as defined
  • Bad User Story
    • The user story is wrong or poorly defined
  • Missing User Story 
    • There is missing functionality that is critical for the release

Using this high-level framework, radically different testing approaches are required. If we want to assess bad code, we would rely on development testing techniques like static code analysis to measure the quality of the code. We would use unit testing or perhaps test-driven development (TDD) as a preliminary measurement to understand if the code is aligned to a critical function or component of the user story. If we want to assess a bad user story, this is where BDD, manual testing, functional testing (UI and API) and non-functional testing take over to assess whether the user story is adequately delivered in the code. And finally, if we want to understand whether there is a missing user story, that is usually an outcome of exploratory testing, when you get that ‘A-ha’ moment that something critical is missing.
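As a small, hypothetical illustration of the “bad code” checks above, a unit test ties a function directly to an acceptance criterion from its user story; the discount rule and thresholds below are invented for the example.

```python
# Hypothetical user story: orders of $100 or more receive a 10% discount.
def order_total(subtotal: float) -> float:
    """Apply the discount rule described in the (invented) user story."""
    discount = 0.10 if subtotal >= 100 else 0.0
    return round(subtotal * (1 - discount), 2)

def test_discount_applies_at_threshold():
    # Catches "bad code": the implementation drifting from the story's rule.
    assert order_total(100.00) == 90.00

def test_no_discount_below_threshold():
    assert order_total(99.99) == 99.99

# A wrong or missing user story would not be caught here; that is where BDD,
# exploratory and functional testing (as described above) come in.
```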

The pants
Let’s refresh the analogy quickly.  The scale is like a software testing tool and we want to weigh ourselves to make sure we can fit into our pants, which is our release objective. The critical concept here is that not all pants are designed to fit the same and the same is true for software releases.  Let’s face it, our software does not have to be perfect and, to be blunt, “perfection” comes at a cost that is far beyond an organization’s resources to achieve. Therefore, we have to understand that some pants are tight with more restrictions and some pants are loose, which give you more comfort. So, you might have a skinny jeans release or a sweatpants release.    

Our challenge in the software development and delivery industry is that we don’t differentiate between skinny jeans and sweatpants. This leads us to a test-everything approach, which is a distinct burden to both speed and cost. The alternative, the “test what we can” approach, is also suboptimal.

So, what’s the conclusion? I think we need to worry about fitting into our pants at a particular point in time. There is enough information that already exists throughout the software development life cycle and production to guide us in creating and executing the optimal set of tests. The next evolution of software testing will not solely be AI. The next evolution will be using the data that already exists to optimize both what to test and how to test it. Or, in other terms, we will understand the constraints associated with each pair of pants and we will use our scale effectively to make sure we fit into them in time for the holiday get-together of fewer than 10 close family members.

Get back to the fun part of testing
https://sdtimes.com/test/get-back-to-the-fun-part-of-testing/
Thu, 22 Oct 2020

In an ideal world, software testing is all about bringing vital information to light so our teams can deliver amazing products that grow the business (to paraphrase James Bach). Investigation and exploration lie at the heart of testing. The obsession with uncovering critical defects before they unfurl into business problems is what gets under our skin and makes us want to answer all those “what if…” questions before sending each release off into the world. 

But before the exploring can begin, some work is required. If you’re tracking down literal bugs in the wilderness, you’re not going to experience any gratification until after you check the weather forecasts, study your maps and field guides, gear up, slather on the sunscreen and mosquito repellant, and make it out into the field. If your metaphorical hunting grounds are actually software applications, these mundane tasks are called “checking.” This includes both the rote work of ensuring that nothing broke when you last made a code change (regression testing) and that the basic tenets of the requirement are actually met (progression testing).

This work is rarely described as “fun.” It’s not what keeps us going through those late-night bug hunts (along with pizza and beverages of choice). So, what do we do? We automate it! Now we’re talking… There is always primal joy in creating something, and automation is no different. The rush you get when your cursor moves, the request is sent, the API is called…all without you moving a finger…can make you feel fulfilled. Powerful, even. For a moment, the application is your universe, and you are its master.

You now breathe a sigh of relief and put your feet up, satisfied with your efforts. Tomorrow is now clear to be spent exploring. Back to the bug hunting! The next day, you flip open your laptop, ready to roll up your sleeves and dive into the fun stuff. But what’s that? Build failed? Awesome! Your work is already paying off. Your automated checks have already surfaced some issues…or have they?

No… not really. It was just an XPath change. No problem; you won’t make that mistake again. You fix it up, and run the tests again. Wait, that element has a dynamic ID? Since when? Ok ok ok…fine! You utter the incantation and summon the arcane power of Regex, silently praying that you never have to debug this part of your test again. At some point, you glance at the clock. Another day has passed without any time for real exploration. This work was not fun. It was frustrating. No longer are you the master of this universe, but an eternal servant at the whims of an ever-growing list of flaky, capricious tests. 

Turns out, the trick to getting past all the mind-numbing grunt work isn’t outsmarting the traditional script-based UI test automation that everyone’s been battling for years. It’s enlisting innately smarter automation—automation that understands what you need it to do so you can focus on what you actually want to do: explore!  

With the latest generation of AI-driven test automation based on optical recognition, you can delegate all the automation logistics to a machine—so you can focus on the creative aspects that truly make a product successful. (Full disclosure: Several companies offer AI-driven UI test automation based on optical recognition…and I’m leading the development of this technology at one of them.)

The idea behind this approach is to tap an engine that can understand and drive the UI like a human would. This human behavior is simulated using various AI and machine learning strategies—for example, deep convolutional neural networks combined with advanced heuristics—to deliver stable, self-healing, platform-agnostic UI automation. 

From the tester perspective, you provide a natural language description of what actions to perform, and the engine translates that to the appropriate UI interactions. UI elements are identified based on their appearance rather than their technical properties. If some UI element is redesigned or the entire application is re-implemented using a new technology, it doesn’t matter at all. Like a human, the automation will simply figure it out and adapt.
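As a deliberately simplified, generic sketch of appearance-based identification (plain template matching rather than the neural-network approach described above, and not any particular product), the snippet below finds a button in a screenshot by how it looks and clicks the match; the image path and the 0.8 threshold are assumptions.

```python
# Generic illustration: locate a UI element by appearance instead of by its
# technical properties. Requires opencv-python, numpy and pyautogui; the
# template image path and match threshold are placeholders.
import cv2
import numpy as np
import pyautogui

def click_by_appearance(template_path: str, threshold: float = 0.8) -> bool:
    # Grab the current screen and convert it to an OpenCV BGR array.
    screen = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_RGB2BGR)
    template = cv2.imread(template_path)              # e.g., "login_button.png"
    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return False                                  # element not on screen (yet)
    height, width = template.shape[:2]
    pyautogui.click(max_loc[0] + width // 2, max_loc[1] + height // 2)
    return True

if __name__ == "__main__":
    click_by_appearance("login_button.png")
```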

Making sure this works with the necessary speed and accuracy across all the technologies you need to test is the hard part—but that’s our job, not your problem. You can just roll up your sleeves, tell it what you want to test, and let the automation handle the rest. Then the fun can begin. 

Here are two core ways that this “AI-driven UI test automation” approach helps you get back to the fun part of testing…

Automation without aggravation
I’m no stranger to automation. I’ve worked in automation for over a decade, managing automation teams, implementing automation, and even doing some work on the Selenium project. Building stable automation at a technical level is invariably tedious, no matter what you’re automating. And aside from some very high-level guiding principles like separation of concerns, data abstraction, and design patterns, your mastery of automating one technology doesn’t really translate when it’s time to automate another. 

Automatically driving a browser or mobile interface is a lot different than “steering” a desktop application, a mainframe, or some custom/packaged app that’s highly specialized. Technologies like model-based test automation remove complexity, adding an abstraction layer that lets you work at the business layer instead of the technical layer. However, it’s not always feasible to apply model-based approaches to extremely old applications, applications running on remote environments (e.g., accessed via Citrix), highly-specialized applications for your company/industry, etc.

With image-based test automation, the underlying technology is irrelevant. If you can exercise the application via a UI, you can build automation for it. No scripting. No learning curve. Just exercise it naturally—like you probably already do as you’re checking each new user story—and you can get all the repeatability and scalability of automation without any of the work or hassle. 

Technology Stockholm Syndrome
Back when I was a university student, I “learned” that nothing would ever be developed that wasn’t a big heavy C thick client. There was some talk of thin clients, but of course those wouldn’t last. After I graduated, everyone was scrambling to rewrite their thick clients using the shiny new service-oriented architecture. Then came mobile. And containerization and Kubernetes.

By this time, I figured out that I have a problem: let’s call it Technology Stockholm Syndrome. I was held captive by the ever-changing landscape of technology. And the strange thing was that I kind of liked it because this ever-changing, ever-shifting set of goalposts was so much fun.

This is a good problem to have in terms of ensuring the continued value and viability of your organization’s applications. You want your dev teams to stay on top of the latest trends and take advantage of new approaches that improve application flexibility, reliability, security, and speed.  But if you’re responsible for building and maintaining the test automation, each change can be torture. In most cases, a technical shift means you need to scrap the existing test automation (which likely represents a significant investment of time and resources) and start over—rebuilding what’s essentially the same thing from scratch. Not fun.

 Also, not really required anymore. You’d be surprised at how few fundamental changes are introduced into an application’s UI from generation to generation. (Don’t believe it? Just take a trip back in web history on The Wayback Machine and see for yourself.)

Although the underlying implementation and specific look and feel might shift dramatically, most of the same core test sequences typically still apply (for example, enter maxmusterman under username, enter 12345 under password, and click the login button). If those same general actions remain valid, then image-based test automation should still be able to identify the appropriate UI elements and complete the required set of actions.
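For comparison, the same three-step sequence written as a conventional, property-based Selenium script might look like the hedged sketch below (the URL, element locators and post-login title are illustrative assumptions); an appearance-based engine would perform the equivalent steps without depending on these element properties, which is why a re-implementation does not force a rewrite.

```python
# The core test sequence from the example above, as a plain Selenium script.
# URL, element locators and the expected title are illustrative assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://app-under-test.example.com/login")    # hypothetical URL
    driver.find_element(By.NAME, "username").send_keys("maxmusterman")
    driver.find_element(By.NAME, "password").send_keys("12345")
    driver.find_element(By.ID, "login").click()
    assert "Dashboard" in driver.title                         # assumed landing page
finally:
    driver.quit()
```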

Remember Java’s mantra of “write once, run anywhere”? (Go ahead, insert your own Java joke here.) Done right, this testing approach should actually deliver on that promise. Of course, you’ll probably want to add/extend/prune some tests as the app evolves from generation to generation – but you certainly don’t need to start with a blank slate each time the application is optimized or re-architected.

Deep down, testing is truly fun
UI test automation is undeniably a critical component of a mature enterprise test strategy.  But wrestling with all the nitty gritty bits and bytes of it isn’t fun for anyone. Go search across engineers, professional testers, business users, and all sorts of other project stakeholders, and I guarantee you won’t find anyone with a burning desire to deal with it. But testing as a discipline, as a creative, problem-solving process, is truly fun at its core. 

Peel away the layers of scripting, flakiness, and constant failures, and you can (re)focus on the experimentation, exploration, questioning, modeling… all the things that make it fun and fulfilling.

Testing in a complex digital world
https://sdtimes.com/test/testing-in-a-complex-digital-world/
Thu, 01 Oct 2020

About a decade ago, application testing was fairly straightforward, albeit a manual effort and somewhat of a drag on delivery. Test cases were written, functional and UI tests were done, regression, pen and load testing would happen, and the application was deemed ‘good to go.’

Today’s digital world of APIs, open-source components, mobile devices, IoT endpoints, DevOps pipelines and containers — not to mention the squeezed timelines for application delivery — render manual testing almost completely ineffective.

Yes, the testing world has evolved. We’re seeing automated testing, continuous testing and security testing emerge, as well as non-traditional testing such as feature experimentation and chaos engineering advancing to keep pace with organizational demands in the digital age.

This showcase is a guide to some of the companies that provide testing tools, and each comes at the issue from a different perspective. We hope you find it useful, and encourage you to reach out to these solution providers to learn more.

The Future of Testing is AI: Visual AI
Parasoft Leads Testing Innovation
Supercharge Testing with Mobile Labs
Automate Mobile Testing with Kobiton
Fix Penetration Testing Finds Faster
Software Testing Showcase

Supercharge Testing with Mobile Labs
https://sdtimes.com/test/supercharge-testing-with-mobile-labs/
Mon, 28 Sep 2020

Effective mobile app testing is even more important today than it was before COVID-19 hit. These days, mobile app experiences impact which brands and services customers choose and how productive work-from-home employees can be. To ensure the highest performance and scalability of apps ranging from enterprise productivity to games, teams serious about product quality choose Mobile Labs.

“COVID-19 has accelerated the digitalization of businesses in every industry because it’s the only way for them to engage with their customers. Instead of going into physical businesses, consumers are using mobile apps to check their bank balance, apply for loan or to order fast food for curbside pickup,” said Dan McFall, president and CEO of Mobile Labs. “Before the pandemic, a lot of mobile testers were relying on the few physical devices they kept in a drawer at work. Now, they’re realizing they need access to a device cloud that provides the same interactive capabilities desktop and web developers get using virtual machines.”

Improve Mobile Test Coverage and Automation
While testing an app on a wider range of devices, operating system versions and browser versions helps ensure better app experiences across more customers, most developers, testers and QA engineers say they still need to improve test coverage and the velocity of testing. While many have adopted the Appium open source automation tool, Appium can be difficult to scale and manage because, as an open source project, it’s not clear when updates will occur and the documentation leaves much to be desired.

To help customers get more from their automation efforts, Mobile Labs created its own Appium server that benefits customers irrespective of whether they’re using Appium yet or not. Current Appium users benefit from improved Appium performance and reliability. They also discover it’s easier to manage and support their automation infrastructure. Those without Appium find they can start scripting immediately without downloading, installing or configuring Appium. A surprising benefit of Mobile Labs’ Appium server is the 4X or greater scalability it provides.
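For readers who have not scripted against an Appium endpoint before, a minimal client-side session looks roughly like the sketch below, assuming Appium-Python-Client 2.x; the server URL, device name, app path and accessibility ID are placeholders, and capability names can vary by Appium and client version.

```python
# Rough sketch of an Appium test session. The server URL, device, app path and
# element ID are placeholders; capability names may differ across versions.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.set_capability("appium:deviceName", "Pixel 5")
options.set_capability("appium:app", "/path/to/app-under-test.apk")

driver = webdriver.Remote("http://appium-server.example.com:4723", options=options)
try:
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "Login").click()
finally:
    driver.quit()
```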

“One of the challenges with Appium is if you try to run more than 8, 10, 12 concurrent tests, you’re going to need a lot of hardware,” said McFall. “We can increase that to 40, 48 concurrent tests running against a single server, making it easier to scale. Since it’s hard for people to go into the office and set up new hardware now, the more they can get out of what they have, the better.”

What’s more, Appium users don’t have to wait for community fixes because Mobile Labs handles them proactively.

Customers facing script automation issues, which commonly arise as the result of internal skills shortages, can find intelligent scripting solutions in Mobile Labs’ partner network. Like robust software development IDEs that pinpoint coding errors, intelligent scripting solutions rapidly identify test automation script errors.

Test Your Way
Many enterprises have Mobile Labs running on-premises behind a firewall. However, with COVID-19 remote work trends, Mobile Labs’ hosting has mushroomed.

“We’ve been getting rave reviews on our hosting, so more people are starting to realize we’re not just the on-premises people,” said McFall. “In fact, we’ve seen a lot of growth in the gaming sector because of the device performance we provide.”

Customers who choose Mobile Labs hosting can always move their environment on-premises at any time without fear of vendor lock-in.

Teams that want to tame the chaos of testing multiple apps across multiple platforms, operating systems, and device types behind a firewall tend to choose Mobile Labs’ GigaFox Red mobile device testing cloud. Teams that need access to more devices or enhanced graphics features behind a firewall or hosted choose GigaFox Silver. Small teams or teams that are just getting started with mobile app testing can jumpstart their journey with GigaFox StarterKit. All GigaFox versions help accelerate development and continuous testing because they provide developers, testers and QA with access to the same devices. Moreover, those devices are the actual devices customers use.

Another benefit of Mobile Labs is future-proof testing. Instead of purchasing new equipment because Apple just released its latest versions of iPhone, for example, customers can simply subscribe to the Mobile Labs Device Refresh Program, which allows them to swap old devices for new ones. When paired with GigaFox Red or GigaFox Silver mobile device clouds, teams can be sure they always have the devices they need to ensure the best quality user experiences.

Learn more at www.mobilelabsinc.com

Parasoft Leads Testing Innovation
https://sdtimes.com/test/parasoft-leads-testing-innovation/
Mon, 28 Sep 2020

The COVID-19 pandemic has caused organizations to accelerate their digital transformation strategies. Two of the major trends are supporting a remote workforce and engaging customers primarily, if not exclusively, through digital channels. Critical to employee productivity and customer experience is adequate software testing that requires a high level of automation.

“Organizations are figuring out how to enable the digital side of their business to both protect existing markets and take advantage of new ones,” said Mark Lambert, VP of strategic initiatives at Parasoft. “A lot of them are leveraging low-code platforms like Salesforce Lightning to accelerate digital transformation and their movement into the cloud.”

Faced with rapidly evolving circumstances, businesses need to ensure that the software they develop, customize and deliver provides the needed functionality and meets all compliance requirements.

“Compliance is a critical consideration for many enterprise organizations, especially those in financial [services], health care and insurance. We help organizations establish the processes and practices that put compliance checks and functional verification in place in the most effective way possible,” said Lambert. 

Low-Code Use Is Increasing
Many developers originally dismissed low code as simple tools built for those who are unable to code. However, as software development and delivery cycles continued to shrink with the 2020 COVID-19 pandemic, more organizations are now using low code to become more Agile than they were before.

“As organizations mature in their adoption of low code, they’re creating teams that include both citizen developers and professional developers to support the increasing scope of functionality, which tends to increase pretty quickly,” said Lambert. “It’s the job of the technical team to create building blocks that enable the less technical business developers across the organization.”

However, low-code projects need to be tested like other software projects to avoid unintended consequences such as non-compliance or outages of business-critical capabilities. Professional developers and testers choose Parasoft to ensure the delivery of reliable and scalable low-code apps.

The Pandemic Is Driving Greater Test Automation
Since entire white-collar workforces are now working remotely, more teams are containerizing their DevOps pipelines and pushing them into the cloud.

“Test automation is the only way you can efficiently verify functionality and ensure compliance. It’s the key to unlocking Agile and DevOps,” said Lambert. “You need a robust delivery pipeline and process. Quality needs to be built into that pipeline to accelerate delivery with confidence.”

Since one small code change can have a negative cascading effect across an application, other applications and use cases, organizations are figuring out how they can create a scalable test automation strategy that integrates with their ecosystem.

Parasoft covers everything from code analysis and automated unit testing to automated API and UI testing. By leveraging AI and machine learning across these quality practices, Parasoft’s suite of test automation products assist developers and testers with the underlying activities. The result is streamlined test automation that improves overall ROI.

Service Virtualization Speeds Development and Testing
“When you’re connecting across different services and providers, the level of interconnectedness explodes the complexity of your test environment. You need a way to control that environment,” said Lambert. 

Service virtualization decreases the time and cost required to test due to constrained dependencies. With today’s remote work mandates, more organizations want to emulate backend system dependencies and control their functional behavior. With Parasoft service virtualization, developers and testers can create synthetic data and virtual services that behave like their real counterparts. 
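As a generic illustration of the idea (not a sketch of Parasoft’s product), a ‘virtual service’ can be as simple as a small stub that mimics a backend dependency’s API with canned, synthetic responses; the endpoint paths and payload shape below are invented.

```python
# Generic service-virtualization stub: emulate a backend dependency so tests can
# run without the real, constrained system. Endpoints and payloads are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/accounts/<account_id>/balance")
def balance(account_id):
    # Deterministic synthetic data that behaves like the real counterpart.
    return jsonify({"accountId": account_id, "balance": 1042.17, "currency": "USD"})

@app.route("/health")
def health():
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    # Point the application under test at http://localhost:5001 instead of the
    # real dependency while it is unavailable or rate-limited.
    app.run(port=5001)
```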

“Like many others, one of our Canadian partners now has fully remote software development teams. Their challenge is giving teams access to systems without leveraging on-network devices,” said Lambert. “They’re rolling out service virtualization now to address this and unblock the teams.”

Why Customers Love Parasoft
Parasoft has a 30-year history of innovation. Unlike point solutions, Parasoft continues to be recognized by industry analysts and customers as the platform of choice for testing throughout the development pipeline—from code analysis that ensures security compliance and reliability to automated unit, API, UI and load and performance testing. Parasoft integrates with open source testing frameworks like Selenium, TestNG and JUnit, providing complementary functionality that assists with the difficult tasks of creation and maintenance. 

“We’re recognized as a leader and visionary by Forrester and Gartner, respectively. We pride ourselves on delivering innovations that go beyond continuous testing to help clients achieve continuous quality. We’re also honored to have received the Gartner Peer Insights Customer Choice award in both 2019 and 2020, validating that we’re building products that make teams more effective,” said Lambert. 

Learn more at www.parasoft.com.
