QA Archives - SD Times https://sdtimes.com/tag/qa/

Report: Test automation coverage has rebounded after a dip last year (Wed, 09 Nov 2022) https://sdtimes.com/test/report-test-automation-coverage-has-rebounded-after-a-dip-last-year/

Test automation coverage has rebounded after a dip last year, according to SmartBear’s State of Quality Testing 2022 report. 

SmartBear conducted a global online survey over the course of five weeks earlier this year. The findings are based upon aggregated responses from more than 1,500 software developers, testers, IT/operations professionals, and business leaders across many different industries.

Last year, 11% of companies performed all of their tests manually; that number dwindled to 7% this year, nearly returning to the pre-pandemic level of 5%.

This year also saw slightly higher numbers than ever before for respondents who said 50-99% of their tests are automated. The biggest jump came in the 76-99% group, which climbed more than 10 points to 16% over the last year. The share of respondents who said their tests are fully automated regained ground, returning to the pre-pandemic level of 4%.

When looking at the different types of tests and how they are performed, over half of respondents reported using manual testing for usability and user acceptance tests. Unit tests, performance tests, and BDD framework tests were highest among all automated testing. 

Another finding is that the time spent testing increased for traditional testers but decreased for developers. However, the average percentage of time spent testing remained the same as last year, at 63% across the organization.

QA engineers/automation engineers spend the most time testing, averaging 76% of their week on testing, up from 72% last year. While developer time spent testing inched up between 2018 and 2021, reaching 47%, it sank to 40% this year. Testing done by architects plummeted from 49% to 30% over the last year. 

This year, the most time-consuming activity was performing manual and exploratory tests, cited by 26% of respondents, up from 18% last year. Over the same period, learning how to use test tools fell from 22% to just 8% as the most time-consuming testing challenge. 

The biggest challenges that organizations reported for test automation varied by company size. Companies with 1-25 employees cited “not having the correct tools” as their biggest challenge, while companies with 501-1,000 employees cited “not having the right testing environments available.” Both differ from last year’s most-cited problem, “not enough time to test,” at 37%.

The importance of tool integration for QA teams (Thu, 06 Oct 2022) https://sdtimes.com/test/the-importance-of-tool-integration-for-qa-teams/

Everybody cares about software quality (or they ought to, at least), but it’s easier said than done. Lots of factors can cause software to fail, from tools and systems not integrating well to people not communicating well.

According to ConnectALL, improving value stream flow can help with these communication breakdowns, tool integration can improve the quality assurance function, and integrating test management tools with other tools can help provide higher-quality test coverage. 

In a recent SD Times Live! event, Lance Knight, president and COO at ConnectALL, and Johnathan McGowan, principal solutions architect at ConnectALL, shared six ways that tool integration can improve test management processes and QA. 

“It’s a very complex area, right? There’s a lot going on here in the testing realm, and different teams are doing different kinds of tests. Your developers are doing those unit tests, your QA team is doing manual, automated, and regression, and then your security folks are doing something else. And they’ve all each got their own little places that they’re doing all of that in,” said McGowan.

This article first appeared on VSM Times. To read the full article, visit the original post here.

Automated testing still lags (Tue, 02 Aug 2022) https://sdtimes.com/test/automated-testing-still-lags/

Automated testing initiatives still lag behind in many organizations as increasingly complex testing environments are met with a lack of skilled personnel to set up tests. 

Recent research conducted by Forrester and commissioned by Keysight found that while only 11% of respondents had fully automated testing, 84% of respondents said that the majority of testing involves complex environments. 

For the study, Forrester conducted an online survey in December 2021 that involved 406 test operations decision-makers at organizations in North America, EMEA, and APAC to evaluate current testing capabilities for electronic design and development and to hear their thoughts on investing in automation.

The complexity of testing has increased the number of tests, according to 75% of the respondents. Sixty-seven percent of respondents said the time to complete tests has risen too.

Challenges with automated testing 

Those that do utilize automated testing often have difficulty making the tests stable in these complex environments, according to Paulina Gatkowska, head of quality assurance at STX Next, a Python software house. 

One such area where developers often find many challenges is UI testing, in which the tests work like a user: they use the browser, click through the application, fill in fields, and more. These tests are quite heavy, Gatkowska continued, and when a developer finishes a test in a local environment, sometimes it fails in another environment, works only 50% of the time, or works the first week and then starts to be flaky. 

“What’s the point of writing and running the tests, if sometimes they fail even though there is no bug? To avoid this problem, it’s important to have a good architecture of the tests and good quality of the code. The tests should be independent, so they don’t interfere with each other, and you should have methods for repetitive code to change it only in one place when something changes in the application,” Gatkowska said. “You should also attach great importance to ‘waits’ – the conditions that must be met before the test proceeds. Having this in mind, you’ll be able to avoid the horror of maintaining flaky tests.”
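
As a minimal illustration of that advice about explicit waits and reusable methods, here is a sketch assuming Selenium with Python; the locators and URL are hypothetical, not from any project Gatkowska described:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
wait = WebDriverWait(driver, timeout=10)

def search_for(term):
    # Repetitive steps live in one method: if the search UI changes,
    # only this function needs updating.
    box = wait.until(EC.element_to_be_clickable((By.NAME, "q")))
    box.clear()
    box.send_keys(term)
    box.submit()
    # Wait on an explicit condition rather than a fixed sleep, so the
    # test stays stable across slow and fast environments.
    wait.until(EC.visibility_of_element_located((By.ID, "results")))

driver.get("https://app.example.com")  # hypothetical application under test
search_for("order history")
driver.quit()
```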

Then there are network issues that can impede automated tests, according to Kavin Patel, founder and CEO of Convrrt, a landing page builder. A common difficulty for QA teams is network disconnection: shaky connections cut testers off from databases, VPNs, third-party services, APIs, and certain testing environments, adding needless time to the testing process. The inability to access the virtual environments that testers typically use to test programs is also a worry. 
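
One common mitigation for such transient failures, sketched here as a generic pattern rather than Convrrt's specific practice, is to retry flaky external calls with a backoff instead of failing the whole run; the URL is hypothetical:

```python
import time

import requests  # assumes the dependency under test is reachable over HTTP

def get_with_retry(url, attempts=3, backoff_seconds=2.0):
    # Retry transient connection errors so a brief network blip does not
    # fail the whole test run; give up and re-raise after the final attempt.
    for attempt in range(1, attempts + 1):
        try:
            return requests.get(url, timeout=5)
        except requests.ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds * attempt)

response = get_with_retry("https://api.staging.example.com/health")  # hypothetical
assert response.status_code == 200
```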

Because some teams lack the expertise to implement automated testing, manual testing is still used as a correction for any automation gaps. This creates a disconnect with the R&D team, which is usually two steps ahead, according to Kenny Kline, president of Barbend, an online platform for strength sports training and nutrition.

“To keep up with them, testers must finish their cycles within four to six hours, but manual testing cannot keep up with the rate of development. Then, it is moved to the conclusion of the cycle,” Kline said. “Consequently, teams must include a manual regression, sometimes known as a stabilization phase, at the end of each sprint. They extend the release cadence rather than lowering it.”

Companies are shifting towards full test automation 

Forrester’s research also found that 45% of companies say that they’re willing to move to a fully automated testing environment within the next three years to increase productivity, gain the ability to simulate product function and performance, and shorten the time to market. 

The companies that have implemented automated testing right have reaped many rewards, according to Michael Urbanovich, head of the testing department at a1qa, an international quality assurance company. The ones relying on robotic process automation (RPA), AI, ML, natural language processing (NLP), and computer vision for automated testing have attained greater efficiency, sped up time to market, and freed up more resources to focus on strategic business initiatives. RPA alone can lower the time required for repetitive tasks by up to 25%, according to research by Automation Alley. 

For those looking to gain even more from their automation initiatives, a1qa’s Urbanovich suggests looking into continuous test execution, implementing self-healing capabilities, RPA, API automation, regression testing, and UAT automation. 

Urbanovich emphasized that the decision to introduce automated QA workflows must be conscious. Rather than running with the crowd to follow the hype, organizations must calculate ROI based on their individual business needs and wisely choose the scope for automation and a fit-for-purpose strategy. 

“To meet quality gates, companies need to decide which automated tests to run and how to run them in the first place, especially considering that the majority of Agile-driven sprints last for up to only several weeks,” Urbanovich said. 

Although some may hope it were this easy, testers can’t just spawn automated tests and sit back like Paley’s watchmaker gods. The tests need to be guided and nurtured. 

“The number one challenge with automated testing is making sure you have a test for all possibilities. Covering all possibilities is an ongoing process, but executives especially hear that you have automated testing now and forget that it only covers what you actually are testing and not all possibilities,” said David Garthe, founder of Gravyware, a social media management tool. “As your application is a living thing, so are the tests that are for it. You need to factor in maintenance costs and expectations within your budget.” 

Also, just because a test worked last sprint, doesn’t mean it will work as expected this sprint, Garthe added. As applications change, testers have to make sure that the automated tests cover the new process correctly as well. 

Garthe said that he has had a great experience using Selenium, referring to it as the “gold standard” with regard to automated testing. It has the largest group of developers that can step in and work on a new project. 

“We’ve used other applications for testing, and they work fine for a small application, but if there’s a learning curve, they all fall short somewhere,” Garthe said. “Selenium will allow your team to jump right in and there are so many examples already written that you can shortcut the test creation time.”

And, there are many other choices to weave through to start the automated testing process.

“When you think about test automation, first of all you have to choose the framework. What language should it be? Do you want to have frontend or backend tests, or both? Do you want to use Gherkin in your tests?” STX Next’s Gatkowska said. “Then of course you need to have your favorite code editor, and it would be annoying to run the tests only on your local machine, so it’s important to configure jobs in the CI/CD tool. In the end, it’s good to see valuable output in a reporting tool.”
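
As a toy example of those choices, here is a minimal backend API test in pytest that a CI/CD job could run with `pytest --junitxml=report.xml` to feed a reporting tool; the endpoint and environment variable are hypothetical:

```python
import os

import requests

# The target environment comes from CI configuration, not the test code,
# so the same suite runs locally and in the pipeline.
BASE_URL = os.environ.get("TEST_BASE_URL", "http://localhost:8000")

def test_login_rejects_bad_credentials():
    # A backend test needs no browser: it exercises the API directly.
    resp = requests.post(f"{BASE_URL}/api/login",
                         json={"user": "alice", "password": "wrong"})
    assert resp.status_code == 401
```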

Choosing the right tool and automated testing framework, though, might pose a challenge for some because different tools excel under different conditions, according to Robert Warner, head of marketing at VirtualValley, a UK-based virtual assistant company.

“Testing product vendors overstate their goods’ abilities. Many vendors believe they have a secret sauce for automation, but this produces misunderstandings and confusion. Many of us don’t conduct enough study before buying commercial tools, that’s why we buy them without proper evaluation,” Warner said. “Choosing a test tool is like marrying, in my opinion. Incompatible marriages tend to fail. Without a good test tool, test automation will fail.”

AI is augmenting the automated testing experience

Fifty-two percent of companies that responded to the Forrester report said they would consider using AI for integrating complex test suites in the next three years.

The use of AI for integrated testing provides both better (not necessarily more) testing coverage and the ability to support agile product development and release, according to the Forrester report.

Companies are also looking to add AI for integrating complex test suites, an area of test automation that is severely lacking, with only 16% of companies using it today. 

a1qa’s Urbanovich explained that one of the best ways to cope with boosted software complexity and tight deadlines is to apply a risk-based approach. For that, AI is indispensable. Apart from removing redundant test cases, generating self-healing scripts, and predicting defects, it streamlines priority-setting. 

“In comparison with the previous year, the number of IT leaders leveraging AI for test prioritization has risen to 43%. Why so?” Urbanovich continued, alluding to the World Quality Report 2021-2022. “When you prioritize automated tests, you put customer needs FIRST because you care about the features that end users apply the most. Another vivid gain is that software teams can organize a more structured and thoughtful QA strategy. Identifying risks makes it easier to define the scope and execution sequence.”
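
The risk-based idea can be sketched crudely: score each automated test by historical failure rate and business impact, and run the riskiest first. This is an illustration only, not a1qa's method, and the weights and suite entries are invented:

```python
def prioritize(tests):
    # Higher score = riskier = run earlier; the 0.6/0.4 weights are illustrative.
    def risk_score(test):
        return 0.6 * test["failure_rate"] + 0.4 * test["business_impact"]
    return sorted(tests, key=risk_score, reverse=True)

suite = [
    {"name": "checkout_flow",  "failure_rate": 0.30, "business_impact": 1.0},
    {"name": "profile_avatar", "failure_rate": 0.10, "business_impact": 0.2},
    {"name": "search_filters", "failure_rate": 0.05, "business_impact": 0.7},
]
for test in prioritize(suite):
    print(test["name"])  # checkout_flow, search_filters, profile_avatar
```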

Most of the time, companies are looking to implement AI in testing to leverage the speed improvements and increased scope of testing, according to Kevin Surace, CTO at Appvance, an AI-driven software testing provider.

“You can’t write a script in 10 minutes, maybe one if you’re a Selenium master. Okay, the machine can write 5,000 in 10 minutes. And yes, they’re valid. And yes, they cover your use cases that you care about. And yes, they have 1,000s of validations, whatever you want to do. And all you did was spend one time teaching it your application, no different than walking into a room of 100 manual testers that you just hired, and you’re teaching them the application: do this, don’t do this, this is the outcome, these are the outcomes we want,” Surace said. “That’s what I’ve done, I got 100 little robots or however many we need that need to be taught what to do and what not to do, but mostly what not to do.”

QA has difficulty grasping how to handle AI in testing 

Appvance’s Surace said that where testing ultimately needs to go is to be completely hands-off for humans.

“If you just step back and say what’s going on in this industry, I need a 4,000 times productivity improvement in order to find essentially all the bugs that the CEO wants me to find, which is find all the bugs before users do,” Surace said. “Well, if you’ve got to increase productivity 4,000 times you cannot have people involved in the creation of very many use cases, or certainly not the maintenance of them. That has to come off the table just like you can’t put people in a spaceship and tell them to drive it, there’s too much that has to be done to control it.”  

Humans are still good at prioritizing which bugs to tackle based on what the business goals are, because only humans can really look at something and say, well, we’ll just leave it, it’s okay, we’re not gonna deal with it, or say this is really critical and push it to the developers’ side to fix it before release, Surace continued. 

“A number of people are all excited about using AI and machine learning to prioritize which tests you should run, and that entire concept is wrong. The entire concept should be, I don’t care what you change in the application, and I don’t understand your source code enough to know the impacts on every particular outcome. Instead, I should be able to create 10,000 scripts and run them in the next hour, and give you the results across the entire application,” Surace said. “Job one, two, and three of QA is to make sure that you found the bugs before your users do. That’s it, then you can decide what to do with them. Every time a user finds a bug, I can guarantee you it’s in something you didn’t test or you chose to let the bug out. So when you think about it that way, users find bugs in the things we didn’t test. So what do we need to do? We need to test a lot more, not less.”

A challenge with AI is that it is a foreign concept to QA people so teaching them how to train AI is a whole different field, according to Surace. 

First off, many people on the QA team are scared of AI, Surace continued, because they see themselves as QA people but really have the skill set of a Selenium tester who writes Selenium scripts and tests them. Now, that has been taken away, similar to how RPA disrupted industries such as customer support and insurance claims processing. 

The second challenge is that they’re not trained in it.

“So one problem that we see is, how do you explain how the algorithms work?” Surace said. “In AI, one of the challenges we have in QA and across the AI industry is how do we make people comfortable with a machine that they may not ever be able to understand. It’s beyond their skill set to actually understand the algorithms at work here and why they work and how neural networks work, so they now have to trust that the machine will get them from point A to point B, just like we trust the car gets from point A to point B.”

However, there are some areas of testing in which AI is not as applicable: for example, a form-based application, such as in financial services, where there is nothing for the application to do other than guide you through the form. 

“There’s nothing else to do with an AI that can add much value because one script that’s data-driven already handles the one use case that you care about. There are no more use cases. So AI is used to augment your use cases, but if you only have one, you should write it. But, that’s few and far between and most applications have hundreds of 1,000s of use cases perhaps or 1,000s of possible combinatorial use cases,” Surace said. 

According to Eli Lopian, CEO at Typemock, a provider of unit testing tools to developers worldwide, QA teams are still very effective at handling UI testing because the UI can often change without the behavior changing behind the scenes. 

“The QA teams are really good at doing that because they have a feel for the UI, how easy it is for the end user to use that code, and they can see things more from a product point of view and less from a ‘does it work or does it not work’ point of view, which is really essential if you want an application to really succeed,” Lopian said. 

Dan Belcher, the co-founder at mabl, said that there is still plenty of room for a human in the loop when it comes to AI-driven testing. 

“So far, what we’re doing is supercharging quality engineers, so the human is certainly in the loop. It’s eliminating repetitive tasks where their intellect isn’t adding as much value, and doing things that require high speed, because when you’re deploying every few minutes, you can’t really rely on a human to be involved in that loop of executing tests. And so what we’re empowering them to do is to focus on higher-level concerns, like: do I have the right test coverage? Are the things that we’re seeing good or bad for the users?” Belcher said.

AI/ML excels at writing tests from unit to end-to-end scale

One area where AI/ML excels in testing is unit testing of legacy code, according to Typemock’s Lopian.

“Software groups often have this legacy code which could be a piece of code that maybe they didn’t do a unit test beforehand, or there was some kind of crisis, and they had to do it quickly, and they didn’t do the test. So you had this little piece of code that doesn’t have any unit tests. And that grows,” Lopian said. “Even though it’s a difficult piece of code, it wasn’t built for testability in mind, we have the technology to both write those tests for those kinds of code and to generate them in an automatic manner using the ML.”

The AI/ML can then make sure that the code is running in a clean and modernized way, and those tests let developers refactor the code to work in a secure manner, Lopian added. 

AI-driven testing is also beneficial for UI testing because testers don’t have to explicitly design how elements in the UI are referenced; they can let the AI figure that out, according to mabl’s Belcher. And when the UI changes, typical test automation results in a lot of failures, whereas the AI can learn and improve the tests automatically, resulting in an 85-90% reduction in the amount of time engineers spend creating and maintaining tests. 
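
The fallback idea behind that auto-healing can be sketched crudely; real AI-driven tools learn the alternative locators rather than hard-coding them, and the selectors and URL below are hypothetical:

```python
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    # Try each known locator in turn; report when a fallback "heals" the test.
    for index, (by, value) in enumerate(locators):
        try:
            element = driver.find_element(by, value)
            if index > 0:
                print(f"healed: primary locator failed, matched via {value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"no locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://shop.example.com")  # hypothetical application under test
buy_button = find_with_healing(driver, [
    (By.ID, "buy-now"),
    (By.CSS_SELECTOR, "button[data-test='buy']"),
    (By.XPATH, "//button[text()='Buy']"),
])
```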

In the UI testing space, AI can be used for auto-healing, intelligent timing, detecting visual changes automatically in the UI, and detecting anomalies in performance. 

According to Belcher, AI can be the vital component in creating a more holistic approach to end-to-end testing. 

“We’ve all known that the answer to improving quality was to bring together the insights that you get when you think about all facets of quality, whether that’s functional or performance, or accessibility, or UX, and to think about that holistically, whether it’s API or web or mobile. And so the area that will see the most innovation is when you can start to answer questions like, based on my UI tests, what API tests should I have? And how do they relate? So when the UI test fails, was it an API issue? And then, when a functional test fails, did anything change from the user experience that could be related to that?” Belcher said. “And so the key to doing this is we have to bring all of the end-to-end testing together and all the data that’s produced, and then you can really layer in some incredibly innovative intelligence, once you have all of that data, and you can correlate it and make predictions based on that.”

6 types of Automated Testing Frameworks
  1. Linear Automation Framework – also known as a record-and-playback framework, in which testers don’t need to write code to create functions and the steps are written in sequential order. Testers record steps such as navigation, user input, or checkpoints, and the script is then played back automatically to conduct the test.
  2. Modular-Based Testing Framework – one in which testers divide the application being tested into separate units, functions, or sections, each of which can then be tested in isolation. Test scripts are created for each part and then combined to build larger tests.
  3. Library Architecture Testing Framework – in this testing framework, similar tasks within the scripts are identified and later grouped by function, so the application is ultimately broken down by common objectives.
  4. Data-Driven Framework – test data is separated from script logic and testers can store data externally. The test scripts are connected to the external data source and told to read and populate the necessary data when needed (see the sketch after this list).
  5. Keyword-Driven Framework – each function of the application is laid out in a table with instructions in consecutive order for each test that needs to be run.
  6. Hybrid Testing Framework – a combination of any of the previously mentioned frameworks, set up to leverage the advantages of some and mitigate the weaknesses of others.

Source: https://smartbear.com/learn/automated-testing/test-automation-frameworks/
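
To make the data-driven pattern (No. 4) concrete, here is a minimal pytest sketch in which the test data lives apart from the test logic; in practice the cases would often come from a CSV file or database, and `attempt_login` is a stub standing in for a real application:

```python
import pytest

def attempt_login(user, password):
    # Stub standing in for the application under test.
    return user == "alice" and password == "s3cret"

# Test data is kept separate from the test logic; swapping in a larger
# external data set requires no change to the test function itself.
LOGIN_CASES = [
    ("alice", "s3cret", True),
    ("alice", "wrong", False),
    ("", "", False),
]

@pytest.mark.parametrize("user,password,expected", LOGIN_CASES)
def test_login(user, password, expected):
    assert attempt_login(user, password) == expected
```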

SAST, SCA & QA are the best tools to combat hackers’ smaller, more sophisticated attacks (Thu, 21 Jul 2022) https://sdtimes.com/security/sast-sca-qa-are-the-best-tools-to-combat-hackers-smaller-more-sophisticated-attacks/

As many organizations are bolstering their security measures, hackers have shifted their focus to smaller and more concentrated attacks, according to Daniel Fonseca, senior solutions engineer at Kiuwan, in the webinar “Preventing common vulnerabilities with Kiuwan’s SAST, SCA, and QA tools.”

The National Vulnerability Database (NVD) recorded more than 20,000 CVE security vulnerabilities published in 2021, a 15% increase from 2020. The top five vulnerabilities in the 2021 OWASP Top 10 were broken access control, cryptographic failures, injection, insecure design, and security misconfiguration.
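
To illustrate one of those categories: injection typically stems from concatenating untrusted input into queries. The contrast below is a generic sketch of the kind of pattern a SAST tool flags, not Kiuwan-specific output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Typical SAST finding: attacker-controlled input built into the SQL string.
    # name = "' OR '1'='1" would return every row in the table.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```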

“In order to prevent such vulnerabilities, companies need to shift their priorities and infuse best practices within each organization. For starters, a product roadmap is a high-level summary that visualizes product direction over time, and it’s great for implementing best practices within development,” Fonseca said. 

Another great way to tackle these problems is by employing a Static Application Security Testing (SAST) tool or other tools that can identify risks early in the CI pipeline or within the IDE. 

When security shifts left, implemented earlier in the development cycle with the use of SAST, SCA, and QA tools, it automatically reduces the remediation work that can arise later in the cycle, according to Fonseca. This matters because half of web application vulnerabilities are critical or high-risk, which raises an important challenge for developers, and time to remediation for vulnerabilities is over 60 days, with significant cost accrued during the remediation process.

Watch this on-demand webinar to find out more about how Kiuwan’s two-part platform can be used to prevent security breaches. Kiuwan is composed of the cloud component, which serves as the web console, and the Local Analyzer, which performs scans and looks for patterns within source code in applications. 

“Basically, the reason why we have the Local Analyzer is because we want to make sure that you don’t need to ship or upload the source code of your applications anywhere in the cloud. But that intellectual property is going to remain on your premises, and the local analyzer will run the scans locally and upload only the results to the cloud,” Fonseca said. 

Testing in DevOps (Wed, 01 Dec 2021) https://sdtimes.com/test/testing-in-devops/

Testing in DevOps is as much about the people that are behind the tools as it is about the tools themselves. When they work in synchrony, organizations can see major benefits in the quality of their applications and software development life cycle processes. 

A recent report called The Role of Testing in a DevOps Environment found that more than 50% of survey respondents said the greatest value of testing is that it enables teams to release updates and applications faster with confidence. In addition, nearly 60% of respondents experienced a reduction in issues once applications were in production.  

The survey was conducted during July and August of 2021 by Techstrong Research. More than 550 individuals who are familiar with application testing and DevOps completed the survey. 

RELATED CONTENT: 
How these companies help organizations test applications in DevOps environments
A guide to testing tools for use in DevOps environments

However, some organizations still struggle with how to advance their DevOps testing initiatives because they are also implementing containerization, microservices, and other cloud-native methods that can sometimes complicate the environment. 

According to the survey, a combined 82% of respondents experienced either frequent or some slowdown in testing new software releases. Most organizations defer releases until testing is done because quality trumps speed, according to the report. 

In some organizations, those responsible for testing need to keep up with changes forced onto them by other teams, third-party applications, and platforms, while also keeping up with a growing list of regulatory compliance requirements.

Since most of the applications rest on the cloud, businesses also must quickly react when cloud-based platforms receive updates. “For example, if APIs and other pre-built connectors are no longer working with a cloud-based office productivity suite, employees don’t want to hear ‘It’s not our fault, our cloud vendor had a major update.’ It’s up to internal IT teams to make sure applications work as expected,” the report stated. 
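
A lightweight guard here is a scheduled contract check that fails fast when a vendor update changes a response shape an integration depends on. This is a sketch; the endpoint, token variable, and fields are all hypothetical:

```python
import os

import requests

VENDOR_API = "https://api.officesuite.example/v1/events"  # hypothetical endpoint

def test_calendar_connector_contract():
    resp = requests.get(
        VENDOR_API,
        headers={"Authorization": f"Bearer {os.environ['VENDOR_TOKEN']}"},
        timeout=10,
    )
    assert resp.status_code == 200
    event = resp.json()[0]
    # Fail loudly if the vendor's update dropped or renamed fields we rely on.
    assert {"id", "start", "end"} <= event.keys()
```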

The demand for speed and quality has prompted organizations to look for ways to automate many facets of testing and to change the way they define value. 

“DevOps requires that testing is fast, accurate, meaning low false positive and low false negative rates, and runs without human intervention. Fast can be achieved with more compute power but for the tests to be accurate they need to handle the dynamic and evolving nature of modern applications,” said Gil Sever, co-founder and CEO of Applitools.

Traditional test automation requires frequent and human intervention to update the tests through assertions and navigation, but AI has the ability to learn how the application behaves and respond appropriately, reducing the human intervention. “This makes AI essential for modern software development teams to keep pace with increased release frequency,” Sever added.

But shifting everything to the DevOps mentality of automation is not an overnight process and in some cases, the ideal delivery story won’t even apply to every company or any project, according to Marcus Merrell, senior director of Technology Strategy at Sauce Labs.

“Not all systems can do true DevOps,” said Gareth Smith, general manager of Keysight Technologies. “If I am building a retail website, and it just requires a simple thing, then that’s fine. But if I’m rolling out something that needs to work with various IoT connectors, then not all platforms are able to automate all that.”

QA brings all hands on deck for testing

Testing in DevOps has also seen the growing importance of QA teams in handling the responsibilities of testing.

Quality engineering is being elevated because the C-level sees quality engineering as a key enabler. While developers used to throw things over the wall to QA, they’re bringing QA into the conversation and the industry is seeing much more collaborative DevOps teams, where quality is a shared responsibility between developers and QA and even product owners, according to Dan Belcher, co-founder of mabl.

The interweaving of the maintenance and automation aspects of testing with the speed of DevOps has led to the new term QAOps. 

“Much in the same way that we would think of shifting left as looking at those defects early on because they are then cheaper to fix, now it’s a much greater level of having the whole structure of QA early on and throughout the DevOps cycle,” Belcher said. 

Belcher added that now the CTOs are driving the transformations. “Now it’s a mandate coming from the C-level, to make investments in quality engineering to enable these transformations, whether it’s digital, or DevOps, or UX.”

While many large organizations keep a central QA department, we’re seeing more and more of a shift to automation developers and manual testers being assigned to individual Squads, with a Center of Excellence to support the tools. This allows testers to remain focused on business needs and not worry so much about test infrastructure or tooling, according to Sauce Labs’ Merrell.

While there are still people in organizations who are responsible for testing as part of their job title, testing has also become much more of an all-hands-on-deck effort in DevOps. 

In leading organizations, software quality has become everyone’s responsibility and has expanded beyond “does it work” to “is it the best customer experience”. Developers are increasingly involved, as well as others such as UI/UX designers and domain experts, to ensure the digital experience is not only working but that it is delivering on the goals of the business, according to Applitools’ Sever. 

“This approach of having all hands on deck is beneficial because with the fast feedback cycles of DevOps, it’s much easier for a developer to understand the impact of a change that they’ve made, possibly before it’s gone through a dedicated QA cycle,” said Chris Haggan, product management lead at HCL OneTest. “If it breaks a regression test in a pack, they can see that instantly, and be in there fixing it really quickly, whereas if you still have those handoff processes, it slows things down and you don’t get that feedback.”

AI and automation are key components of testing in DevOps

AI automation tools are necessary to provide insight by ingesting data from a plethora of data sources.

“Once you move to automated testing and a more integrated process, it enables you to check on things every step of the way and see whether you’re still on the right track,” said Joachim Herschmann, senior director and analyst on the Application Design and Development team at Gartner. “I can see the direct impact that my development, bug fixing and enhancements have whether they improve or make it worse.”  

The more data that can be thrown at AI, the better the result is because it includes all of the subtle variants and different data from all the different sites that one connects it to. 

“You can also use it right now to auto generate the test asset universe, what we refer to as the digital twin,” Keysight’s Smith said. Users of the ‘digital twin’ can define what type of test they want and the AI will work out what the best test scenario for that situation is. 

Execution speed can be increased by assigning more resources to the problem, and the key benefit to AI is its ability to learn and improve the tests over time with minimal human intervention, Applitools’ Sever said. 

There are several areas where AI has the potential to help with testing: smart crawling (still in its infancy), self-healing (already well established and understood), and visual validation.

“For visual validation to be effective, it must be accurate to ensure the team is not overwhelmed with false positives — a problem with the traditional pixel-based approach. It needs to handle dynamic content, shifting elements, responsive designs across different screen sizes and device/browser combinations – as well as provide developers and testers ways to optimize the review and maintenance of regressions,” Sever said. 
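
The false-positive problem with naive pixel comparison is easy to see in a sketch, assuming Pillow and NumPy and screenshots of equal size:

```python
import numpy as np
from PIL import Image

def pixel_diff_ratio(baseline_path, candidate_path):
    # Fraction of pixels that differ at all between two same-sized screenshots.
    a = np.asarray(Image.open(baseline_path).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(candidate_path).convert("RGB"), dtype=np.int16)
    return float((np.abs(a - b).sum(axis=-1) > 0).mean())

# A one-pixel layout shift or a changing timestamp moves almost every pixel
# below it, so a plain threshold check fails even though the UI is fine;
# that is the flood of false positives Sever describes.
```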

Automation can also help with typically manual-centric types of tests such as UX testing. UX testing still requires manual input because here the outcomes of a test are subjective. However, testers don’t need to run the tests manually for every device because they can watch tests being run on a desktop app and then decide whether the quality is acceptable or not in an assisted manual testing fashion, mabl’s Belcher explained. 

“A real simple example is if I’m halfway through entering my credit card details, and I talk to somebody, I roll forward my device, my device goes flat, it rotates and then I come back. Now with that accidental rotation of the device and back, does that still work?” Keysight’s Smith said. “And in many cases, that particular combination, where between filling in field six and field seven on a form you rotate the device, is one no one will test, but those happen in the real world. That’s where AI can help look at those different combinations as you’re going through the usual continuous tests.”

DevSecOps now a top priority 

One of the biggest trends of 2021 is that security became a top priority for testing in the wake of massive breaches that resulted in tremendous costs. 

The Executive Order on cybersecurity that the Biden administration signed in May helped to put security awareness in the spotlight, according to Jeff Williams, the co-founder and CTO of Contrast Security.

“I think it’s a real harbinger of better security for apps in the future that they require a minimum standard for AppSec testing, much-improved visibility into what you’ve done to secure your code, including things like security labels,” Williams said. “I look forward to a day when you can go to your online bank, insurance company, social media, or your election system and if you want to know a little bit about how that software was built, and how it was tested for security, it should be available to you; that should actually be a fundamental right. If you’re trusting your life, or your healthcare, or your finances, or your government to a piece of software, I think you have the right to know a little bit about how it was tested for security.”

However, security isn’t always handled with the utmost care at organizations. A lot of this comes down to a lack of security expertise, according to Williams. 

There’s never enough attention being paid to security, in testing or in development. As hard as test and security vendors work to keep up, the bad actors always seem to be one step ahead, aided by the fact that they’ve been every bit as institutionalized as the products they’re subverting, according to Sauce Labs’ Merrell. 

Security testing has traditionally required a lot of expertise to run tools such as SaaS-based or desktop scanners, or even SCA scanning tools. 

Because there are not enough security experts, people don’t try to shift that security testing left and distribute it across their projects. Instead, they keep it centralized and do it once toward the end of the development process, in some gate before code reaches production, which is super inefficient, Contrast Security’s Williams added. 

“You can’t just take tools designed for security experts and hand them to developers early in the process and just say ‘Go,’” Williams said. “They’ll end up with tons of false alarms and tons of wasted time, they won’t be able to tailor the tools properly, and they’ll end up really frustrated with security.” 

This has created a need for tools that can be packaged in a way and in the right place for developers to use. 

“There still is a role for expert-based pentesting and expert threat modeling and things like that. But they should work at the margin. Instead of trying to do everything with a pen test, including the stuff that your tools already did a great job at, have your pen testers focus on the things that are hard and difficult for tools,” Williams said. 

For example, a pen tester can come in and look at the access control scheme to find ways to bypass access controls by accessing as admin. That can then be used as an opportunity to strengthen the pipeline and build automated tests, Williams added. 
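
A pen-test finding like that can be codified as a cheap automated regression test. The following is a sketch of the idea, not Contrast's tooling; the endpoints and credentials are hypothetical:

```python
import requests

BASE = "https://staging.example.com"  # hypothetical environment

def test_regular_user_cannot_reach_admin_api():
    session = requests.Session()
    session.post(f"{BASE}/api/login",
                 json={"user": "regular-user", "password": "test-password"})
    resp = session.get(f"{BASE}/api/admin/users")
    # The access-control bypass found in the pen test must stay fixed:
    # a non-admin session should always be rejected here.
    assert resp.status_code in (401, 403)
```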

Evolving testing in DevOps is primarily a people process

Although tooling is necessary, testing in DevOps is also about a mindset shift on the part of the people in an organization and on making the process easier. After all, they will still have a major part to play in testing in the near future. 

Organizations are showing a strong preference for low code and test automation solutions as opposed to script-based solutions. They are also looking for unified quality engineering platforms, rather than best-of-breed point solutions for various aspects of testing, according to mabl’s Belcher. 

Although AI is being applied to a growing number of use cases as part of testing in DevOps, some experts agree that there will always be humans in the loop and that the purpose of those underlying frameworks is to supercharge those people. 

The next leap in the field is going to be autonomous testing where the team will steer the AI at a high level, review if the AI did the right thing and then spend most of their time focused on more strategic work, such as the usability of the application, according to Sever. 

“AI is still an emerging technology, and its role in testing is evolving constantly. The most visible type of AI tooling we see is around AI-assisted automated test creation,” Merrell said. “These tools, while extremely useful, are still no substitute for the human mind of a tester, nor do they take the place of a skilled test automation developer.”

A guide to testing tools for use in DevOps environments (Wed, 01 Dec 2021) https://sdtimes.com/test/a-guide-to-testing-tools-for-use-in-devops-environments/

The following is a listing of testing tool providers for DevOps environments, along with a brief description of their offerings. 


Applitools is built to test all the elements that appear on a screen with a single line of code. Using Applitools’ Visual AI, you can automatically verify that your web or mobile app both functions correctly and that the digital experience is visually perfect across all devices, all browsers and all screen sizes. Applitools is designed to integrate with your existing test automation rather than requiring you to adopt a new tool and supports all major test automation frameworks and programming languages covering web, mobile, and desktop apps.

Contrast Security is the industry’s most comprehensive Application Security Platform, removing inefficiencies and empowering enterprises to write and release secure code faster. The Contrast platform automatically detects vulnerabilities while developers write code, eliminates false positives, and guides fast vulnerability remediation, which enables application and development teams to collaborate more effectively. This is why many of the world’s largest organizations rely on Contrast to secure their applications in development and in production.

Keysight Technologies Eggplant Digital Automation Intelligence (DAI) platform is the first AI-driven test automation solution with unique capabilities that make the testing process faster and easier. With DAI, you can automate 95% of activities, including test-case design, test execution, and results analysis. This enables teams to rapidly accelerate testing and integrate with DevOps at speed.

HCL OneTest provides UI, API, and performance testing, as well as service virtualization and synthetic data fabrication, to support testers throughout the project lifecycle. It features a script-less, wizard-driven test authoring environment and support for more than 100 technologies and protocols. HCL OneTest belongs to the Secure DevOps portfolio of HCL Software, which is a division of HCL Technologies (HCL). HCL Software develops, markets, sells and supports more than 20 product families in the areas of Customer Experience, Digital Experience, Digital Solutions, Secure DevOps, Security, and Automation.

mabl is the intelligent test automation company that empowers high-velocity software development teams to integrate automated end-to-end testing into the entire development lifecycle. Mabl users benefit from a unified platform for easily creating, executing, and maintaining reliable tests that result in faster delivery of high quality, business critical applications. Learn more at https://www.mabl.com; follow @mablhq on Twitter and @mabl on LinkedIn.

Sauce Labs is the leading provider of continuous testing solutions that enable customers to deliver digital confidence. The Sauce Labs Continuous Testing Cloud delivers a 360-degree view of a customer’s application experience, ensuring that web and mobile applications look, function, and perform exactly as they should on every browser, OS, and device, every single time.

RELATED CONTENT: 
How these companies help organizations test applications in DevOps environments
Testing in DevOps

Appvance is the inventor of AI-driven autonomous testing, which is revolutionizing the $120B software QA industry. The company’s patented platform, Appvance IQ, can generate its own tests, surfacing critical bugs in minutes with limited human involvement in web and mobile applications. AIQ empowers enterprises to improve the quality, performance and security of their most critical applications, while transforming the efficiency and output of their testing teams and lowering QA costs.

Digital.ai Continuous Testing (formerly Experitest) enables organizations to reduce risk and provide their customers satisfying, error-free experiences — across all devices and browsers. Digital.ai Continuous Testing provides expansive test coverage across 2000+ real mobile devices and web browsers, and seamlessly integrates with best-in-class tools throughout the DevOps/DevSecOps pipeline so developers can get test results faster and fix defects earlier in the process, allowing them to deliver secure, high-quality applications at-speed and at-scale. Learn more at www.digital.ai/continuous-testing.

HPE Software’s automated testing solutions simplify software testing within fast-moving agile teams and for continuous integration scenarios. Integrated with DevOps tools and ALM solutions, HPE automated testing solutions keep quality at the center of today’s modern applications and hybrid infrastructures. 

IBM: Quality is essential and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery life cycle. IBM has a market leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more. 

Micro Focus: Accelerate test automation with one intelligent functional testing tool for web, mobile, API and enterprise apps. AI-powered intelligent test automation reduces functional test creation time and maintenance while boosting test coverage and resiliency. Users can test both the front-end functionality and back-end service parts of an application to increase test coverage across the UI and API.

Microsoft’s Visual Studio helps developers create, manage, and run unit tests by offering the Microsoft unit test framework or one of several third-party and open-source frameworks. The company provides a specialized tool set for testers that delivers an integrated experience starting from Agile planning to test and release management, on-premises or in the cloud. 

Mobile Labs (acquired by Kobiton) remains the leading supplier of in-house mobile device clouds that connect remote, shared devices to Global 2000 mobile web, gaming, and app engineering teams. Its patented GigaFox is offered on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

NowSecure: NowSecure is the mobile app security software company trusted by the world’s most demanding organizations. Through the industry’s most advanced static, dynamic, behavioral and interactive mobile app security testing on real Android and iOS devices, NowSecure identifies the broadest array of security threats, compliance gaps and privacy issues in custom-developed, commercial, and business-critical mobile apps. NowSecure customers can choose automated software on-premises or in the cloud, expert professional penetration testing and managed services, or a combination of all as needed. NowSecure offers the fastest path to deeper mobile app security and privacy testing and certification.

Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. 

Parasoft: Parasoft helps organizations continuously deliver quality software with its market-proven, integrated suite of automated software testing tools. Supporting the embedded, enterprise, and IoT markets, Parasoft’s technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating everything from deep code analysis and unit testing to web UI and API testing, plus service virtualization and complete code coverage, into the delivery pipeline. 

Perfecto: Users can pair their favorite frameworks with Perfecto to automate advanced testing capabilities, like GPS, device conditions, audio injection, and more. It also includes full integration into the CI/CD pipeline, and continuous testing improves efficiencies across all of DevOps.  With Perfecto’s cloud-based solution, you can boost test coverage for fewer escaped defects while accelerating testing. 

ProdPerfect: ProdPerfect is an autonomous, end-to-end (E2E) regression testing solution that continuously identifies, builds and evolves E2E test suites via data-driven, machine-led analysis of live user behavior data. It addresses critical test coverage gaps, eliminates long test suite runtimes and costly bugs in production, and removes the QA burden that consumes massive engineering resources. ProdPerfect was founded in January 2018 by startup veterans Dan Widing (CEO), Erik Fogg (CRO), and Wilson Funkhouser (Head of Data Science).

Progress: Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production. 

SmartBear focuses on your one priority that never changes: quality. Our tools are built to streamline your process while seamlessly working with your existing products. Whether it’s TestComplete, Swagger, Cucumber, ReadyAPI, Zephyr, or one of its other tools, SmartBear spans test automation, API life cycle, collaboration, performance testing, test management, and more. They’re easy to try, buy, and integrate, and are used by 15 million developers, testers, and operations engineers at 24,000+ organizations.

SOASTA’s Digital Performance Management (DPM) Platform enables measurement, testing and improvement of digital performance. It includes five technologies: TouchTest mobile functional test automation; mPulse real user monitoring (RUM); the CloudTest platform for continuous load testing; Digital Operation Center (DOC) for a unified view of contextual intelligence accessible from any device; and Data Science Workbench, simplifying analysis of current and historical web and mobile user performance data. 

Synopsys: A powerful and highly configurable test automation flow provides seamless integration of all Synopsys TestMAX capabilities. Early validation of complex DFT logic is supported through full RTL integration while maintaining physical, timing and power awareness through direct links into the Synopsys Fusion Design Platform.

testRigor helps organizations dramatically reduce time spent on test maintenance, improve test stability, and dramatically improve the speed of test creation. This is achieved through its support of “plain English” language that allows users to describe how to find elements on the screen and what to do with those elements from the end-user’s perspective. People creating tests on its system build 2,000+ tests per year per person. On top of that, testRigor helps teams deploy an analytics library in production that makes systems automatically produce tests reflecting the most frequently used end-to-end flows from production. 

Tricentis Tosca, the #1 continuous test automation platform, accelerates testing with a script-less, AI-based, no-code approach for end-to-end test automation. With support for 160+ technologies and enterprise applications, Tosca provides resilient test automation for any use case. 

How these companies help organizations test applications in DevOps environments (Wed, 01 Dec 2021) https://sdtimes.com/test/how-these-companies-help-organizations-test-applications-in-devops-environments/

We asked these tool providers to share more information on how their solutions help organizations test applications in their DevOps environments. Their responses are below.


Gil Sever, co-founder and CEO of Applitools

Modern software development teams are rapidly delivering innovation to market through more frequent and shorter release cycles, but they struggle to fully test the customer experience due to increasing application complexity and an explosion of device/browser combinations.

Applitools is helping over 400 of the world’s top digital brands accelerate the delivery of visually perfect digital experiences across all browsers, devices and screens through AI-powered test automation.

Trained on 1B+ images to deliver 99.9999% accuracy, Applitools’ Visual AI mimics the human eye and brain to deliver reliable full page validation that integrates into your existing test automation — with 50+ SDKs supporting open source frameworks (such as Selenium, Cypress, Playwright, Appium, etc.) and integrations with commercial test automation tools. 

Applitools Eyes provides users with the ability to perform complete validation of the end user experience with a single line of code. Tests utilizing Applitools are 5.8x faster to create, 3.8x more stable and catch 45% more defects.
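
As a rough illustration of what that one-line checkpoint looks like, here is a minimal sketch using the publicly documented Applitools Python SDK with Selenium; the app name, test name, URL, and API key are placeholders:

    from selenium import webdriver
    from applitools.selenium import Eyes, Target

    driver = webdriver.Chrome()
    eyes = Eyes()
    eyes.api_key = "YOUR_API_KEY"  # placeholder; usually read from an env var

    try:
        driver.get("https://example.com/checkout")
        eyes.open(driver, "Demo App", "Checkout page")
        # The single visual checkpoint: Visual AI validates the full page
        # against the approved baseline instead of dozens of hand-written asserts.
        eyes.check("Checkout page", Target.window().fully())
        eyes.close()
    finally:
        eyes.abort()  # no-op if the test already closed cleanly
        driver.quit()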

The Applitools Ultrafast Test Cloud combines Applitools Eyes with the Applitools Ultrafast Grid to deliver a modern approach to cross-browser / cross-device testing that executes tests 18.2x faster than legacy cloud execution grids or device farms.

Applitools’ Visual AI modernizes critical test automation use cases — functional and visual regression testing, web and mobile UI/UX testing, cross browser / cross device testing, localization testing, PDF testing, digital accessibility and legal/compliance testing — to transform the way businesses deliver innovation at the speed of DevOps without jeopardizing their brand.

You can see for yourself and sign up for a free Applitools account at https://www.applitools.com/signup.

RELATED CONTENT: 
Testing in DevOps
A guide to testing tools for use in DevOps environments

Jeff Williams, co-founder and CTO of Contrast Security

We provide a platform of products that are designed to help companies become good at building secure code, doing it fast and reliably. And we do it by giving instant feedback to the folks that need it through the tools they’re already using.

Unlike scanners that plow through your whole application portfolio, Contrast runs in the background, a lot like an APM tool. It gathers a ton of telemetry across all your applications in parallel – APIs, cloud-native, and serverless functions – brings that all together, and gives you dashboards to show you exactly what you need.

Most developers don’t really want another dashboard; what they’d really like is their security results right in JIRA, or they’d like to fail a build, get Slack alerts, or see findings in their IDE. There are a million ways to consume the data that we generate, but I think the most important thing is that we have super accurate data based on observing the actual application run. We’re not guessing about vulnerabilities.

We offer Contrast Assess, which runs within the application and uses instrumentation to find vulnerabilities in your custom code and in your libraries. 
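
Those findings come from watching code like this execute. The snippet below is a generic example of the kind of flaw — a SQL injection — that runtime instrumentation flags, not output from Contrast itself:

    import sqlite3

    def find_user(conn: sqlite3.Connection, username: str):
        # VULNERABLE: untrusted input concatenated into SQL (CWE-89).
        # An instrumentation agent observes this tainted string reaching
        # the database driver at runtime and reports the injection point.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = '" + username + "'"
        ).fetchone()

    def find_user_fixed(conn: sqlite3.Connection, username: str):
        # Parameterized query: the driver handles escaping, so the same
        # input can no longer change the query's structure.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchone()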

We also have Contrast OSS, which finds the known vulnerabilities in all your open source so that you’re not using libraries that have known vulnerabilities. 

Then we added Contrast Protect. It’s the same instrumentation approach, but now we applied it in production so that it’s super high performance and it prevents vulnerabilities from being exploited. 

We also added Contrast Scan, which is a static analysis tool with a new algorithm called demand-driven static analysis, making it much more efficient at finding vulnerabilities and you can run it in your pipeline. As a result of the tremendous uptake in serverless, we launched our first security for serverless offering for AWS Lambda.

——————–

Gareth Smith, general manager of Keysight Technologies

Using artificial intelligence (AI), machine learning (ML) and real user data, Keysight’s Eggplant solution automates test creation and execution. The Eggplant Digital Automation Intelligence (DAI) platform tests and monitors user interface (UI) performance to improve software development, enhance quality, and elevate the customer experience at DevOps speed.  

Instead of testing the code, the DAI platform focuses on the end-to-end customer experience. It provides teams with unparalleled intelligence on where problems lie, significantly reducing the time to resolve these issues. This means organizations can meet customer experience demands and continuously deliver innovation faster while devising strategies to expand DevOps.  
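
Eggplant scripts are written in its own SenseTalk language, so the sketch below is only a conceptual stand-in: a Python example using the pyautogui library to show what driving a UI from the user’s point of view — by image rather than by code or DOM access — looks like. The reference image file is hypothetical, and confidence matching requires pyautogui’s OpenCV extra:

    import pyautogui

    # Locate a button by its screenshot rather than by any internal
    # selector -- the script sees only what a user sees on screen.
    try:
        login = pyautogui.locateCenterOnScreen("login_button.png", confidence=0.9)
    except pyautogui.ImageNotFoundException:
        raise AssertionError("Login button is not visible on screen")

    pyautogui.click(login)
    pyautogui.typewrite("demo-user", interval=0.05)  # type like a human would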

Customers across aerospace and defense, automotive, education, financial services, healthcare, retail, and telecoms rely on this intelligent automation. The DAI platform automates over 95% of activities, including test-case design, test execution, and results analysis. This allows teams to rapidly accelerate testing and integrate with DevOps at speed.

As environments grow more complex and interconnected and with workers distributed, organizations need continuous intelligent test automation that is easy to integrate and scale. Keysight Technologies’ Eggplant automation helps businesses rapidly create products that delight users, test the entire customer experience across any technology, and predict the quantified impact of new product versions on the user before release. 

By partnering with Keysight Technologies, enterprises can deliver better software at a faster pace that delights users.

——————–

Chris Haggan, Product Management Lead, HCL OneTest

A product that is rushed to market with little time for quality assurance can massively damage the reputation of even well-established organizations. The pressure to adopt new technologies, and the fast-paced work environment driven by users who expect more from the applications they work with, will not let up. It is time to find testing solutions that evolve with changing landscapes.

HCL OneTest supports UI, performance and API testing along with synthetic data generation and service virtualization to help meet the challenge of testing highly integrated and complex applications. It features a script-less, wizard-driven test authoring environment, and supports more than 100 technologies and protocols. HCL OneTest helps teams understand the connections and dependencies between services and components to plan integration test strategies, and it generates coverage reports to help identify which processes and services require further testing. Together, these HCL OneTest components help automate and run tests earlier and more frequently, discovering errors sooner, when they are less costly to fix.

To achieve a successful DevOps strategy, software testing teams must automate regression testing to reduce the risk of deploying poor quality software into production. Effective test automation includes application programming interface (API) testing, user interface testing, and overall system testing. Employing service virtualization in conjunction with test automation allows these tests to be executed earlier, while covering a wider range of scenarios. HCL OneTest has all these features and more, and enables organizations to implement continuous testing within their DevOps strategy and bridge the gap between speed and complete testing.
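
HCL OneTest’s authoring is wizard-driven rather than hand-coded, but the kind of API regression check it automates can be illustrated generically. A minimal pytest-style sketch, with a hypothetical endpoint and payload:

    import requests

    BASE_URL = "https://api.example.com"  # hypothetical service under test

    def test_order_roundtrip():
        # Regression check: create an order, then read it back and confirm
        # the two endpoints still agree after the latest build.
        created = requests.post(
            f"{BASE_URL}/orders", json={"sku": "ABC-123", "qty": 2}, timeout=10
        )
        assert created.status_code == 201

        order_id = created.json()["id"]
        fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
        assert fetched.status_code == 200
        assert fetched.json()["sku"] == "ABC-123"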

——————–

Dan Belcher, co-founder of Mabl

We’ve seen a profound shift in how organizations view software testing and quality assurance. Historically, QA received less focus and investment than other functions, but that is changing: CTOs and engineering leaders are looking to quality engineering as a key enabler of DevOps and digital transformation, which require a broader mandate to ensure that quality is embedded deeply throughout the software delivery pipeline. Mabl is the only test automation platform designed to fulfill this new mandate in the enterprise. 

Mabl features a low-code UI and framework that allows everyone, regardless of coding experience, to create automated tests with 80% less effort, spanning web UIs, APIs, and mobile browsers. Using artificial intelligence, mabl reduces test maintenance with autohealing, which detects and adapts to changes automatically. With functional test creation and maintenance streamlined, QE can spend time on broader quality attributes – including performance, accessibility, and UX – while keeping pace with DevOps.
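
Mabl’s auto-healing models are proprietary, but the underlying idea — identifying an element several independent ways and falling back when the primary locator breaks — can be sketched with Selenium. The locators below are hypothetical:

    from selenium.common.exceptions import NoSuchElementException
    from selenium.webdriver.common.by import By

    # Several independent ways to identify the same element, ordered from
    # most precise to most resilient. If a refactor renames the ID, the
    # test "heals" by matching on the test attribute or visible text instead.
    CHECKOUT_LOCATORS = [
        (By.ID, "checkout-btn"),
        (By.CSS_SELECTOR, "[data-test='checkout']"),
        (By.XPATH, "//button[normalize-space()='Checkout']"),
    ]

    def find_checkout(driver):
        for by, value in CHECKOUT_LOCATORS:
            try:
                return driver.find_element(by, value)
            except NoSuchElementException:
                continue
        raise NoSuchElementException("No known locator matched the checkout button")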

Mabl also integrates with popular tools such as Microsoft Teams, Slack, and Jira, so that users can incorporate testing information seamlessly into their workflows and benefit from rich diagnostic data from mabl. Rich reporting supports continuous improvement and better collaboration across the software development pipeline by addressing one of the biggest inhibitors to DevOps: process changes. One in three software development professionals cites the slow pace of change as their biggest DevOps challenge, making easy-to-adopt tools essential for success. Using mabl, quality teams are able to expand test coverage, support faster defect resolution, and ultimately enable digital transformation and DevOps across their organization.

——————–

Marcus Merrell, senior director of Technology Strategy at Sauce Labs

Continuous testing is a key enabler of digital confidence — the knowledge that you’re delivering the best possible user experience to your customers. Digitally confident organizations know that their web and mobile applications look, function and perform exactly as intended, every single time they’re used. 

Sauce Labs gives companies the confidence to deliver a flawless digital brand experience to their customers. The Sauce Labs Continuous Testing Cloud is designed to quickly identify code errors, accelerating the ability to release and update web and mobile applications that look, function and perform exactly as they should on every browser, operating system and device, every single time. Sauce Labs dramatically reduces the time and effort required to discover and fix errors using automated or manual tests, multiple frameworks, leading operating systems, and real or virtual devices for faster, cleaner releases and more successful, trusted customer experiences.
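
In practice, pointing an existing Selenium test at a cloud grid is mostly a matter of swapping the local driver for a remote one. A minimal sketch based on Sauce Labs’ documented W3C remote endpoint; the credentials, region, and test name are placeholders:

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.platform_name = "Windows 11"
    options.browser_version = "latest"
    # Sauce-specific settings travel alongside the standard W3C capabilities.
    options.set_capability("sauce:options", {"name": "Checkout smoke test"})

    driver = webdriver.Remote(
        command_executor="https://USERNAME:ACCESS_KEY@ondemand.us-west-1.saucelabs.com/wd/hub",
        options=options,
    )
    try:
        driver.get("https://example.com")
        assert "Example" in driver.title
    finally:
        driver.quit()  # releases the cloud browser session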

The post How these companies help organizations test applications in DevOps environments appeared first on SD Times.

testRigor helps to convert manual testers to QA automation engineers https://sdtimes.com/test/testrigor-helps-to-convert-manual-testers-to-qa-automation-engineers/ Tue, 01 Dec 2020 14:47:52 +0000

Testing is a crucial piece of the software life cycle, but QA teams often can’t produce enough test coverage quickly enough because they are bogged down by test maintenance. Test maintenance often takes more than 50% of a QA team’s time.

testRigor, a plain English-based testing system, is easing the pain points of QA teams by reducing grunt work and overhead, and accelerating the speed of delivery with its test maintenance and test generation solution. 

testRigor was designed to autonomously generate tests and dramatically reduce the need to maintain those tests. “Our tool is a functional end-to-end testing tool for web and mobile designed to automate away the work of manual testers,” said Artem Golubev, CEO of testRigor.

Test maintenance doesn’t have to be time consuming
When the company set out in 2015, it had a mission to help customers expand their test coverage and generate thousands of tests, but it quickly found that test maintenance was the number one problem it needed to solve. “We realized we needed to help our customers maintain our tests, otherwise those tests would be completely useless and thrown away,” said Golubev.

What the company saw was QA teams spending man-years building out hundreds of tests and then being completely bogged down maintaining that code. If there was a major change to the product, such as changing the checkout process from simply clicking “buy” to a flow involving clicking “add to cart” and then “Cart,” a majority of the tests would fail, and the company would need to invest more man-months adapting thousands of tests to the new flow, resorting to manual testing in the meantime.

“You have a huge test suite with tons of useless code that almost all fails. Do you want to spend years fixing it or do you want to try to figure something else out? So people figure something else, and basically get back to manual testing because that is the only way they can fix them,” Golubev said.

This results in QA teams having to use valuable time to maintain regression tests, or the business decides it’s not worth the time and falls back to manual regression. In fact, more than 50% of QA engineering time is spent on test maintenance. As a result, according to Golubev, about 70% of functionality is still tested manually. “It’s as if you’d need to stop and fix your car every mile on the road,” Golubev added.

“That is a big problem for companies. We want to help companies solve that in a meaningful way,” Golubev continued. According to Golubev, testRigor can help manual QA testers build tests 20 times faster than with Selenium and cut test maintenance by a factor of 200.
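
The difference is easiest to see side by side. Below is a structural Selenium locator of the kind that breaks as soon as the page layout changes, followed (in comments) by a testRigor-style plain-English equivalent; the page URL is hypothetical and the plain-English syntax is illustrative, not official:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://shop.example.com/product/42")  # hypothetical page

    # Brittle: this XPath encodes the page's internal structure. Changing
    # the checkout flow from one "Buy" button to "Add to cart" -> "Cart"
    # breaks it, along with every test that copied it.
    buy_button = driver.find_element(By.XPATH, "//div[@id='product']/form/button[1]")
    buy_button.click()
    driver.quit()

    # The same steps described from the end user's perspective,
    # testRigor-style (illustrative syntax):
    #     click "Add to cart"
    #     click "Cart"
    #     check that page contains "Order summary"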

On top of the plain-English support, testRigor provides a browser plugin that records tests while testers perform their manual regression passes, further speeding up test creation.

Test generation that ensures you’re covered
Another problem QA engineering teams face is making sure they have created enough tests that cover all the functionality within a solution. Test creation can be hard, expensive and inefficient if not done properly, Golubev explained. 

With testRigor, customers can build up to 1,000 tests in anywhere from two weeks to two months, and the platform uses artificial intelligence to constantly learn from end users and create tests based on the most frequently used end-to-end flows in production. Because it does not depend on white-box information such as XPaths, tests are also more stable and adaptable.

“Tests are automatically created based on mirroring how your end-users are using your application in your production plus tests which are produced to map your most important functionality out of the box. It is achieved by using our JavaScript library in your production environment to capture metadata around the functionality & flows your users are taking,” according to testRigor’s website. 

This also helps ensure that the generated tests are actually helpful and reflect the most important areas, eliminating the question of what needs to be covered. According to Golubev, legacy test approaches usually struggle to provide more than 30% test coverage, while testRigor provides more than 90% click-through coverage out of the box.

The solution leverages the same low-code, low-maintenance, plain-English platform that customers can use to build their tests manually. Its JavaScript library and browser plugin ensure tests cover the most frequent and business-critical functionality and flows, and tests are run in parallel so prioritization doesn’t become an issue.

Additionally, because testRigor uses plain-English tests, users can easily see what happened and explore the tree of paths to find out which ones are covered, without needing any coding experience.

All the tests can be run on a branch, test, or production environment within minutes by running them in parallel.
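
That parallel fan-out is what keeps wall-clock time flat as suites grow. Here is a generic sketch of the idea (not testRigor’s actual scheduler), with a stubbed-out test runner:

    from concurrent.futures import ThreadPoolExecutor

    def run_test(test_name: str) -> tuple[str, bool]:
        # Stub: in a real runner this would dispatch one end-to-end test
        # against the chosen branch, test, or production environment.
        return test_name, True

    test_names = [f"end_to_end_flow_{i}" for i in range(200)]

    # 200 independent tests on 32 workers take roughly the time of the
    # slowest few batches rather than 200 sequential runs.
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(run_test, test_names))

    failed = [name for name, ok in results if not ok]
    print(f"{len(results) - len(failed)} passed, {len(failed)} failed")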

“It’s like if you had to pedal your bike to go faster. It is a known problem for you unless you invent an engine. We are that engine for pedaling your bicycle,” Golubev added. 

A new era of testing
With the test maintenance, creation and automation issues solved, testRigor is hoping to usher in a new era of intelligent testing. 

For instance, many automation testers look to the testing pyramid as a model for creating a balanced testing strategy. Typically, at the top of the pyramid you have end-to-end testing, followed by integration testing and then unit testing at the base. The widespread belief is that you can’t have a lot of end-to-end tests because they are flaky, slow to create and a pain to maintain.

testRigor wants to take the testing pyramid and mold it into a testing hourglass, expanding the end-to-end testing portion. End-to-end testing makes up just a small portion of the traditional pyramid because such tests are typically too hard to build and too hard to maintain, according to Golubev. With testRigor, Golubev explained, the company has solved the stability and maintainability problems of end-to-end testing, as well as the ability to generate tests based on actual end-user behavior in production, paving the way for more end-to-end testing.

“We are allowing customers to have a lot of end-to-end tests because we believe this is exactly how systems should be tested otherwise you don’t get actual proof that stuff actually works on behalf of your end users,” Golubev said. “In 2020 and beyond, it is paramount to be able to move faster. You can’t resort back to manual testing anymore. People that move faster end up not only with less issues, but also have positive business impacts.”

testRigor’s four principles for any testing system in 2020 (and beyond) are:

  • No setup required, eliminating painful onboarding and test environment creation
  • The ability to generate tests where possible
  • Never failing without a good reason; flakiness is not a good reason for test failure
  • The ability to maintain tests easily

“We believe that a great software testing system should help you to move faster, not slower. And this is why and how we built testRigor,” the company wrote in a post.

The testRigor system is particularly good at acceptance-level, functional, UI-level regression tests. It is complementary to, not a replacement for, unit and API tests, and it supports calling APIs and API testing. Being able to perform exploratory and regression testing faster can be a huge differentiator from competitors, Golubev explained.

“Rather than dedicating testing resources to this type of tedious, repetitive & time-consuming work, you can offload that testing to testRigor and re-deploy your testers to your core testing needs,” the company wrote. 

Other features include SMS, phone call, audio, email, and downloaded-file testing. It can also provide tests for systems whose underlying code teams don’t control, such as Salesforce, MS Dynamics and SAP implementations, as well as RPA scenarios.

Learn more at testrigor.ai

The post testRigor helps to convert manual testers to QA automation engineers appeared first on SD Times.

Taming heterogeneous tooling into cohesion https://sdtimes.com/agile/taming-heterogeneous-tooling-into-cohesion/ Mon, 24 Aug 2020 17:54:21 +0000

Over the last two decades, the swing of the pendulum from monolithic global tooling to highly specific tooling unique to each group and its needs has led to the birth of the tooling suite: a daisy chain of tools never intended to be linked together. One of the main challenges that has emerged is a plethora of disparate, often highly manual approaches to gathering enough data to drive any kind of informed and cohesive decision-making across value streams and portfolios.

The swing took us to today’s reality where large global organizations, traditionally manufacturing titans or insurance household names, are now software businesses at their core. According to Greg Gould, head of product for Rally at Broadcom, “These transformations were inevitable for companies to stay relevant in the landscapes of digital shifts and disruptions that upended entire industries. Companies have the increasing need to stay relevant and visible to end customers that have a wealth of options and information at their fingertips.” These transformations have also caused a shift in the industry from the days of expansive IT organizations that owned all of the tooling decisions across monolithic engineering groups, to a world of modular R&D organizations that are nimble and performant. These types of modern engineering organizations require the exact right set of tooling for each team’s particular needs.

Next, the advent of Agile, and of Agile-specific tooling, contributed huge gains to what we now know as modern product development practices. With a focus on team autonomy, small-dollar-value purchases were decentralized and could be made online with a development manager’s P-card, without long procurement and approval processes. It was awesome. Teams could use whatever tools helped them be the most productive, and up-and-coming startups offered slick solutions that could be purchased and implemented in no time.

However, there was a realization that teams of seven to nine people, however empowered and autonomous, could only take a product and an organization so far. There was a need to scale Agile beyond the single team to effectively plan and execute at large scale. Some tooling options took the challenge of scaling Agile in stride, gracefully bringing capabilities to market that addressed the needs of not just the development team, but also Agile release trains, program and portfolio teams, product teams, QA teams, and DevOps teams. Others did not, but they were already incumbents at the team level and hard to replace without significant disruption to engineering productivity.

Organizations were struggling to maintain dozens of implementations of various tools, typically highly customized to each group and discipline. While teams were effective at the team level, it was complete spaghetti for anyone even one layer above the day-to-day of a team. Large organizations also faced increasing challenges in attracting and retaining engineering talent, and attempts to solve the issue by forcing tooling onto developers caused mutiny in the ranks. Given the cost of hiring a new engineer, this was creating a situation organizations could not afford.

This conundrum has haunted large-scale organizations for as long as Agile has been around. The concepts of interoperability in a tooling ecosystem were never new, but they were not applied up front to this challenge. How could they have been? Disciplines have continued to evolve even after the arrival of the market-standard tools that govern Agile development. Organizations are trying to run their companies and anticipate market shifts without a holistic view of the data needed to make informed decisions. Companies struggle with poorly bolted-on tools, the result of acquisitions and team preference for flashy options to suit every need. Gould has said of some of the largest Agile organizations in the market: “They manage financials and budgeting in one tool, roadmapping and Agile program- and portfolio-level planning in another, and development and execution in multiple others. And while all of these tools are daisy-chained together, the manual effort to pass data between them is cumbersome and a full-time job. That is what we sought to solve in our recent product, the Rally Adapter for Jira.”

However, there is light at the end of this tunnel, and we are beginning to see cohesion among at least some of the major players in these spaces. Notably, Broadcom’s suite of products, including Rally, Clarity and a newly released premium add-on that brings visibility into the entire execution layer, even when it lives in Jira, has hit the market and found traction. These early, tangible solutions from a major player demonstrate that, with the right product and customer focus, even a decade-plus of silos in tooling can be tamed to provide a highly valuable set of data that drives a business’s decision-making toward its overall goals and objectives. Gould noted that “data normalization isn’t sexy, but the Adapter we provide between Rally and Jira is instrumental in the organizations that use it, driving visibility across silos in their value streams to make confident, data-driven business decisions.”
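
The unglamorous core of such an adapter is field mapping: translating each tool’s schema into one normalized record before any cross-silo rollup is possible. A toy sketch of the idea in Python; the Jira field names and the normalized schema here are assumptions for illustration, not Broadcom’s actual mapping:

    # One tracker's issue, roughly as its REST API might return it
    # (hypothetical shape for illustration).
    jira_issue = {
        "key": "PROJ-42",
        "fields": {
            "summary": "Add login page",
            "customfield_10020": 5,            # story points often live in a custom field
            "status": {"name": "In Progress"},
        },
    }

    def normalize(issue: dict) -> dict:
        """Translate a tracker-specific issue into a shared schema used
        for portfolio-level rollups, regardless of the source tool."""
        fields = issue.get("fields", {})
        return {
            "source": "jira",
            "source_id": issue.get("key"),
            "title": fields.get("summary"),
            "story_points": fields.get("customfield_10020"),
            "state": (fields.get("status") or {}).get("name"),
        }

    print(normalize(jira_issue))
    # {'source': 'jira', 'source_id': 'PROJ-42', 'title': 'Add login page',
    #  'story_points': 5, 'state': 'In Progress'}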

The post Taming heterogeneous tooling into cohesion appeared first on SD Times.

Qualitest brings AI to its software testing and QA suite https://sdtimes.com/test/qualitest-brings-ai-to-its-software-testing-and-qa-suite/ Fri, 12 Jun 2020 17:24:22 +0000

Qualitest has announced the release of Qualisense, a new AI-powered software testing and QA toolkit. Qualisense is the next iteration of the company’s Qualisense Test Predictor service, and will be a standalone product. 

The new solution leverages machine learning to optimize testing and quality delivery, remove bottlenecks, reduce the need for certain tests, help quality engineers be more efficient, and enhance risk-based testing protocols. According to Qualitest, companies using Qualisense have seen a more than 6x increase in release velocity.
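
Qualitest has not published Qualisense’s model, but risk-based test selection in general works by scoring each test on signals such as historical failure rate and proximity to recent code changes, then running the riskiest slice first. A deliberately simple sketch of that idea:

    from dataclasses import dataclass

    @dataclass
    class TestRecord:
        name: str
        failure_rate: float        # fraction of recent runs that failed
        covers_changed_code: bool  # exercises code touched in this release?

    def risk_score(t: TestRecord) -> float:
        # Weight tests that touch freshly changed code more heavily.
        return t.failure_rate * (2.0 if t.covers_changed_code else 1.0)

    history = [
        TestRecord("login_flow", 0.12, True),
        TestRecord("report_export", 0.02, False),
        TestRecord("checkout", 0.08, True),
    ]

    # Run only the two highest-risk tests in the fast feedback loop;
    # the rest can wait for the nightly full run.
    for t in sorted(history, key=risk_score, reverse=True)[:2]:
        print(t.name)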

“Testing was once something that was done at the end of the software development process, however with the advances in testing methodologies, we have been able to entrench it earlier within the process, making it more accurate, quicker, and more effective. Expanding the Qualisense toolkit will allow our clients to embrace best practice quality engineering, and ensure that Qualitest remains on the cutting-edge of software testing methodologies,” said Norm Merrit, CEO of Qualitest.

Other features include a simple UI with easy integration, the ability to learn from tests and data to improve accuracy, and tester/manager sentiment.

More information is available here.

The post Qualitest brings AI to its software testing and QA suite appeared first on SD Times.
