Sauce Labs Archives - SD Times (https://sdtimes.com/tag/sauce-labs/)

How these solution providers support automated testing (https://sdtimes.com/test/how-these-solution-providers-support-automated-testing/, April 1, 2022)

We asked these tool providers to share more information on how their solutions help companies with automated testing. Their responses are below.


Matt Klassen, CMO, Parasoft

Quality continues to be the primary metric for measuring the success of software deliveries. With the continued pressure to release software faster and with fewer defects, it’s not just about speed — it’s about delivering quality at speed. 

Managers must ask themselves if they are confident in the quality of the applications being delivered by their teams. Continuous quality is a must for every organization to efficiently reduce the risk of costly operational outages and to accelerate time-to-market.

Parasoft’s automated software testing solutions integrate quality into the software delivery process for early prevention, detection, and remediation of defects. From deep code analysis for security and reliability, through unit, API, and UI test automation, to performance testing and service virtualization, Parasoft helps you build quality into your software development process.

Parasoft leverages our deep understanding of DevOps to develop AI-enhanced technologies and strategies that solve complex software problems. Our testing solutions reduce the time, effort, and cost of delivering secure, reliable, and compliant software.

With over 30 years of making testing easier for our customers, we have the innovation you need and the experience you trust. Our extensive continuous quality solution spans every testing need and enables you to deliver with confidence. If you want to improve your software quality while achieving your business goals, partner with Parasoft.

RELATED CONTENT:
Targeting a key to automated testing
A guide to automated testing tools

Jonathon Wright, chief technology evangelist, test automation at Keysight 

Artificial Intelligence (AI) makes the process of designing, developing, and deploying software faster, better and cheaper. AI-powered tools enable project managers, business analysts, software coders and testers to be more productive and effective, allowing them to produce higher-quality software faster and at a lower cost.

At Keysight, our Eggplant intelligent automation platform allows citizen developers to easily use our no-code solution that draws on AI, machine learning, deep learning and analytics to automate test execution across the entire testing process. It empowers and enables domain experts to be automation engineers. The AI and ML take on scriptwriting and maintenance as a machine can create and execute thousands of tests in minutes, unlike a human tester. 

Keysight’s intelligent automation platform is a completely non-invasive testing tool, ensuring comprehensive test coverage without ever touching the source code or installing anything on the system-under-test. The technology sits outside of the application and reports on performance issues, bugs and other errors without the need to understand the underlying technology stack. This is critical for regulated industries such as healthcare, government and defense.

AI-powered automation can test any technology on any device, operating system or browser at any layer, from the UI to APIs to the database. This includes everything from the most modern, highly dynamic website to legacy back-office systems to point of sale, as well as command and control systems.

The overarching goal of Keysight’s intelligent automation is to understand how customer experiences and business outcomes are affected by the behavior of the application or software. More than this, though, it is about identifying opportunities for improvements and predicting the business impact of those changes.

Gev Hovsepyan, head of product, mabl

Software development teams are realizing that automated testing is key to accelerating product velocity and reaching the full potential of DevOps. When fully integrated into a company’s development pipeline, testing becomes an early alert system for short-term defects as well as long-term performance issues that could hurt the user experience. The key to realizing this potential: simple test creation and rich, accessible reporting features. 

Mabl is low-code, intelligent test software that allows everyone, regardless of coding experience, to create automated tests spanning web UIs, APIs, and mobile browsers with 80% less effort. Using machine learning and AI, features like auto-healing and Intelligent Wait help teams create more reliable tests and reduce overall test maintenance. Results from every test are tracked within mabl’s comprehensive suite of reporting features, making it easy to understand product quality trends. With test creation simplified and quality data at their fingertips, everyone can focus on resolving defects quickly and improving product quality. 

Mabl also includes native integrations with tools like Microsoft Teams, Slack, and Jira, so that testing information can be seamlessly integrated into workflows and everyone can benefit from mabl’s rich diagnostic data. These reporting features include immediate test results as well as long-term product trends so that quality engineering teams can support faster bug resolution and monitor their product’s overall performance and functionality. This allows software development teams to shift from reacting to failed tests and customer complaints to proactively managing product quality, enabling them to spend more time improving the customer experience.

A guide to automated testing tools (https://sdtimes.com/test/a-guide-to-automated-testing-tools-3/, April 1, 2022)

The following is a listing of automated testing tool providers, along with a brief description of their offerings. 


Keysight Technologies Eggplant Digital Automation Intelligence (DAI) platform is the first AI-driven test automation solution with unique capabilities that make the testing process faster and easier. With DAI, you can automate 95% of activities, including test-case design, test execution, and results analysis. This enables teams to rapidly accelerate testing, improve the quality of software and integrate with DevOps at speed. The intelligent automation reduces time to market and ensures a consistent experience across all devices.

mabl is the enterprise SaaS leader of intelligent, low-code test automation that empowers high-velocity software teams to embed automated end-to-end tests into the entire development lifecycle. Customer-centric brands rely on mabl’s unified platform for creating, managing, and running automated tests that result in faster delivery of high-quality, business critical applications. Learn more at https://www.mabl.com; follow @mablhq on Twitter and @mabl on LinkedIn.

Parasoft: Parasoft helps organizations continuously deliver quality software with its market-proven automated software testing solutions. Parasoft’s AI-enhanced technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software with everything from deep code analysis and unit testing to web UI and API testing, plus service virtualization and merged code coverage. Bringing all this together, Parasoft’s award-winning reporting and analytics dashboard delivers a centralized view of application quality, enabling organizations to deliver with confidence.

RELATED CONTENT:
Targeting a key to automated testing
How these solution providers support automated testing

Appvance is the inventor of AI-driven autonomous testing, which is revolutionizing the $120B software QA industry. The company’s patented platform, Appvance IQ, can generate its own tests, surfacing critical bugs in minutes with limited human involvement in web and mobile applications. 

Applitools: Applitools is built to test all the elements that appear on a screen with just one line of code. Using Visual AI, you can automatically verify that your web or mobile app functions and appears correctly across all devices, all browsers and all screen sizes. Applitools automatically validates the look and feel and user experience of your apps and sites. 

Digital.ai Continuous Testing (formerly Experitest) enables organizations to reduce risk and provide their customers satisfying, error-free experiences — across all devices and browsers. Digital.ai Continuous Testing provides expansive test coverage across 2,000+ real mobile devices and web browsers, and seamlessly integrates with best-in-class tools throughout the DevOps/DevSecOps pipeline.

HPE Software’s automated testing solutions simplify software testing within fast-moving agile teams and for continuous integration scenarios. Integrated with DevOps tools and ALM solutions, HPE automated testing solutions keep quality at the center of today’s modern applications and hybrid infrastructures. 

IBM: Quality is essential and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery life cycle. IBM has a market leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more. 

Micro Focus: Accelerate test automation with one intelligent functional testing tool for web, mobile, API and enterprise apps. AI-powered intelligent test automation reduces functional test creation time and maintenance while boosting test coverage and resiliency. 

Mobile Labs (acquired by Kobiton): Its patented GigaFox is offered on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

NowSecure identifies the broadest array of security threats, compliance gaps and privacy issues in custom-developed, commercial, and business-critical mobile apps. NowSecure customers can choose automated software on-premises or in the cloud, expert professional penetration testing and managed services, or a combination of all as needed.

Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. 

Perfecto: Users can pair their favorite frameworks with Perfecto to automate advanced testing capabilities, like GPS, device conditions, audio injection, and more. With full integration into the CI/CD pipeline, continuous testing improves efficiencies across all of DevOps.

ProdPerfect: ProdPerfect is an autonomous, end-to-end (E2E) regression testing solution that continuously identifies, builds and evolves E2E test suites via data-driven, machine-led analysis of live user behavior data. It addresses critical test coverage gaps, eliminates long test suite runtimes and costly bugs in production, and removes the QA burden that consumes massive engineering resources. 

Progress: Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production. 

Sauce Labs provides the world’s largest cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium.

SmartBear tools are built to streamline your process while seamlessly working with your existing products. Whether it’s TestComplete, Swagger, Cucumber, ReadyAPI, Zephyr, or one of our other tools, we span test automation, API life cycle, collaboration, performance testing, test management, and more. 

Synopsys: A powerful and highly configurable test automation flow provides seamless integration of all Synopsys TestMAX capabilities. Early validation of complex DFT logic is supported through full RTL integration while maintaining physical, timing and power awareness through direct links into the Synopsys Fusion Design Platform.

SOASTA’s Digital Performance Management (DPM) Platform enables measurement, testing and improvement of digital performance. It includes five technologies: TouchTest mobile functional test automation; mPulse real user monitoring (RUM); the CloudTest platform for continuous load testing; Digital Operation Center (DOC) for a unified view of contextual intelligence accessible from any device; and Data Science Workbench, simplifying analysis of current and historical web and mobile user performance data. 

Testmo: Tracking, reporting and monitoring test automation results become more important as teams invest in and scale their automation suites. The new unified test management tool Testmo was designed to manage automated, manual and exploratory testing all in one platform. To accomplish this, it also directly integrates with popular issue, DevOps and CI tools such as GitHub, GitLab and Jira. It supports submitting and collecting results from any automation tool and platform.

testRigor supports “plain English” language that allows users to describe how to find elements on the screen and what to do with those elements from the end user’s perspective. testRigor also helps teams deploy its analytics library in production, which makes systems automatically produce tests reflecting the most frequently used end-to-end flows from production.

Tricentis Tosca, the #1 continuous test automation platform, accelerates testing with a script-less, AI-based, no-code approach for end-to-end test automation. With support for more than 160 technologies and enterprise applications, Tosca provides resilient test automation for any use case.

Targeting a key to automated testing (https://sdtimes.com/test/targeting-a-key-to-automated-testing/, April 1, 2022)

Getting one’s hands on automated tests for the first time is like being given the keys to a Ferrari. And YouTube is chock-full of videos on what happens when someone gets too comfortable too soon in a Ferrari.

Automated tests are fast, but only in the direction you point them. And having a lot of them can easily cause a traffic jam, so it’s important to first make sure they are applied in the right areas and in the right way.

“What I want to achieve is not more and more tests. What I actually want is as few tests as I possibly can because that will minimize the maintenance effort, and still get the kind of risk coverage that I’m looking for,” said Gartner senior director Joachim Herschmann, who is on the App Design and Development team. 

RELATED CONTENT:
How these solution providers support automated testing
A guide to automated testing tools

To get started with automated testing, organizations need to first look at where their tests will deliver the most value to avoid test sprawl and to prevent high maintenance costs. 

“The warm, fuzzy feeling that you’ve got a thousand automated tests per week doesn’t really tell you anything from a risk perspective with risk-based testing,” said Arthur Hicken, the chief evangelist at Parasoft. “So I think this kind of approach to doing value-driven automation as to what’s got the most value and what kind of confidence we need, what kind of coverage we need is important.”

Organizations need to factor in what it costs to create a test and what it costs to maintain a test because often the maintenance winds up costing a lot more than the creation. 

One must also factor in what it costs to execute a test in terms of time. With Big Bang releases a couple of times a year, creating tests is not such a big issue, but if a company is used to rolling out weekly updates, such as with mobile apps, it’s really critical to be able to narrow and focus the automation on exactly the right set of tests.

With a value-driven test automation strategy, organizations can identify full-stack tests that only cover backend business logic and that can be tested more efficiently through API-level integration (or even unit) tests. They can also identify bottlenecks with dependencies that can be virtualized for more efficient testing and automation, according to Broadcom in a blog post.

The testers might decide not to automate some tests that they thought were ideal for automation, because having them performed manually turns out to be more efficient.

Test at the API level

One way to tackle the complexity that comes with automated testing is to test at the API level rather than the UI, according to Hicken.

UI testing, which ensures that an application is performing the right way from the user perspective, is notoriously brittle.

“[UI testing] is certainly the easiest way to get started in the sense that it’s easy to look at a UI and understand what you need to do like start poking things, but at some point, that becomes very hard to continue,” Hicken said. “It’s hard to make boundary cases happen or to simulate error conditions. Also, fundamentally UI testing is the hardest to debug, because you have too much context and it’s the most brittle to maintain.” 

Meanwhile, at the unit level, automated tests are fast to create and execute, and are easy to understand and maintain. After unit testing, one can add the simplest functional tests they have and then backfill with the UI. Now they can make sure that actual business cases and user stories occur, and they can implement these tests against the business logic to get the proper blend of testing, Hicken explained.
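To make that layering concrete, here is a minimal sketch in TypeScript of the kind of unit-level check Hicken describes. The discountedTotal function and its rules are invented for the example; the point is that business logic and boundary cases can be exercised directly, with no browser or UI in the loop.

```typescript
// unit-level test: fast to create and run, easy to maintain, no browser involved
import assert from "node:assert/strict";

// hypothetical business rule under test
function discountedTotal(prices: number[], discountPct: number): number {
  const subtotal = prices.reduce((sum, price) => sum + price, 0);
  return Math.round(subtotal * (1 - discountPct / 100) * 100) / 100;
}

// boundary cases are trivial to exercise here, but awkward to reach through a UI
assert.equal(discountedTotal([10, 20], 0), 30);
assert.equal(discountedTotal([10, 20], 50), 15);
assert.equal(discountedTotal([], 25), 0);

console.log("unit-level checks passed");
```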

“It’s not really that top down approach of if I see a system and automate that system, it’s actually now from a bottom up focus of well in which people are approaching automation at an enterprise scale and asking what’s the blueprint or pattern that we’re trying to do?,” Jonathon Wright, the chief technology evangelist of test automation at Keysight said. “It’s incredibly complex states and the devil’s in the details…they’re asking how do you test those things with realistic data rather than a happy path?” 

Wright explained that happy path testing just won’t cut it anymore because people are testing systems upstream and downstream with all the same kind of data and it all works out in the happy path kind of scenario. Even when people are doing contract testing where each one of the systems is tested end-to-end from an API perspective, people are just using one user with one account with one something and then, of course, it works. But this methodology misses the point, according to Wright. 

“Because people are testing in isolation, they’re also testing their shim or stub or their service virtualization component using Wireshark, so that they’re not actually testing against the real API. So they exclude a lot of things by just locking them out,” Wright added. 

Focus on real-user interactions

A good way to set up automated tests is to focus on how real users are interacting with the systems and how the behavior of those systems are being used. 

“It’s quite scary, because obviously, its perception of what the system does, but actually what the system is doing in the live environment and how the customers are using it. But you kind of assume that they’re going to use it in a particular way, when actually the behavior will change. And that will change weekly and monthly,” Wright said. 

That’s why testers can set up a digital twin of the system as it currently is, and then overlay that with what they thought the system was based on. 

“There’s a different type of behavior mapping; it’s learning from the right hand side this kind of shift right to inform the shift left blueprint model of the system which I think actually helps accelerate everything because you don’t need to create an activity,” Wright added. “You can create it all from real users. You just take their exact journey and then within a matter of minutes, we can actually generate all the automation artifacts with it.”

Teams must then slice the user journeys into smaller, more meaningful pieces and automate against those smaller journeys without going too deep. It’s important that they can automate every click and not merge too many user journeys together in a single test, which results in tests that are hundreds of steps long, according to Gev Hovsepyan, the head of product at mabl.

That initial setup of the environment proves to be an interesting discussion between quality engineers and software engineers and in the organization as a whole. “I think that initial configuration, especially when onboarding the test automation platform, becomes an important discussion point, because the way you set it up, is going to define how scalable that approach is,” Hovsepyan said. 

The role of service virtualization

The key to unlocking continuous testing is having an available, stable, and controllable test environment. Service virtualization makes it possible to simulate a wide range of constraints in test environments, whether due to unavailability or uncontrollable dependencies. 

The behaviors of various components are mimicked with mock responses that can provide an environment almost identical to a live setting. 

“Service virtualization is an automation tester’s best friend. It can help to resolve roadblocks and allow teams to focus on the tests themselves instead of worrying about whether or not they can get access to a certain environment or third party service,” wrote Amit Bhoraniya, the technical lead at Infostretch, in a blog post.
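As a rough illustration of the idea (not any particular vendor’s product), the sketch below stands in for an unavailable or uncontrollable dependency with canned responses; the endpoints and payloads are made up for the example.

```typescript
// a stand-in HTTP service that returns canned responses, so tests don't
// depend on the availability of the real third-party system
import { createServer } from "node:http";

const cannedResponses: Record<string, unknown> = {
  "/accounts/123": { id: "123", status: "active", balance: 42.5 },
  "/rates/usd-eur": { rate: 0.92, asOf: "2022-01-01" }, // frozen test data
};

createServer((req, res) => {
  const body = cannedResponses[req.url ?? ""];
  if (body) {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(body));
  } else {
    // simulate a constraint that is hard to reproduce against the live service
    res.writeHead(503);
    res.end();
  }
}).listen(8081, () => console.log("virtual service listening on :8081"));
```

Tests then point at this stand-in instead of the real environment, which is exactly the availability and controllability benefit described above.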

Organizations can also prevent having too many automated tests by having a unified platform and by ensuring quality earlier on in the pipeline. 

Companies are looking for an approach that helps them not only with functional testing but also with non-functional testing, that scales across different teams on a single platform, and that gives them visibility into the quality of their product across teams and testing domains, according to mabl’s Hovsepyan.

A unified approach helps because the responsibilities for testing and quality assurance are often shared within an organization, and that varies based on their DevOps maturity. 

At more mature organizations in terms of DevOps adoption, there is often a center of excellence of quality engineering, where they deploy the practices and then everyone in the organization participates in assuring the quality, including engineers, or developers.

Organizations that are still somewhere early or in the middle of their journey of DevOps adoption have a significant amount of ownership of quality assurance and quality automation at the team level. And these teams have added quality engineers, and they are responsible for ensuring the quality through automation as well as for manual testing. 

This collaborative approach to test automation can help ensure that developers and testers both know how these tests should be created and maintained.

“Test automation is one of those things that when it’s done right it’s a huge enabler and can really give your business a boost,” Hicken said. “And when it’s done wrong, it’s an absolute nightmare.”

AI can help with test creation and maintenance

The introduction of AI and ML assistance into automated testing makes it easier to shift quality left by providing earlier defect remediation and reducing risk for deliveries.

By collecting and incorporating test data, machine learning can effectively update and interpret certain software metrics that show the state of the application under test. Machine learning can also quickly gather information from large amounts of data and point developers or testers right to the performance problem. 

AI is also excellent at finding those one-in-a-million anomalies which testers might just not catch, according to Jonathon Wright, chief technology evangelist at testing company Keysight. 

In the blog,  “What is Artificial Intelligence in Software Testing?,” Igor Kirilenko, Parasoft’s VP of Development, explains that these AI capabilities “can review the current state of test status, recent code changes, code coverage, and other metrics, decide which tests to run, and then run them,” while machine learning (ML) “can augment the AI by applying algorithms that allow the tool to improve automatically by collecting the copious amounts of data produced by testing.”

By 2025, 70% of enterprises will have implemented an active use of AI-augmented testing, up from 5% in 2021, according to Gartner’s “Market Guide for AI-Augmented Software Testing Tools.” Also by 2025, organizations that ignore the opportunity to utilize AI-augmented testing will spend twice as much effort on testing and defect remediation compared with their competitors that take advantage of AI.

AI-augmented software testing tools can provide capabilities for test case and test data generation, test suite optimization and coverage detection, test efficacy and robustness, and much more. 

“AI can change the game here, because even in the decades that we’ve had test automation tools, there’s very little that it offered you regarding any guidance like how do I determine the test cases that I need?” Herschmann said. 

Low-code speeds up development time, but what about testing time? (https://sdtimes.com/lowcode/low-code-speeds-up-development-time-but-what-about-testing-time/, March 24, 2022)

Low-code has been experiencing widespread adoption across the industry over the past several years, but testing apps created using low-code tools is unfortunately falling behind.

According to Raj Rao, chief strategy officer for Sauce Labs low-code solution AutonomIQ, companies often set up citizen development programs where employees with no technical background can use low-code or no-code tools to build applications. However, when it comes time to test those applications, a lot of testing tools use traditional code that citizen developers don’t necessarily understand.

For those companies, testing is a bottleneck because the citizen developers who write the application then can’t test them. This bottleneck results in a number of issues, such as test fatigue and test debt. 

Test fatigue sets in when users are required to perform a full battery of tests repeatedly in a manual fashion. This is when the users responsible for testing start to make mistakes or give up due to pre-defined test windows. The effects of this are that defects can start creeping into production. 

Test debt is exactly what it sounds like. Just like when you cannot pay your credit card bill, when you cannot test your applications, the problems that are not being found in the application continue to compound. 

Eliminating test debt requires first establishing a sound test automation approach. Using this, an organization can create a core regression test suite for functional regression and an end-to-end test automation suite for end-to-end business process regression testing.

Because these are automated tests, they can be run as often as code is modified. They can also be run concurrently, reducing the time it takes to run the core regression suites. According to Rao, using core functional regression tests and end-to-end regression tests is basic table stakes in an organization’s journey to higher quality.
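The concurrency point can be illustrated with a small sketch; the suite names and the runSuite body are placeholders, but the pattern of launching independent suites together instead of one after another is what shortens the wall-clock time.

```typescript
// run independent regression suites concurrently to cut total run time
async function runSuite(name: string): Promise<void> {
  console.log(`running ${name}...`);
  // in practice this would invoke a test runner or a testing API
  await new Promise((resolve) => setTimeout(resolve, 1000));
  console.log(`${name} finished`);
}

const coreRegression = ["login", "checkout", "search"];

async function main(): Promise<void> {
  const started = Date.now();
  // suites that share no state can safely run in parallel
  await Promise.all(coreRegression.map((suite) => runSuite(suite)));
  console.log(`core regression done in ${Date.now() - started}ms`);
}

main();
```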

Rao explained that when getting started with test automation, it can seem like a daunting task, and a massive mountain that needs climbing. “You cannot climb it in one shot, you have to get to the base camp. And the first base camp should be like a core regression test suite, that can be achieved in a couple of weeks, because that gives them a significant relief,” he said. 

According to a blog post from Sauce Labs, in addition to reducing test debt, test automation can save employees time and save companies money. The post notes that companies that transition to low-code test automation tend to see a 25% to 75% reduction in costs.

“Manual testing takes a lot of effort,” said Rao. “And it is not a one time effort; it is a repeated effort, because you keep making changes to these business applications all the time. It is easy to make changes, but hard to successfully deploy, because deployment includes testing and validation.”

This is especially true when building applications on a platform like Salesforce or Oracle, where the underlying platform gets regular updates. For example, Salesforce delivers three major updates per year, and with every update comes a large set of new features that then need to be tested against. According to Rao, doing this testing manually can take several weeks to complete the full regression.

Addressing these issues sooner rather than later can help companies stay on top of things. Rao referenced Gartner’s prediction that by 2023, the number of citizen developers in the enterprise will be four times greater than the number of professional developers. “This is a group that cannot be ignored. And we have to provide the right capabilities and tools and frameworks for them to be successful,” said Rao.


To learn more, register for the free SD Times virtual conference, Low-Code/No-Code Developer Day, which is taking place on April 13, 2022. Rao will be giving a presentation on low-code test automation and will expand on some of these points.

The future of testing is DevOps speed with managed risk (https://sdtimes.com/test/the-future-of-testing-is-devops-speed-with-managed-risk/, October 1, 2021)

Just as big data transformed the way organizations approach intelligence and cloud transformed the way they think about infrastructure, DevOps is fundamentally altering the way organizations think about software development. In a DevOps world, software development is no longer a balancing act between speed and quality but a quest for both, as forward-thinking development teams aim to increase both release frequency and release velocity while ensuring they have the utmost confidence in production.  

The driving force, as always, is the customer. Users expect applications to have the latest and best features and functionality at all times. Not tomorrow. Not after the next planned software release. Now. And always. Just don’t even think about impacting application performance or usability to deliver those updates, or those customers you’re catering to won’t be customers for long. 

These demanding customer expectations, combined with technological advancements in software development made possible by DevOps and CI/CD, have developers focused on pushing smaller and smaller increments of code into production faster and faster, all while product and QA teams grow increasingly focused on ensuring that the user experience remains as close to flawless as possible at all times.

Against this backdrop, progressive development teams are benefitting from an emerging new approach to testing, one that augments traditional front-end functional testing with error monitoring in production. The combination of these two test methodologies into a single comprehensive approach enables developers to benefit from deep automation of application intent prior to production while also layering in multiple production safety nets in the form of error reporting, rollbacks, and user analytics. 

“In the modern era of DevOps-driven development, a testing strategy that does not extend into production is simply not complete,” said John Kelly, CTO, Sauce Labs.

The ability to pair front-end functional testing in dev and test environments, with error monitoring in production, was the driving force behind Sauce Labs’ recent acquisition of Backtrace, a provider of best-in-class error monitoring solutions for software teams. Sauce Labs is already well-known for delivering one of the industry’s leading test automation platforms. Now, with the addition of Backtrace, the company enables developers of web, mobile, and gaming applications to quickly observe and remediate errors in production as well, often before they’re even discovered by end-users. 

For development teams looking to keep up with the pressure to accelerate the release of products into highly competitive and demanding markets, confidence is everything, according to Kelly. 

“As a developer, knowing that you can quickly discover and fix any bugs that make it to production, and often before production, is a tremendous source of empowerment,” said Kelly. “Having the safety net of error monitoring gives you a level of confidence that you just don’t have otherwise, and that in turn enables you to move with greater pace and deliver releases with greater frequency and velocity.” 

None of which is to say that the core components of front-end test automation are any less important to a comprehensive testing strategy, Kelly said. 

“It’s and, not or,” he said. “The development teams we speak to every day are still heavily focused on automating application intent in dev and test environments. But they’re also realizing that there’s no substitute for understanding how the application functions and performs in the production environment, and so they’re taking all the investments they’ve made in cross-browser testing, in mobile app testing, in API testing, and in UI and visual testing and they’re now augmenting them with error monitoring in production.”  

In fact, Kelly says that error monitoring itself can be leveraged directly in test and dev environments to create additional value for developers. 

“When you deploy it directly in dev and test environments, error monitoring really complements Selenium, Appium, and other scripted front-end test frameworks by providing an additional layer of depth and visibility into the root cause of an application failure,” said Kelly.
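A rough sketch of that pairing (not Backtrace’s actual API): a scripted browser test is wrapped so that any failure is reported with context before being rethrown. The URL, element ID and reportError function are hypothetical.

```typescript
// pair a scripted front-end test with error capture for root-cause context
import { Builder, By, until } from "selenium-webdriver";

// hypothetical reporter; a real error-monitoring SDK would be called here
async function reportError(err: unknown, context: Record<string, string>): Promise<void> {
  console.error("captured failure", { err, ...context });
}

async function checkoutSmokeTest(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://example.com/checkout"); // hypothetical page
    await driver.findElement(By.id("place-order")).click();
    await driver.wait(until.urlContains("/confirmation"), 5000);
  } catch (err) {
    // capture the failure with enough context to trace the root cause
    await reportError(err, { test: "checkoutSmokeTest", url: await driver.getCurrentUrl() });
    throw err;
  } finally {
    await driver.quit();
  }
}

checkoutSmokeTest().catch(() => {
  process.exitCode = 1;
});
```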

Importantly, according to Kelly, developers can also leverage the insights gleaned from error monitoring in production to expand and improve future test coverage during the development and test integration phases of CI/CD. 

“It’s about enabling developers to shift both left and right and create the kind of continuous feedback loop that’s necessary to mitigate risk and drive quality at speed,” he said.

Ultimately, according to Kelly, that ability to combine test signals, understand customer experience insights, and create continuous improvement loops represents the future of testing in the DevOps era. 

“Development teams no longer have to chase this holy grail of perfection in test,” said Kelly. “And that’s a good thing because perfection has never been less realistic than it is today when the market demands they deliver releases with unprecedented speed and frequency. If we can shift our focus away from perfection in test to a philosophy of risk management, where faster delivery of value is balanced with quick remediation and clear visibility of user impact, that’s a real sea change in the way we think about testing and quality.”

To learn more about how Sauce Labs is helping organizations usher in a new era of testing, visit https://saucelabs.com/.


Content provided by Sauce Labs and SD Times. 

SD Times news digest: TypeScript 4.4 beta, Rust support improvements in Linux kernel, Sauce Labs acquires Backtrace (https://sdtimes.com/msft/sd-times-news-digest-typescript-4-4-beta-rust-support-improvements-in-linux-kernel-sauce-labs-acquires-backtrace/, July 6, 2021)

Some of the major highlights of the TypeScript 4.4 beta are control flow analysis of aliased conditions, symbol and template string pattern index signatures and more. 

With control flow analysis of aliased conditions enabled, developers don’t have to convince TypeScript of a variable’s type whenever it is used because the type-checker leverages something called control flow analysis to deduce the type within every language construct.
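A short example of what that looks like in practice; prior to 4.4, storing the typeof check in a constant would lose the narrowing.

```typescript
// TypeScript 4.4: narrowing flows through a condition stored in a const
function formatValue(value: string | number): string {
  const isString = typeof value === "string"; // aliased condition

  if (isString) {
    // `value` is narrowed to string here via the alias
    return value.toUpperCase();
  }
  // and narrowed to number here
  return value.toFixed(2);
}

console.log(formatValue("hello"), formatValue(3.14159));
```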

TypeScript also now lets users describe objects where every property has to have a certain type using index signatures to form dictionary-like types, where string keys can be used to index into them with square brackets.
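For example, the classic string index signature describes dictionary-like objects, and 4.4 adds template string pattern (and symbol) index signatures on top of it:

```typescript
// classic string index signature: any string key maps to a number
interface Metrics {
  [metricName: string]: number;
}

// template string pattern index signature (new in 4.4):
// property keys must match the `data-${string}` pattern
interface DataAttributes {
  [attr: `data-${string}`]: string;
}

const metrics: Metrics = { testsRun: 120, failures: 3 };
const attrs: DataAttributes = { "data-test-id": "login-button" };

console.log(metrics["testsRun"], attrs["data-test-id"]);
```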

Additional details on all of the highlights in the new version are available here.

Rust support improvements in Linux kernel 

The Linux kernel received several major improvements to overall Rust support, including the removal of panicking allocations and added support for the beta compiler as well as for testing.

The goal with the improvements is to have everything the kernel needs in the upstream ‘alloc’ and to drop it from the kernel tree. ‘Alloc’ is now compiled with panicking allocation methods disabled, so that they cannot be used within the kernel by mistake.

As for compiler support, Linux is now using the 1.54-beta1 version as its reference compiler. At the end of this month, `rustc` 1.54 will be released, and the kernel will move to that version as the new reference. 

Additional details on all of the support improvements are available here.

Sauce Labs acquires Backtrace

Sauce Labs announced that it has acquired Backtrace, a provider of error monitoring solutions for software teams. 

 “Combined with our recent acquisitions of API Fortress, AutonomIQ, and TestFairy, the addition of Backtrace extends Sauce Labs solutions to meet every stage of the development journey. We’re thrilled to welcome the talented people and products of Backtrace and look forward to supporting their high-quality innovation as part of the Sauce Labs team,” said Aled Miles, president and CEO of Sauce Labs.

Backtrace offers a cross-platform error monitoring solution for desktop, mobile, devices, game consoles, and server platforms that helps organizations reduce debugging time and improve software quality.

Apache weekly update

Last week at the Apache Software Foundation (ASF) saw the release of Apache Camel 3.11, which includes a new ‘camel-kamelet-main’ component intended for developers to try out or develop custom Kamelets, a ‘getSourceTimestamp’ API on ‘Message’ and more.

Apache MetaModel, which was a common interface for discovery, exploration of metadata and querying of different types of data sources, has been retired.

Also, Apache Druid was found to have a vulnerability that allowed authenticated users to read data from sources other than those intended.

Other new releases last week included Apache Geode 1.13.3 and 1.12.3. Additional details on all news from the ASF are available here.  

Sauce Labs report: Most companies fail to meet continuous testing benchmarks (https://sdtimes.com/sauce-labs/sauce-labs-report-most-companies-fail-to-meet-continuous-testing-benchmarks/, April 24, 2019)

Sauce Labs published its first report that analyzes how companies measure up to benchmarks of four key continuous testing pillars. The company also announced its acquisition of Screener and availability of Sauce Headless.

The Continuous Testing Benchmark Report was based on user data from the company’s continuous testing cloud between June and December of last year.

The report found that the majority of companies fared dismally when compared to the test quality and test run-time benchmarks. It stated that only 18.75% of organizations passed 90% of tests they run and 35.94% of organizations completed their tests in an average of two minutes or less.

However, numbers were much higher regarding test platform coverage and test concurrency. 62.53% of organizations tested across 5 or more platforms on average and 70.88% utilized at least three-quarters of their available testing capacity during peak testing periods.

Just 6.23% of organizations achieved the benchmark for excellence across all four categories.

“As organizations continue to prioritize continuous testing as the foundation of their agile development efforts, we are excited to see how their performance against these benchmarks improves over time, and we look forward to doing our part to help them reach their goals,” said Charles Ramsay, the CEO of Sauce Labs.

To expand its continuous testing capabilities, Sauce Labs purchased Screener, a provider of automated visual testing solutions.

Screener allows users to test their UI across multiple browsers, devices and operating systems to automatically detect visual errors for easier integration into the DevOps workflow.

It also allows developers to test individual UI components to get fast feedback in the early stages of the development cycle.

“As more code and complexity shifts to the front-end of the development process, visual component testing is quickly becoming a critical part of any comprehensive shift-left testing strategy,” said Loyal Chow, the founder of Screener.

Sauce Labs also released Sauce Headless, which enables development teams to get fast feedback on code by running atomic tests early in the delivery pipeline. It leverages headless Chrome and Firefox browsers on Linux in a container-based infrastructure so development teams can identify issues early and keep the pipeline moving by testing on every commit.
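For context, headless execution itself looks roughly like the sketch below, which is a plain local Selenium script against headless Chrome rather than Sauce Headless’ hosted infrastructure; the point is that the same test logic runs without a visible browser, which is what makes it cheap enough to run on every commit.

```typescript
// the same Selenium test, pointed at a headless Chrome instance
import { Builder, until } from "selenium-webdriver";
import { Options } from "selenium-webdriver/chrome";

async function atomicSmokeTest(): Promise<void> {
  const options = new Options().addArguments("--headless", "--disable-gpu");
  const driver = await new Builder()
    .forBrowser("chrome")
    .setChromeOptions(options)
    .build();
  try {
    await driver.get("https://example.com");
    await driver.wait(until.titleContains("Example"), 5000);
    console.log("smoke test passed");
  } finally {
    await driver.quit();
  }
}

atomicSmokeTest();
```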

Pushing automated testing to its limits (https://sdtimes.com/test/pushing-automated-testing-to-its-limits/, December 4, 2018)

The software industry keeps expressing it is under immense pressure to keep up with market demand and deliver software faster. Automated testing is an approach that came out to not only help speed up software delivery, but to ensure the software that did come out did what it was supposed to do. For some time automated testing has been great at removing repetitive manual tasks, but the industry is only moving faster and businesses are now looking for ways to do more.

“Rapid change and accelerating application delivery is a topic that used to really be something only technology and Silicon Valley companies talked about. Over just the past few years, it has become something that almost every organization is experiencing,” said Lubos Parobek, vice president of product for the testing company Sauce Labs. “They all feel this need to deliver apps faster.”

RELATED CONTENT: A guide to automated testing tools

This sense of urgency has businesses looking to leverage test automation even further and go beyond just automating repetitive tasks to automating in dynamic environments where everything is constantly changing. “As teams start releasing even weekly, let alone daily or multiple times a day, test automation needs to change. Today test automation means ‘automation of test execution,’ but the creation and maintenance of tests, impact analysis and the decision of which test to run, the setup of environments, the reviewing of results, and the go/no-go decision are all entirely manual and usually ad-hoc,” said Antony Edwards, CTO of the test automation company Eggplant. “The key is that test automation needs to expand beyond the ‘test execution’ boundary and cover all these activities.”

Pushing the limits
Perhaps the biggest drivers for test automation right now are continuous integration, continuous delivery, continuous deployment and DevOps, because they are what is pushing organizations to move faster and get software into the hands of their users more quickly, according to Rex Black, president of Rex Black Consulting Services (RBCS), a hardware and software testing and quality assurance consultancy.

“But the only way for test automation to provide value and to not be seen as a bottleneck is for it to be ‘continuous,’” said Mark Lambert, vice president of products at the automated software testing company Parasoft.

According to Lambert, this happens in two ways. First, the environment has to be available at all times so tests can be executed at any time and anywhere. Secondly, the tests need to take change into account. “Your testing strategy has to have change resistance built into it. Handling change at the UI level is inherently difficult, which is why an effective testing strategy relies on a multi-layer approach. This starts with a solid foundation of fully automated unit tests, validating the granular functionality of the code, backed up with broad coverage of the business logic using API layer testing,” said Lambert. “By focusing on the code and API layers, tests can be automatically refactored, leaving a smaller set of the brittle end-to-end UI level tests to manage.”
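As a rough illustration of the API-layer piece of that strategy, the endpoint and response shape below are invented, but the shape of the test is typical: assert on the business logic over HTTP, underneath the brittle UI.

```typescript
// API-layer check: validates business logic without driving the UI
import assert from "node:assert/strict";

interface Order {
  items: { price: number }[];
  total: number;
}

async function checkOrderTotal(baseUrl: string): Promise<void> {
  // Node 18+ global fetch assumed; the endpoint is hypothetical
  const res = await fetch(`${baseUrl}/api/orders/1001`);
  assert.equal(res.status, 200);

  const order = (await res.json()) as Order;
  const expected = order.items.reduce((sum, item) => sum + item.price, 0);
  assert.equal(order.total, expected, "order total must equal the sum of item prices");
}

checkOrderTotal("http://localhost:3000")
  .then(() => console.log("API check passed"))
  .catch((err) => {
    console.error(err);
    process.exitCode = 1;
  });
```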

Part of that strategy also means having to look at testing from a different angle. According to Eggplant’s Edwards, testing has shifted from testing to see if something is right, to testing to see if something is good. “I am seeing more and more companies say, ‘I don’t really care if my product complies with a [specification] or not,’ ” he said. “No one wants to be the guy saying no one is buying our software anymore, and everyone hates it, but at least it complies with the spec.” Instead, testing is shifting from thinking about the requirements to thinking about the user. Does the software increase customer satisfaction, and is it increasing whatever the business metric is you care about?

“If you care about your user experience, if you care about business outcome, you need to be testing the product from the outside in, the way a user does,” Edwards added.

Looking at it from the user’s side involves monitoring performance and the status of a solution in production. While that may not seem like it has anything to do with testing or automation, it’s about creating automated feedback loops and understanding the technical behavior of a product and the business outcome, Edwards explained. For example, he said if you look at the page load speed of all your pages and feed that back into testing, instead of automating tests that say every page has to respond in 2 seconds, you can get more granular and say certain pages need to load faster while other pages can take up to 10 seconds and won’t have a big impact on experience.
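One way to express that granularity is as per-page performance budgets fed by production measurements; the page names and numbers below are invented, but the structure shows the idea.

```typescript
// per-page load-time budgets instead of one blanket threshold
import assert from "node:assert/strict";

// hypothetical budgets (in ms), informed by real-user page-load data
const loadBudgets: Record<string, number> = {
  "/": 1500,                // landing page is experience-critical
  "/checkout": 2000,
  "/reports/annual": 10000, // heavy report page; users tolerate a longer wait
};

function assertWithinBudget(page: string, measuredMs: number): void {
  const budget = loadBudgets[page];
  if (budget === undefined) {
    throw new Error(`no load budget defined for ${page}`);
  }
  assert.ok(measuredMs <= budget, `${page} took ${measuredMs}ms, budget is ${budget}ms`);
}

assertWithinBudget("/checkout", 1740); // passes: within its own budget
console.log("budget checks passed");
```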

“Testing today is too tied to the underlying implementation of the app or website. This creates dependencies between the test and the code that have nothing to do with verification or validation, they are just there because of how we’ve chosen to implement test automation,” Edwards said.

But just because you aren’t necessarily testing something against a specification anymore, doesn’t mean you shouldn’t be testing for quality, according to Thomas Murphy, senior director analyst at the research firm Gartner. Testing today has gone from a calendar event to more of a continuous quality process, he explained.

“There is a fundamental need to be shipping software every day or very frequently, and there is no way that testing can be manual. You don’t have time for that. It needs to be fast,” he said.

Some ways to speed things up are to capture the requirements and create the tests upfront. Two approaches that really drove the need for automated testing are test-driven development (TDD) and behavior-driven development (BDD). TDD is the idea that you are going to write the test first, then write the code to pass that test, according to Sauce Labs’ Parobek. BDD is where you enable people like the business analyst, product manager or product owners to write tests at the same time developers are developing code.
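In miniature, TDD looks like the sketch below: the assertion is written first and fails until the implementation underneath is written to satisfy it (slugify is just an invented example function).

```typescript
// TDD in miniature: test first, then the simplest code that makes it pass
import assert from "node:assert/strict";

// step 1: the test, written before the implementation exists
function testSlugify(): void {
  assert.equal(slugify("Automated Testing 101"), "automated-testing-101");
}

// step 2: the simplest implementation that satisfies the test
function slugify(title: string): string {
  return title.toLowerCase().trim().replace(/[^a-z0-9]+/g, "-");
}

testSlugify();
console.log("test passed");
```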

These approaches have helped teams get software out multiple times a day because they don’t have to wait for days to create the tests and get back results, and it enables them to understand if they make a mistake right away, Parobek explained.

However, if a developer is submitting new code or pull requests to the main branch multiple times a day, it can be hard to keep up with TDD and BDD, making automated testing impossible because there aren’t tests already in place for these changes. In addition, it slows down the process because now you have to go in manually to make sure the code that is being submitted doesn’t break any key existing function, according to Sauce Labs’ Parobek.

But Parobek does explain if you write your test correctly and follow best practices, there are ways around this. “As you change your application and as you add new functionality, you do not just create new tests, but you might have to change some existing tests,” he said.

Parobek recommends page object modeling as a best practice. It enables users to create tests in a way that is very easy to change when the behavior of the app is changed, he explained.  “It enables you to abstract out and keep in one place changes so when the app does change, you are able to change one file that then changes a variety of test cases for you. You don’t have to  go into 100 different test cases and change something 100 times. Rather you just change one file that is abstracted through page objects,” he said.
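A minimal page object sketch shows what that abstraction looks like; the page URL, field IDs and LoginPage class are hypothetical, and driver construction is omitted for brevity.

```typescript
// page object: this page's locators and interactions live in one place,
// so a UI change means editing this file rather than every test
import { By, WebDriver } from "selenium-webdriver";

export class LoginPage {
  // single source of truth for the page's locators
  private readonly username = By.id("username");
  private readonly password = By.id("password");
  private readonly submit = By.css("button[type='submit']");

  constructor(private readonly driver: WebDriver) {}

  async open(): Promise<void> {
    await this.driver.get("https://example.com/login");
  }

  async logIn(user: string, pass: string): Promise<void> {
    await this.driver.findElement(this.username).sendKeys(user);
    await this.driver.findElement(this.password).sendKeys(pass);
    await this.driver.findElement(this.submit).click();
  }
}

// tests call page-object methods and never touch raw locators directly,
// e.g. await new LoginPage(driver).logIn("user", "secret");
```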

Another best practice, according to Parobek, is to be smart about locators. Locators enable automated tests to identify different parts of the user interface. A common type of locator is the ID, which enables tests to identify elements. For example, when an automated test needs to test a button, if you’ve attached a locator ID to it, the test can recognize the button even if you’ve moved it somewhere else on the page. Other approaches are to use names, CSS selectors, classes, tags, links, text and XPath. “Locators are an important part for creating tests that are simpler and easier to maintain,” said Parobek.
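The difference in resilience is easy to see side by side; the element names below are invented.

```typescript
import { By } from "selenium-webdriver";

// resilient: tied to identifiers the team controls
const submitById = By.id("submit-order");
const submitByTestAttr = By.css("[data-test='submit-order']");

// brittle: tied to the current DOM structure, breaks when the layout changes
const submitByPosition = By.xpath("/html/body/div[2]/div[1]/form/button[3]");

console.log(submitById, submitByTestAttr, submitByPosition);
```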

In order to successfully use locators, Parobek thinks it is imperative that the development and QA teams collaborate better. “If QA and development are working closely together, it is easy to build apps that make it easier to test versus development not thinking about testability.”

No matter how much you end up being able to automate, Black explained that in order to be successful at it, you will still always have to go back to the basics. If you become too aspirational with automation and have too many failed attempts, it can reduce management’s appetite for investing. “You need to have a plan. You need to have an architecture,” Black said. “The plan needs to include a business case so you can prove to management it is not just throwing money into a bright shiny object.”

“It’s the boring basics. Attention to the business case. Attention to the architecture. Take it step by step and course correct as you go,” Black added.

The promise of artificial intelligence in automated testing
As artificial intelligence (AI) advances, we are seeing it implemented in more tools and technologies as a way to improve user experience and provide business value. But when it comes to test automation, the promise of AI is more inspirational than operational, RBCS’ Black explained.

“If you go to conferences, you will hear about people wanting to use it, and tool vendors making claims that they are able to deliver on it. But at this point, I have not had a client tell me or show me a successful implementation of test automation that relies on AI in a significant way,” he said. “What is happening now is that tool vendors are sensing that this is going to be the next hot thing and are jumping on that AI train. It is not a realized promise yet.”

When you think about AI, you think about a sentient element figuring things out automatically, according to Gartner’s Murphy, when in reality it tends to be some repeated pattern of learning something to be predictive or learning from past experiences. In order to learn from past experiences, you need a lot of data to feed into your machine learning algorithm. Murphy explained AI is still new and a lot of the test information that companies have today is very fragmented, so when you hear companies talk about AI in regards to test automation it tends to be under-delivering or over-promising.

Vendors that say they are offering an AI-oriented test automation tool are often just performing model-based testing, according to Murphy. Model-based testing is an approach where tests are automatically generated from models. The closest thing we have today to an AI-based test automation tool is image-based recognition: solutions that understand when things are broken and, through visual validation, can show when and where it happened, Murphy explained.
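As a rough illustration of what that kind of visual validation boils down to, the sketch below compares two screenshots with the Pillow imaging library. The file paths are hypothetical, and commercial tools are far more sophisticated (they tolerate rendering noise and ignore dynamic regions), but the underlying idea is an image diff.

```python
from PIL import Image, ImageChops


def screenshots_match(baseline_path, current_path):
    """Return True if two same-sized screenshots are pixel-identical.

    A crude stand-in for image-based validation: real tools add fuzziness,
    ignore regions and smarter reporting on top of this basic comparison.
    """
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    diff = ImageChops.difference(baseline, current)
    return diff.getbbox() is None  # getbbox() is None when the diff is all black


# Hypothetical file paths from a baseline run and the current run
if not screenshots_match("baseline/checkout.png", "latest/checkout.png"):
    print("Visual difference detected on the checkout page")
```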

However, Black does see AI having potential within the test automation space in the future; he just warns businesses against investing in any technologies too soon. The areas where Black sees the most potential for AI are false positives and flaky tests.

False positives happen when a test returns a failed result, but it turns out the software is actually working correctly. A human being is able to recognize this when they look further into the result. Black sees AI eventually being used to apply that kind of reasoning and differentiate correct from incorrect behavior.
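Black does not point to a specific implementation, but one way to imagine such a system is a classifier trained on failures that humans have already triaged. The sketch below uses scikit-learn purely for illustration; the features and labels are invented, and a real system would need far more data and far richer signals.

```python
from sklearn.ensemble import RandomForestClassifier

# Invented features per past failure: [error type, failures in same run,
# environment alert seen, passed on retry] -- labelled by human triage.
X_history = [
    [0, 42, 1, 1],   # timeout during a mass failure, passed on retry
    [1, 1, 0, 0],    # isolated assertion error that failed again
]
y_history = [1, 0]   # 1 = triaged as false positive, 0 = real defect

model = RandomForestClassifier(random_state=0).fit(X_history, y_history)

new_failure = [[0, 38, 1, 1]]  # hypothetical incoming failure
label = model.predict(new_failure)[0]
print("likely false positive" if label == 1 else "investigate as a real defect")
```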

Flaky tests happen when a test fails once, but passes when it is run again. This unpredictable result is due to variation in the system architecture, the test architecture, the tool or the test automation itself, according to Black. He sees AI being used to handle validation issues like this by bringing a more sophisticated sense of what “fit for use” means to the testing effort.
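One simple signal such a system could use is rerun behavior: a test that passes on some runs and fails on others is flaky rather than clearly broken. The sketch below assumes a pytest suite and a hypothetical test node ID; it is only meant to show the kind of raw data a smarter tool would reason over.

```python
import subprocess
from collections import Counter


def rerun_flakiness_check(test_id, runs=5):
    """Re-run one test several times and count passes vs. failures."""
    results = Counter()
    for _ in range(runs):
        # pytest exits with code 0 when every selected test passes.
        outcome = subprocess.run(["pytest", test_id, "-q"]).returncode
        results["pass" if outcome == 0 else "fail"] += 1
    return results


# Hypothetical test node ID; mixed results suggest flakiness, not a defect.
print(rerun_flakiness_check("tests/test_checkout.py::test_apply_coupon"))
```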

Kevin Surace, CEO of Appvance.ai, also sees AI being applied to test automation, but at different levels of maturity. Surace said there are five levels of AI that can be applied to test automation:

  1. Scripting/coding
  2. “Codeless” capture/playback
  3. Machine learning: self-healing human-created scripts and monkey bots
  4. Machine learning: Near full automation with auto-generated smart scripts
  5. Machine learning full automation: auto-generated smart scripts with validation

When deciding on AI-driven testing, Surace explained, the most important qualification is to learn which level of AI a vendor is actually offering. According to Surace, many vendors have offerings at levels one and two, but very few can actually deliver level three and above.

In the future, Parasoft’s Lambert expects humans will just be looking at the results of test automation, with the machine doing the testing in an autonomous way. But for now, he explained, the real value of AI and machine learning lies in augmenting human work and spotting patterns and relationships in the data in order to guide the creation and execution of tests.

Still, Black warns businesses to approach AI for test automation with caution. “Organizations that want to try to use AI-based test automation at this point in time should be extremely careful and extremely conservative in how they pilot that and how they roll that out. They need to remember that the tools are going to evolve dramatically over the next decade, and making hard, fast and difficult-to-change large investments in automation may not be a wise thing in the long term,” he said.

Manual practices remain
Despite the efforts to automate as much as possible, some things will, for the time being, still require a human touch.

According to Rex Black, president of Rex Black Consulting Services (RBCS), a hardware and software testing and quality assurance consultancy, you can break testing down into two overlapping categories: verification, where a test makes sure the software works as specified; and validation, where a test makes sure the software is fit for use. For now, Black believes validation will remain manual because it is very hard to do in an automated fashion. For example, he explained, if you developed a video game, you can’t automate for things like: Is it fun? Is it engaging? Is it sticky? Do people want to come back and keep playing it?

“At this point, automation tools are really about verifying that the software works in some specified way. The test says what is supposed to happen and checks to see if it happens. There is always going to be some validation that will need to be done by people,” he said.

Lubos Parobek, vice president of product for the testing company Sauce Labs, explained that even if we get to a point in the long-term future where everything is automated, you will still always want a business stakeholder to take a final look and do a sanity check that everything works the way a human expects.

“Getting a complete view of customer experience isn’t just about validating user scenarios, doing click-counts and sophisticated ‘image analysis’ to make sure the look and feel is consistent — it’s about making sure the user is engaged and enjoying the experience. This inherently requires human intuition and cannot be fully automated,” added Mark Lambert, vice president of products for automated software testing company Parasoft.

Robotic process automation
Test automation vendors are flocking to this idea of robotic process automation (RPA). RPA is a business process automation approach used to cut costs, reduce errors and speed up processes, so what does this have to do with test automation?

According to Thomas Murphy, senior director analyst at Gartner, RPA and test automation technologies have a high degree of overlap. “Essentially both are designed to replicate a human user performing a sequence of steps.”

Anthony Edwards, CTO of the test automation company Eggplant, explained that on a technical level, test automation is about automating user journeys across an app and verifying that what is supposed to happen, happens. RPA aims to do just that. “So at a technical level they are actually the exact same thing, it’s simply the higher level intent and purpose that is different. But if you look at a script that automates a user journey there is no way to tell if it has been created for ‘testing’ or for ‘RPA’ just by looking at it,” said Edwards. “The difference for some people would be that testing focuses on a single application whereas RPA typically works across several systems integrated together.”
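Edwards’ point is easy to see in a sketch. The script below (Python with Selenium; the app URL and element IDs are hypothetical) drives one user journey; only the final assertion makes it a test, and without that line the very same code is doing RPA-style work.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


def submit_expense(driver, amount):
    """One user journey: fill in and submit an expense form (hypothetical app)."""
    driver.get("https://example.com/expenses/new")
    driver.find_element(By.ID, "amount").send_keys(str(amount))
    driver.find_element(By.ID, "submit").click()


driver = webdriver.Chrome()
try:
    submit_expense(driver, 42)

    # As a test, we check the outcome; as RPA, the journey itself was the goal
    # and this assertion would simply be omitted.
    assert "Expense recorded" in driver.page_source
finally:
    driver.quit()
```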

Over the next couple of years, Gartner’s Murphy predicts we will see more test automation vendors entering this space as a new way to capitalize on market opportunity. “By moving into the RPA market, they are expanding their footprint and audience of people they go after to help them,” he said.

This move is especially important as more businesses move toward open-source technologies for their testing solutions.

RBCS’ Black sees the test automation space moving toward open source because of cost. “It’s easier to get approval for a test automation project if there isn’t a significant up-front investment in a tool purchase, especially if the test automation project is seen as risky. Related to that aspect of risk is that so many open-source test automation tools have been successful over recent years, so the perceived risk of going with an open-source tool is lower than it used to be,” he said.

The post Pushing automated testing to its limits appeared first on SD Times.

Sauce Labs introduces headless browser testing https://sdtimes.com/test/sauce-labs-introduces-headless-browser-testing/ Wed, 14 Nov 2018 15:55:25 +0000

Sauce Labs is looking to speed up software development with the release of a new testing solution. The company announced Sauce Headless, a cloud-based headless testing solution.

According to Sauce Labs, headless browsers are becoming more popular as an option for testing web-based apps. “A headless browser is a type of software that can access webpages but does not show them to the user and can pipe the content of the webpages to another program. Unlike a normal browser, nothing will appear on the screen when you start up a headless browser, since the programs run at the backend,” the company wrote in a blog post.

Like normal browsers, headless browsers are able to parse and interpret webpages, the company explained. They provide real-browser context without memory and speed compromises.
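For readers unfamiliar with the mechanics, here is a minimal sketch of driving headless Chrome locally through Selenium’s Python bindings. It illustrates the general concept only; Sauce Headless itself runs in Sauce Labs’ cloud and is configured through the service’s own endpoints and capabilities, which are not shown here.

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # run Chrome without rendering a visible window
driver = webdriver.Chrome(options=options)

try:
    driver.get("https://example.com")        # hypothetical page under test
    assert "Example" in driver.title         # the same assertions work as in a normal run
    driver.save_screenshot("homepage.png")   # page content is still available to the program
finally:
    driver.quit()
```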

Sauce Headless is designed to give developers access to a lightweight cloud-based infrastructure, which will be useful when they run into high test volumes early in their development cycle, according to Sauce Labs. The solution is available for Chrome and Firefox browsers in a container-based infrastructure.

“The testing landscape has changed dramatically in the last year, thanks in large part to the movement to shift testing to earlier in the development pipeline,” said Lubos Parobek, VP of Product at Sauce Labs. “Catching bugs earlier reduces costs, improves quality and increases developer productivity. Sauce Headless is an exciting new offering that will allow developers to test on every commit.”

Sauce Headless is expected to be available to enterprise users as a public beta towards the beginning of next year.

The post Sauce Labs introduces headless browser testing appeared first on SD Times.

SD Times news digest: Oculus reveals hybrid apps, Sauce Labs’ iOS and Android native test automation frameworks, and data.world’s $12 million round of funding https://sdtimes.com/softwaredev/sd-times-news-digest-oculus-reveals-hybrid-apps-sauce-labs-ios-and-android-native-test-automation-frameworks-and-data-worlds-12-million-round-of-funding/ Thu, 27 Sep 2018 14:31:09 +0000

Oculus revealed it is trying to merge traditional PC apps and VR together with a new experimental technology. The company made the announcement at its Oculus Connect 5 (OC5) event in San Jose this week. According to the company, Hybrid Apps will help developers migrate traditional desktop apps to virtual reality and leverage Dash’s virtual desktop technology to make the transition between 2D and VR easier. “We’re excited to see what developers create using Hybrid Apps. It’s early days, and this is just a glimpse of where Dash—and Rift—can go,” the company wrote in a blog post.

Other developer features announced at the event included mobile support for Rift, Core 2.0 coming out of beta, updates to Oculus Home, the ability to create custom developer items, and improvements and refinements to Dash.

Sauce Labs now supports iOS and Android test automation frameworks
Sauce Labs announced it will now support XCUITest and Espresso to help developers simplify continuous testing for their mobile app development.

“As developers write functional tests, they tend to prefer testing frameworks that are embedded in their development tools so they can maximize their efficiency and velocity,” said Lubos Parobek, vice president of product at Sauce Labs. “Sauce Labs is excited to support iOS developers and Android developers with the addition of XCUITest and Espresso to our service.” With Sauce Labs Continuous Testing Cloud, developers can scale automation quickly and effectively, creating robust and reusable code to keep pace with CI/CD while improving the quality of mobile apps developed via test automation.

The newly added support will provide faster test execution, advanced parallel test execution and simplified CI/CD integration, according to the company.

Data.world announces third round of funding with Workday Ventures
Data teamwork and collaboration platform data.world announced a $12 million investment from Workday Ventures, the Associated Press and OurCrowd, bringing the company’s total amount of funding to $45.3 million since 2016.

The company is known for its collaborative data community, which provides users with the ability to find, understand and use data. The latest investment will go towards advancing its collaborative data resource.

“People who are deeply analyzing data are not in decision-making positions and decision makers seldom understand the complexities of data,” said Brett Hurt, co-founder and CEO of data.world. “There is an urgent demand for a platform that bridges these gaps within an enterprise. The data divide between people and companies is becoming a bigger and bigger issue in corporate performance and longevity. Next-gen companies like Airbnb and Warby Parker were built to be data-driven from the ground up, and traditional Global 2000 companies are significantly behind and very motivated to catch up. And today we have the community know-how, integrations, customer service, and enterprise capabilities to help them rapidly do so.”

GitHub Classroom Assistant
GitHub is releasing a new tool to help teachers set up courses on GitHub. GitHub Classroom is able to automatically create student repositories and enable teachers to track assignments from the dashboard. However, GitHub says this can become complicated when teachers have dozens or hundreds of students. Instead of having to clone each repository individually, the newly announced Classroom Assistant enables teachers to download all repositories for their course.

This is a cross-platform desktop app available for Windows, Mac and Linux.

The post SD Times news digest: Oculus reveals hybrid apps, Sauce Labs’ iOS and Android native test automation frameworks, and data.world’s $12 million round of funding appeared first on SD Times.
