Load Testing Archives - SD Times
https://sdtimes.com/tag/load-testing/

Mabl's load testing offering provides increased insight into app performance
https://sdtimes.com/test/mabls-load-testing-offering-provides-increased-insight-into-app-performance/ (Wed, 03 May 2023)

Low-code intelligent automation company mabl today announced its new load testing offering, geared toward allowing engineering teams to assess how their applications will perform under production load.

This capability integrates into mabl’s SaaS platform so that users can enhance the value of existing functional tests, move performance testing to an earlier phase of the development lifecycle, and cut down on infrastructure and operations costs.

“The primary goal is to help customers test application changes under production load before they release them so that they can detect any new bottlenecks or things that they would have experienced as the changes hit production before release,” said Dan Belcher, co-founder of mabl.

According to the company, these API load testing capabilities allow for the unification of functional and non-functional testing by utilizing functional API tests for performance and importing Postman Collections to cut down on the time it takes to create tests. 

Mabl also stated that this performance testing lowers the barrier to a sustainable and collaborative performance testing practice, even for teams that do not have dedicated performance testers or specific performance testing tools. 

“Anyone within the software team can use it, so it is not limited to just the software developers or just the performance experts,” Belcher said. “Because we’re low-code and already handling the functional testing, it makes it super easy for the teams to be able to define and execute performance tests on their own without requiring specialized skills.”

Furthermore, these tests can also be configured to run alongside functional tests on demand, on a schedule, or as a part of CI/CD pipelines. 
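
The announcement does not include API details, but the general pattern behind using a load test as a pipeline gate is straightforward: start a run, poll until it finishes, and fail the build if the performance targets are missed. The sketch below illustrates that pattern generically in TypeScript; the endpoint, payload, and status fields are hypothetical placeholders, not mabl's actual API.

```typescript
// Generic, hypothetical sketch of gating a CI stage on a load-test run.
// The endpoint, payload, and status fields are invented for illustration only.
async function gateOnLoadTest(apiKey: string): Promise<void> {
  const start = await fetch('https://loadtest.example.com/api/runs', {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ plan: 'checkout-api', environment: 'staging' }),
  });
  const { id } = await start.json();

  // Poll until the run finishes, then fail the build on a missed performance target.
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, 30_000));
    const res = await fetch(`https://loadtest.example.com/api/runs/${id}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const run = await res.json();
    if (run.state === 'completed') {
      if (!run.passed) throw new Error('Load test missed its performance targets');
      return;
    }
  }
}
```

A CI job would call gateOnLoadTest() after deploying to a staging environment and let the thrown error fail the stage.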

SD Times Open-Source Project of the Week: k6
https://sdtimes.com/k6/sd-times-open-source-project-of-the-week-k6/ (Fri, 10 Apr 2020)

k6 is an open-source load testing tool designed as a modern alternative to JMeter.

In addition, the team explained that k6 now serves as an alternative to Azure's load testing and Visual Studio load test, both of which were shut down at the end of March.

“Built primarily for load testing, k6 tests can with advantage be reused for performance monitoring of your APIs and microservices in production,” the team wrote in a post.

Users can build test cases to validate the performance of APIs and microservices to check whether systems can handle the expected volume of traffic and catch SLA/SLO-breaking performance regressions in CI before production. 

It was built primarily to scale seamlessly to the cloud, automate performance tests, offer reusable modules and JavaScript libraries that enable developers to build and maintain their test suites, and output test results to various backends and formats such as Grafana, DataDog, Kafka, and JSON.

Another key feature of k6 is Checks and Thresholds for goal-oriented, automation-friendly load testing.
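
As a rough sketch of how those pieces fit together, a minimal k6 script (written in JavaScript, k6's scripting language, and run with `k6 run script.js`) uses checks as per-request assertions and thresholds as pass/fail criteria for the whole run. The target URL, user count, and latency budget below are placeholders.

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

// 20 virtual users for one minute; the run fails if the 95th-percentile
// request duration exceeds 500 ms, which makes the script CI-friendly.
export const options = {
  vus: 20,
  duration: '1m',
  thresholds: {
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  const res = http.get('https://test-api.example.com/health'); // placeholder endpoint
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
  sleep(1); // pacing between iterations for each virtual user
}
```

Because a breached threshold makes the k6 process exit with a non-zero status, the same script can be dropped into a CI pipeline to catch the SLA/SLO regressions mentioned above.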

“Load testing should be done by the people who know the application best, the developers, and we believe that developer tools should be open source to allow for a community to form and drive the project forward through discussions and contributions. Hence why we built k6, the load testing tool we’ve always wanted ourselves!” k6 wrote in a post. “k6 provides great primitives for code modularization, performance thresholds, and automation. These features make it an excellent choice for performance monitoring. You could run tests with a small amount of load to continuously monitor the performance of your production environment.”

Tricentis acquires load testing provider Flood IO
https://sdtimes.com/agile/tricentis-acquires-load-testing-provider-flood-io/ (Thu, 27 Jul 2017)

Tricentis is bolstering its software testing expertise with the acquisition of Flood IO. Flood IO is an on-demand load testing solution provider designed to maximize test strategies, provide feedback loops, and discover issues in real time.

“Times have changed. Old performance testing approaches are too late, too heavy, and too slow for today’s lean, fast-paced delivery pipelines,” said Sandeep Johri, CEO of Tricentis. “Yet, releasing updates without insight into their performance impact is incredibly dangerous in today’s world—with competitors just a click away. Flood’s technology offers DevOps teams unparalleled flexibility for load testing early and continuously. This acquisition enables us to take our mission of ‘transforming testing for DevOps’ to the next level.”

According to Tricentis, Flood IO will enable the company and its users to embrace load and performance testing as well as the concept of “shift left” load testing. Combining load testing with Tricentis’ Continuous Testing platform will enable users to load test with Tricentis Tosca, create smoke tests, integrate load testing into their continuous integration workflows, and identify performance problems early.

As part of the acquisition, Flood IO will continue as a standalone service and continue its mission to provide continuous load testing in DevOps.

“At Flood, we set out to build insanely easy-to-use performance testing tools that help teams scale their apps to millions of users,” said Tim Koopmans, co-founder of Flood. “Joining forces with Tricentis will help us advance our vision for achieving Continuous Load Testing in a DevOps environment. We’re excited about the opportunity to accelerate the path to Continuous Testing—making it faster and easier to ensure that applications meet users’ rising expectations.”

Security testing should be on every DevOps team's Black Friday checklist
https://sdtimes.com/bigpanda/security-testing-every-devops-teams-black-friday-checklist/ (Fri, 04 Nov 2016)

The holidays are a time for shoppers to reap the benefit of online deals—and for hackers to leverage software vulnerabilities in retail systems and applications. In order to prepare for this year, IT monitoring experts suggested developers and operations teams incorporate adequate security testing as part of their holiday preparedness checklist.

The biggest mistake organizations make when preparing for holiday sales is decreasing the required amount of security testing of their web and mobile applications in order to meet tight release deadlines, said Matt Rose, global director of application security strategy at Checkmarx.

“Proper security testing is a must and should not be overshadowed by the need for enhanced features or functionality that may not even be utilized if an application is hacked or down due to a DDoS attack,” he said.

(Related: How DevOps security is lacking)

Organizations might look to cut testing processes because of their shorter release deadlines. Sometimes, security testing is cut because “cool” application features are seen as generating revenue, whereas security testing is not, said Rose. It’s a narrow-minded view, because if the application has security issues, the new revenue-generating feature may never be available to the user, he said.

Different organizations can assign different levels of responsibilities to developers during the holiday season, but all companies should review how developers would support operations during critical times like Black Friday and Cyber Monday, according to Michael Butt, senior product marketing manager at BigPanda. And, just like those in operations, developers need to understand how much stress peak shopping times will have on systems during the holiday season, he said.

Developers can also prepare for the holiday season by properly testing their applications for stability and security, because the “potential for unanticipated load or exposure to hackers is a real threat,” said Rose.

If developers fail to do this, retailers can expect worst-case scenarios like being blacklisted by users, he said, especially if they fear that a platform is unstable and their personal information is at risk.

“The holiday selling season is a very short time period, and any downtime or instability of their web or mobile applications could potentially have very damaging implications to a retailer’s bottom line,” said Rose. “If an application fails to meet the consumer’s expectations, they will simply take their business somewhere else.”

Mobile applications have changed the world of digital business and e-commerce, and now that organizations are going to a mobile-first world, all of that mobile traffic adds to the holiday load, said Butt.

The very nature of these mobile applications and how they are developed opens a new category of risk, said Rose. Many organizations outsource mobile application development to third parties, and if it is not known whether proper security testing was done on those applications, the chances of hackers attacking increase, according to him.

“The third parties are paid to develop these mobile apps based on a set of functionality criteria,” said Rose. “If security requirements are not properly defined by the outsourced development teams, they will probably not be included in the application, which is a huge risk to the organization contracting the third party.”

TestPlant survey finds increase in automation, Software AG launches IoT kit, and Samsung discontinues the Galaxy Note 7—SD Times news digest: Oct. 11, 2016
https://sdtimes.com/analytics/testplant-survey-finds-increase-automation-software-ag-launches-iot-kit-samsung-discontinues-galaxy-note-7-sd-times-news-digest-oct-11-2016/ (Tue, 11 Oct 2016)

TestPlant has announced the results of its 2016 User Survey, finding that there is a clear increase in the use of automation and application-level load testing.

TestPlant surveyed almost 200 eggPlant users, and according to those surveyed, test automation is increasing dramatically. The survey found that more than 60% of respondents said they have achieved more than 25% automation of their functional testing, with a third having achieved more than 50%.

This compares to last year’s Accenture/PAC’s Digital Testing in Europe report, which found 8% of companies achieving more than 50% automation. The survey also found that interest in application-level load testing is increasing (56%), which compares to the interest in load testing for the web (70%).

“Levels of test automation have remained static for many years, with most organizations automating less than 25% of their functional testing,” said Antony Edwards, CTO of TestPlant. “But this year we have seen a clear shift toward increasing test automation among our customer base as it delivers clear value for organizations.”

All of the results from the survey can be found here.

Software AG creates new IoT Analytics Kit
Software AG has released a new Internet of Things Analytics Kit, available for free as open-source software under the Apache License 2.0. The kit can also run on Raspberry Pi, and a new version of the company's Apama Community Edition is available as well.

The IoT Analytics Kit for Apama Community Edition comes with event-based, analytical microservices used to develop IoT applications. Some of the analytics include threshold breach, missing data, creating alerts, and the ability to calculate the normal range of numeric values.

The new version of Apama Community Edition allows developers to build their apps on top of the edition and then distribute their applications free of charge, said the company. Developers can also run Apama Community Edition on Raspberry Pi, and build streaming analytics applications, for example.

Developers can review the full list of features here.

Samsung discontinues the Galaxy Note 7
Samsung decided to take the blow and permanently discontinue production of the Galaxy Note 7 due to safety concerns. The tech giant’s decision comes one day after it put a global stop on sales and exchanges of the defective smartphone.

According to Android Authority, Samsung confirmed the official discontinuation of the Galaxy Note 7 with South Korean regulators today. Analysts cited by Reuters estimated that Samsung lost US$17 billion in revenue, based on the 19 million Galaxy Note 7 units it had expected to sell.

With the discontinuation of the device, Samsung will lose a full half-year of flagship smartphone sales on top of the costs associated with the official recall, wrote Android Authority.

FreeBSD 11.0 release now available
The FreeBSD Release Engineering team announced the availability of FreeBSD 11.0, which is the first release of the stable 11 branch.

The engineering team wrote that users should consult the release notes before installing FreeBSD, as they were updated with information discovered late in the release cycle. Some of the key highlights of this release include OpenSSH DSA key generation being disabled by default, OpenSSH being updated to 7.2p2, and broader wireless network driver support.

A complete list of new features and known problems can be located here.

SD Times Blog: Touchdowns in Tech
https://sdtimes.com/advertisements/sd-times-blog-touchdowns-in-tech/ (Wed, 03 Feb 2016)

The countdown to Super Bowl 50 has begun. If you are like other sports-loving Americans, you’re probably ironing your jersey, ordering a platter of wings, and inviting all your friends over to shotgun some six packs.

If you’re like me, you’re flipping channels to find Animal Planet and waiting for the Puppy Bowl to begin (“Kitty Halftime” is my personal favorite).

If you’re an advertiser, tester or developer, you’re not celebrating the big game until it’s over. There is obvious entertainment in this big game, but it’s much more complex behind the scenes. Everyone is going to be tuned in on their phones, downloading deals, checking websites, tweeting, sharing, snapping, surfing, watching—so many things can go wrong. Advertisers are playing defense: they are rooting for viewers to turn to their websites, and they have to make sure that their landing pages can handle the load. For them, it’s the tech behind the game that leads to a victory.

(Related: Continuous Delivery: Getting code where it needs to go)

David Jones, a field technical evangelist at Dynatrace, has some pointers on how you can be prepared from the start to the end of the Super Bowl. It’s critical to streamline your websites and make sure all ends of your program are suited up and ready to tackle.

Before the game…
First, fill up a plate of nachos and wings, and crack open your first beer. Game on!

The first step in having a successful Super Bowl is to make sure you have a plan in place. This seems like a no-brainer, but Jones said that most organizations only have a disaster plan in place. That is to say, they are expecting to fail. He said organizations need to communicate what the overall business driver of the Super Bowl is going to be. Are you going to drive traffic to a specific website? Do you want people to watch a video? Is there a coupon or product you are trying to promote? It’s not just an IT-related process; it has to include everyone—development, business owners, the QA team, operations, marketing and more. Everyone has to be working off the same plan—and not one that just sets up your team to drop the ball.

Take a look at what tools the organization is using. Is everyone speaking a different language? Jones said that using a variety of tools creates a war-room scenario, and your tools should not be siloed in the organization. They are only as good as your best players, and those MVPs shouldn’t be forgotten. What happens if your rock stars leave? You could be spending hours or weeks uncovering an issue. The Panthers won’t give up just because Cam Newton tweaked his knee, so you must be able and prepared to communicate these tremendous amounts of data across all teams.

Jones recommends load-testing before the game. By now, most companies already have the code for the sites these ads will drive traffic to, but right now, he said, organizations should be heavily invested in external load-testing. Generate load from outside of the organization, test from remote locations across the globe (after all, the event is global), and do this up until the release. It’s not enough to test your internal QA environment; you need to also test your production environment, and above all else, test everything.

“If you just hope they are going to work, that, again, is a bad plan,” said Jones. “Hope is a bad plan.”

During the game…
Take a quick break and refill on the nachos. Make sure not to spill melty delicious cheese on the servers.

During the game, you are going to want to be proactively monitoring the applications throughout the entire delivery stack. Organizations need to make sure they have the right digital performance, making sure their tools are in place and they are monitoring every component that could potentially impact an end user, said Jones. That means monitoring everything like Web servers, app servers, database servers, and anything relating to the underlying hosts.

Don’t leave out testing third-party services, Jones warns. “They will sneak up and get you.” Make sure all of their production capabilities are up and running during the game.

At halftime…
No more nachos?!

All eyes are going to be glued to the set during halftime. As we know, some of the strangest performances happen during this small break. People are going to be tap-tapping away on their phones or tablets, so the amount of mobile activity is definitely going to increase during this time. If something happens during this time, companies need to react fast. Monitoring your data in real time should be of primary importance, according to Jones. If you are not monitoring in real time, you will not be able to “pivot and react” if something occurs. If you have a site that you are promoting and directing traffic to, you need to make sure you are using the right responsive design. Make sure you are delivering a site to mobile devices that renders based on what device is being used. Jones called this a “vital thing” to do.

As the game winds down…
Depending on who you are rooting for, you might be crying and/or cheering. Either way, crack open another beer.

You are only as successful as your company’s initiatives. For some, Jones said, it could mean keeping the lights on at the company. If they get a whole lot of traffic and their measures were successful, it could mean the company gets to add one more year of staying open. A lot of traffic could be a measure of success, such as driving sales of a special product specific to the Super Bowl. And, if your site doesn’t crash, you can celebrate by dumping buckets of Gatorade on all the coders. Congrats guys!

It’s a success based on what your company’s goals are, and if you follow this plan, you too could win the day.

For more insight on Super Bowl Sunday, Dynatrace will be monitoring all the major advertisers here.

Guest View: Testing conveyer: Why do you need to 'polish' software?
https://sdtimes.com/compatibility/guest-view-testing-conveyer-why-do-you-need-to-polish-software/ (Tue, 23 Jun 2015)

No matter how ingenious a new application is, it is still required to go through testers' hands. And despite the important role testers play, they remain in the shadows. When developers become aware of the variety of tests their software must endure, it often forces them to rethink the way they develop their software—in a good way.

It’s clear that the main task of the testing team is to help developers create quality software. The term “quality” is not that abstract when it relates to testing. It includes certain characteristics, such as functionality, practicality, efficiency, reliability and mobility. Those characteristics also consist of a number of technical requirements. When receiving new software, testers do a thorough check of every characteristic. Depending on the characteristics involved, there are different types of testing in quality assurance. My intent in this article is to help provide developers with a “helicopter view” of the whole process of providing high-quality software from start to finish. With that in mind, let’s look at each of the testing types in detail.

What does it do?
It’s really no wonder that functional testing is the most popular type. Like a physician’s overview of a patient, this testing helps to disclose basic vulnerabilities. If a tester/physician doesn’t know how to “cure” the “patient,” he directs him/her to other testers/specialists. In reality, functional testing is a type of black-box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output; internal program structure is rarely considered. Thus, functional testing usually describes what the system does. In other words, this is a way of checking software to ensure that it has all the required functionality that’s specified within its functional requirements.

(Related: How to test in an agile world)

Usually, functional testing checks if the software accomplishes the tasks for which it was designed. The bugs found during this test help to determine the software’s vulnerabilities. Sometimes, really serious defects can be detected during the functional testing stage. It’s like buying a calculator and finding it doesn’t have the “+” button. You cannot release complex and expensive software if it’s not working properly.

How much will it sustain?
The second most popular type of testing is performance testing, which determines the maximum number of users allowed to work with the system at the same time without causing harm. A long time ago, civil engineers were using the same approach to check the strength of their structures. For instance, after finishing a bridge, they would fully load it with huge construction trucks to see if it could sustain that weight.

In technical terms, performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. In other words, we talk about the work speed and the reaction speed that might be altered by growing performance. The load is determined by the number of requests in the unit of time. The more frequent the requests, the lower the performance. In this case, work speed may decrease.

When the software is still under development, it is necessary to estimate the largest number of users who can access the system at the same time without any negative impact on speed. This is the purpose of stability testing. If the speed is low, users can switch to another provider of the service. It’s like checking out at the supermarket: if you observe that the cashiers work too slowly, you’ll probably choose another supermarket next time.

However, it doesn’t make sense to overwork and develop an application that would serve users you can never attract since the application is likely targeted at a specific group. To continue the example with the supermarket, you don’t need to install 10 checkout counters in a shop based in a small town.

To check the system behavior in extreme conditions, two more tests are carried out: load testing and stress testing. Load testing checks the system in terms of constant loading. It also determines the moment of the highest performance. So, load testing is the process of putting demand on a system or device and measuring its response. Load testing is performed to determine a system’s behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and determine which element is causing degradation.
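
To make that concrete, the sketch below (TypeScript on Node 18+, so fetch is built in; the target URL and concurrency are placeholders) fires a batch of concurrent requests and reports a latency percentile, which is the kind of measurement load testing is after. Real tools add ramp-up profiles, pacing, and much richer reporting.

```typescript
// Minimal load-generation sketch: fire a batch of concurrent requests and
// report the 95th-percentile latency. The target and concurrency are placeholders.
const TARGET = 'https://example.com/api/products';
const CONCURRENCY = 50;

async function timedRequest(): Promise<number> {
  const start = Date.now();
  const res = await fetch(TARGET);
  await res.arrayBuffer(); // drain the body so transfer time is included
  return Date.now() - start;
}

async function main() {
  const latencies = await Promise.all(
    Array.from({ length: CONCURRENCY }, () => timedRequest()),
  );
  latencies.sort((a, b) => a - b);
  const p95 = latencies[Math.floor(latencies.length * 0.95)];
  console.log(`${latencies.length} requests, p95 = ${p95} ms, max = ${latencies[latencies.length - 1]} ms`);
}

main().catch(console.error);
```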

Stress testing checks if the whole system is able to work appropriately while being overloaded. It also tests the system’s ability to recover after working under stress. Stress testing puts greater emphasis on robustness, availability and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. The goals of such tests are to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).

Failover and recovery testing checks the product’s resistance to possible crashes as well as its ability to recover. Failover testing determines whether a system is able to allocate extra resources, such as additional CPU or servers, during critical failures or at the point the system reaches a predetermined performance threshold. Recovery testing is basically done in order to check how fast the application can recover from any type of crash or hardware failure. Generally speaking, it serves as a kind of software immunity.

These crashes could be caused by bugs, incorrect hardware or a poor connection. This testing checks the systems responsible for maintaining the safety and integrity of the product. Failover and recovery testing is important for 24×7 systems, where every minute of downtime or data loss could cause a huge financial loss, penalties, loss of clients and/or a ruined reputation. This is why the testing team emulates different conditions of possible failures and then evaluates the defense reaction. These checks determine whether the required defense and recovery levels are being reached.

Don’t lose your way on the website
A user should intuitively understand the way the website or application works. Usability is what testers check when looking at the application through users’ eyes. They know that if the application is usable and intuitive, it will gain popularity faster and remain competitive longer. This is why applications should not focus only on being functional, powerful and stable. If, after loading the main page, the user isn’t able to easily manage the application or the website, all other characteristics become moot.

Technically, usability testing refers to evaluating a product or service by testing it with representative users. Typically, during a test, participants will try to complete tasks while observers watch, listen and take notes. The goal is to identify any usability problems, collect qualitative and quantitative data, and determine the participants’ satisfaction with the product.

To keep users on the website and avoid uninstallation of the application, testers pay a lot of attention to usability. Placing an order on the website should be possible in three clicks, while filling out numerous unnecessary forms should be avoided. If the application is simple and serviceable, it will enable a user to enjoy working with it and will help make him a returning user.

Testing user interfaces is quite similar to the previous type of testing. During this stage of testing, testers evaluate the product design and visual perception. It involves checking screen validations of all links, data integrity, object states, date fields, and numeric field formats. Testers also check the application’s colors, gamma, and the sizes and colors of elements such as buttons. All of these factors have a real impact on user perception.

For instance, if you have white text on a black background, users’ eyes will tire fast. Or, if you have a button of a small size, the user might not find it at all. To the contrary, if the website has a light and attractive design, it increases the comfort of working with it and raises the effectiveness of interactivity of the user and the system.

Lost in translation
If the application is going to be released in different regions, localization and internationalization must be considered. During localization, testers check if the elements of the interface are translated properly. Internationalization is the more difficult process of adapting the application to different languages and regions without changing the program’s code. If regional settings are not taken into consideration, the application may very possibly prove unusable for parts of the targeted audience.

Oftentimes, to test localization and internationalization, specialists acquainted with the region(s) in question are engaged. Even better is if there are testers who were born in the region who can contribute. Knowing nothing or just a little of the culture, or doing something like putting an inappropriate color on the website, can easily offend users. For example, while Western people treat the color green as a symbol of nature, the first association Muslims have when seeing anything green is Islam.

Last but not least, security
If the application transfers information, it is crucial to test its security. Applications dealing with online payments and transfers of confidential information, for example, always go through security tests. The more valuable the information, the higher the risk of the application being hacked. To prevent system break-ins, testers perform the same actions hackers would to see how the system responds to nonstandard requests and whether it meets security requirements.

Most types of security testing involve complex steps and out-of-the-box thinking, but sometimes it is simple tests that help expose the most severe security risks.

Be ready for the challenge
Since there is a great variety of mobile devices on the market, it is becoming really important to ensure the compatibility of different systems, services and applications. If the system is not compatible, it can not only limit capacity but also damage the company’s reputation. To avoid those problems, compatibility tests should be run. This type of testing assures the website or application will work at the same speed, with no errors, in any popular browser. If just one browser fails to launch the site, a great part of the potential audience could be lost.

Many products (mostly complex applications) also require installation. These programs should go through installation testing, which checks whether the installation goes correctly and how easy uninstallation would be.

Configuration testing ensures the software will work on systems with different configurations. Also, it determines the optimal configuration the hardware should have to perform at the highest capacity.

Of course, not all these types of testing are necessary for all projects. The kinds of tests are determined not only by the project requirements, the project’s specifics, and the development stage, but also by such factors as the budget size and the time allotted for testing.

The more informed developers are about how their software will be tested, the more carefully they can approach the way in which they work with software. If they view their products through the eyes of both a developer and a tester, it has the potential to greatly simplify tasks for both sides and increase the quality of the software.

The Super Bowl XLIX 'Performance Bowl'
https://sdtimes.com/ads/super-bowl-xlix-performance-bowl/ (Fri, 30 Jan 2015)

For the advertisers paying US$4.5 million a pop for a 30-second Super Bowl spot, a year’s marketing budget could be wasted if the company’s website can’t handle the influx in traffic.

Application performance management software provider Dynatrace is monitoring the Web and mobile site performance of a litany of major brands that purchased ads well in advance of the big game. The company is keeping track of load times, site performance degradations and dreaded website outages to determine who stepped up during the ‘Performance Bowl’—and who buckled under the pressure.

Ryan Bateman, director of brand and social marketing at Dynatrace, said people often forget about the Web and mobile developers and testers behind these sites, who are the linemen in the trenches of the Performance Bowl.

“Developers are the ones building these landing pages the ads are sending folks to,” he said. “Marketers sometimes don’t consider what happens behind the ad itself.”

NBC only finished selling the last of its Super Bowl ad slots this past Wednesday, leaving developers and testers behind the last-minute sites with only days to prepare. Under that kind of time crunch, Bateman identified some “classic signs” of sites gearing up to win or lose the Performance Bowl.

“Some of these guys who just recently bought ad inventory simply don’t have as much time as the advertisers who purchased a year ago to plan out the digital properties that supplement their ads,” he said. “There’s always a handful of sites that don’t quite balance the social interaction with the performance effects very well. We’ve seen, even in past years, some of the sites that put a heavy, heavy focus on interactive experience. Something like live tweeting during their ad placement or submitting photos. There’s a fine line there to walk with interactivity, the use of social, and these heavy, performance-sucking plug-ins and third-party services.”

How winning sites handle the Super Bowl pressure
In pursuit of capturing the interest, loyalty and ultimately revenue of consumers swarming their sites during peak Super Bowl ad windows, the Web developers and testers behind the brand must walk a fine line. Dynatrace believes it takes a combination of a robust user experience to sustain engagement, and a site that not only withstands the load but also maintains the same level of performance.

Bateman identified one advertiser whose website contained upwards of 200 connection streams to outside services, explaining that when it comes to performance, it’s best to keep it simple.

“It’s really hard to preserve a quality user experience,” he said. “Three seconds or less on average is how long a user will tolerate [site loading], and when you’ve got that many connections in place, it’s really hard to keep it under three seconds. If you’re doing a good job, you’re probably still struggling to keep it under six or seven seconds.”

Aside from not getting too fancy, maintaining a high-performing site during Super Bowl traffic is about preparation and testing. Whether it’s maintaining physical servers or scaling up cloud infrastructure and virtual capacity to meet increased demand, no site makes it through the performance wringer without a strategy.

“When you’re putting $4.5 million into 30 seconds, you don’t want to just throw a bunch of cash at hardware without really understanding what you’re going to need,” Bateman said.

Four pillars of a winning Performance Bowl site
Bateman laid out four points both marketers and developers should focus on to keep their site running smoothly during crunch time:

  1. Slim your pages down: “It’s about delivering less content,” Bateman said. “Everyone advertising in the Super Bowl has some sort of digital supplement. There’s some sort of call to action, either to the general homepage or an individual landing page. The page you’re sending users to needs to be very, very concise as far as its content delivered and the size of that content [in megabytes]. Maybe it’s not prudent to deliver your entire feature-rich site during that three- to five-hour window.”
  2. Optimize third-party services: “Everyone wants higher engagement rates; they want people to spend more time on these landing pages, either purchasing something or absorbing this branded information,” Bateman said. “But there’s that balance that needs to happen between the number of third-party services like a Facebook commenting system or a Twitter hashtag stream, or a series of Instagram feeds. Every single time you bring in a new service like that, it might be great for your engagement numbers, but it’s not great for your load time. Because it’s third-party services, you might not have that same level of control. Engagement means nothing if it takes eight seconds to load that page. That user is gone.”
  3. Testing: “Testing is a big piece of it,” Bateman said. “The nice part about having cloud vendors available and the ability to spin up hardware is it’s a lot easier than it’s ever been. Load testing is important, [as is] understanding what sort of performance impact third-party services have on the pages you’re delivering. I’m definitely not saying don’t use plug-ins, but you don’t need them all on the same page at once.”
  4. The trick with tracking: “Tracking is a bit of a double-edged sword,” Bateman said. “Let’s say a site has Google Analytics installed, some sort of marketing automation software installed to track supplemental user information. You absolutely have to do that stuff, but those are third-party services too, and they’re going to have an effect on potential load time. It’s about balance. You need to have a use in mind for what you’re tracking. If you’re needlessly tracking every metric possible about a user’s visit, it can be excessive.”

Be sure to check back on Monday for the biggest winners and losers of this year’s ‘Performance Bowl.’

Microsoft releases cloud-based load-testing REST APIs
https://sdtimes.com/automation/microsoft-releases-cloud-based-load-testing-rest-apis/ (Mon, 03 Nov 2014)

Microsoft has released Cloud-based Load Testing REST APIs for cloud-based load testing with Visual Studio Online.

Microsoft program manager Jimson Chalissery announced the REST APIs in a blog post explaining how the release is driven by customer requests for expanded automated load-testing capabilities. With the cloud-based REST APIs for Visual Studio Online, Microsoft seeks to catch up with trends in automation.

“The Cloud-based Load Testing REST APIs give you the ability to execute Load tests from the Cloud in an automated manner, that can be integrated either as part of your Continuous Integration/Deployment pipeline or Test Automation,” he wrote.

The APIs enable a variety of new load-testing features in Visual Studio Online, including the ability to:
• Start or stop a load test run
• Get load test results for application performance and throughput
• Get service messages during a run
• Get exceptions (if any) from the service during a run
• Get counter instances and samples for a load test run
• Get application counters for apps configured with your load test
• Get a list of all past load test runs, filtered by requester, date or status.
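
As an illustration of how such a REST API is typically consumed from automation, the sketch below starts a run with a personal access token sent as basic authentication (the usual Visual Studio Online pattern). The host, route, payload fields and API version shown here are assumptions made for illustration; the documented contract is in the API documentation linked below.

```typescript
// Illustrative only: the host, route, payload fields, and API version are
// assumptions, not a transcription of the documented contract.
const account = 'my-account'; // hypothetical Visual Studio Online account name
const base = `https://${account}.vsclt.visualstudio.com/_apis/clt`;

async function startLoadTestRun(pat: string) {
  const res = await fetch(`${base}/testruns?api-version=1.0`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Personal access token as basic auth with an empty user name.
      Authorization: 'Basic ' + Buffer.from(':' + pat).toString('base64'),
    },
    body: JSON.stringify({ name: 'homepage-load', runDuration: 300 }), // hypothetical fields
  });
  if (!res.ok) throw new Error(`Failed to start load test run: ${res.status}`);
  return res.json(); // expected to include the new run's id and state
}
```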

More information on the Cloud-based Load Testing APIs can be found in the API documentation.

New LoadComplete by SmartBear dramatically reduces time to acquire data for blazing fast load testing performance
https://sdtimes.com/load-testing/new-loadcomplete-smartbear-dramatically-reduces-time-acquire-data-blazing-fast-load-testing-performance/ (Wed, 29 Oct 2014)

BEVERLY, Mass. – SmartBear Software, the choice of more than two million software professionals for building and delivering the world’s best applications, announced a new and rebranded version of LoadUIWeb – LoadComplete 3.0 – to help organizations drastically reduce the time required to test and optimize application performance. Customers deploying the new version accelerate load testing cycles and significantly decrease performance testing time.

Load testing is often left to the last minute by many organizations, since new revenue-enhancing features almost always take precedence over basic performance testing. By leaving performance testing to the end, companies erroneously believe that simple, quick and minor tweaks are all that are needed to meet application performance requirements. Teams are frequently left with only a short amount of time before the deployment of an application to identify, uncover and resolve serious performance issues. While these challenges affect all applications, they are especially acute with mobile applications. A recent Forrester Research report states, “Performance is not a priority for today’s mobile development teams….Development teams typically want to focus on mobile app performance, but all too often business sponsors prioritize new features over sustained performance engineering.”

LoadComplete 3.0 helps test business-critical rich Internet and mobile applications in shortened performance testing cycles, providing advanced analysis and reporting, which allows testers to compare results of different tests side by side. Analyzing server and browser side metrics across different load tests becomes convenient as there is no need to perform manual comparison when a change is introduced to existing performance tests or there is a need to compare one version of the application against another. Additionally, LoadComplete 3.0 further reduces the time required for analyzing and debugging test scripts. Through a single click, testers can identify request dependencies among different pages of an application, dramatically minimizing the need for time consuming manual analysis that would otherwise be required to develop and debug load scripts.

“Inter-request correlation is the single-most complex, time-consuming and frustrating effort in all performance testing practices, stopping the brightest engineers from conducting meaningful performance tests,” said Mark Tomlinson of PerfBytes. “The outcome is inadequate attention and validation of performance. Reducing the time and complexity associated with testing increases the adoption of testing by more engineers, and earlier in the lifecycle, provides more accurate insights into application performance. If you’re using any other load testing tool, you’re probably wasting tons of time doing correlation manually and publishing results that are incorrect. LoadComplete 3.0 reduces the pain associated with application testing, making it easier to find, fix and move forward, spending less time in scripting puzzles and more time running tests and driving results back to your team.”
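
For readers unfamiliar with the term, inter-request correlation means capturing a dynamic value (a session token, an order ID) from one response and substituting it into later requests, rather than replaying the value that was recorded. A hand-rolled TypeScript sketch of the idea, with hypothetical URLs and field names, looks like this:

```typescript
// Hypothetical sketch of inter-request correlation: the session token returned
// by the login call is captured and reused instead of a hard-coded recorded value.
async function loginThenOrder(): Promise<number> {
  const loginRes = await fetch('https://shop.example.com/api/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ user: 'demo', password: 'demo' }),
  });
  const { sessionToken } = await loginRes.json(); // dynamic value that must be correlated

  const orderRes = await fetch('https://shop.example.com/api/orders', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${sessionToken}`, // substituted, not replayed
    },
    body: JSON.stringify({ sku: 'ABC-123', qty: 1 }),
  });
  return orderRes.status;
}
```

Tools that detect these request dependencies automatically are doing exactly this kind of substitution behind the scenes.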

“With LoadComplete 3.0, SmartBear has focused on reducing load testing time, enabling testers to significantly increase their efficiency and decrease the time to value of their testing within shorter schedules,” said Rich Caplow, SVP Product Commercialization at SmartBear. “A side by side comparison of different load tests along with an ability to identify request dependencies between pages not only reduces the complexity of running a load test but also ensures these tests are completed in a timely manner.”

Other enhancements to LoadComplete include an increase in the number of virtual users a load generation agent can support. LoadComplete 3.0 supports 1,000 virtual users per agent, up from a limit of 250 virtual users per agent previously. As a result, organizations spend less on hardware in order to run a particular load test.

For more information on LoadComplete, visit: http://smartbear.com/products/qa-tools/load-testing/.
