Wayne Ariola, Author at SD Times

The Open Testing Platform (January 13, 2021)

This is a rather unique time in the evolution of software testing.  Teams worldwide are facing new challenges associated with working from home. Digital transformation initiatives are placing unprecedented pressure on innovation.  Speed is the new currency for software development and testing. The penalty for software failure is at an all-time high as news of outages and end-user frustration go viral on social media. Open-source point tools are good at steering interfaces but are not a complete solution for test automation.

Meanwhile, testers are being asked to do more while reducing costs.

Now is the time to re-think the software testing life cycle with an eye towards more comprehensive automation. Testing organizations need a platform that enables incremental process improvement, and data curated for the purpose of optimizing software testing must be at the center of this solution. Organizations that leverage multiple open-source or proprietary testing tools must consider an Open Testing Platform to keep pace with Agile and enterprise DevOps initiatives.   

What is an Open Testing Platform?
An Open Testing Platform (OTP) is a collaboration hub that helps testers keep pace with change. It transforms observations into action – enabling organizations to inform testers about critical environment and system changes, act upon observations to zero in on ‘what’ precisely needs to be tested, and automate the acquisition of the test data required for effective test coverage.

The most important feature of an Open Testing Platform is that it taps essential information across the application development and delivery ecosystem to effectively test software. Beyond accessing an API, an OTP leverages an organization’s existing infrastructure tools without causing disruption—unlocking valuable data across the infrastructure. An OTP allows any tester (technical or non-technical) to access data, correlate observations and automate action. 
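
As a deliberately simplified sketch of what tapping existing infrastructure through APIs can look like, the script below polls a version-control API for recent commits and maps changed paths to the test areas they affect. The repository URL, the directory-to-suite mapping, and the invocation are all hypothetical; a real OTP would do this continuously, across many more data sources:

```python
import requests

# Hypothetical example: query a version-control API (GitHub-style here)
# for recent commits and map changed file paths to impacted test areas.
REPO_API = "https://api.github.com/repos/example-org/example-app/commits"

# Assumed mapping from source directories to test suites/tags.
AREA_MAP = {
    "src/billing/": "billing-regression",
    "src/auth/": "login-and-session",
    "api/": "service-contract-tests",
}

def impacted_test_areas(since_iso):
    """Return the test areas touched by commits made since the given time."""
    commits = requests.get(REPO_API, params={"since": since_iso}).json()
    areas = set()
    for commit in commits:
        detail = requests.get(commit["url"]).json()  # per-commit file list
        for changed in detail.get("files", []):
            for prefix, area in AREA_MAP.items():
                if changed["filename"].startswith(prefix):
                    areas.add(area)
    return areas

# Example (would require a real, reachable repository):
# print(impacted_test_areas("2021-01-01T00:00:00Z"))
```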

Model in the middle
At the core of an Open Testing Platform is a model. The model is an abstracted representation of the transactions that are strategic to the business. The model can represent new user stories that are in-flight, system transactions that are critical for business continuity, and flows that are pivotal for the end-user experience.

In an OTP, the model is also the centerpiece for collaboration. All tasks and data observations either optimize the value of the model or ensure that the tests generated from the model can execute without interruption.  Since an OTP is focused on the software testing life cycle, we can take advantage of known usage patterns and create workflows to accelerate testing. For example, with a stable model at the core of the testing activity:

  •   The impact of change is visualized and shared across teams
  •   The demand for test data is established by the model and reused for team members
  •   The validation data sets are fit to the logic identified by the model
  •   The prioritization of test runs can dynamically fit the stage of the process for each team, optimizing for vectors such as speed, change, business-risk, maintenance, etc.

Models allow teams to identify critical change impacts quickly and visually. And since models express test logic abstracted from independent applications or services, they also provide context to help testers collaborate across team boundaries.
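
As an illustration of the principle (the banking steps and transitions are invented for this example), a model can be as simple as a directed graph of business-level steps from which complete test paths are generated. When a step or transition changes, affected paths are regenerated rather than hand-edited:

```python
# Hypothetical model: each node is a business-level step; edges are the
# allowed transitions. Test cases are generated as paths through the model,
# so a change to one step is immediately visible in every affected path.
MODEL = {
    "open_account": ["verify_identity"],
    "verify_identity": ["fund_account", "reject_application"],
    "fund_account": ["confirm"],
    "reject_application": [],
    "confirm": [],
}

def generate_paths(node, path=()):
    """Enumerate every complete path (test case) from the given step."""
    path = path + (node,)
    successors = MODEL[node]
    if not successors:          # terminal step: one complete test case
        return [path]
    cases = []
    for succ in successors:
        cases.extend(generate_paths(succ, path))
    return cases

for case in generate_paths("open_account"):
    print(" -> ".join(case))
# open_account -> verify_identity -> fund_account -> confirm
# open_account -> verify_identity -> reject_application
```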

Data curated for testing software
Automation must be driven by data. An infrastructure that can access real-time observations as well as reference a historical baseline is required to understand the impact of change. Accessing data within the software testing life cycle does not have to be intrusive or depend on a complex array of proprietary agents deployed across an environment. In an overwhelming majority of use cases, accessing data via an API provides enough depth and detail to achieve significant productivity gains.  Furthermore, accessing data via an API from the current monitoring or management infrastructure systems eliminates the need for additional scripts or code that require maintenance and interfere with overall system performance.

 Many of the data points required to optimize the process of testing exist, but they are scattered across an array of monitoring and infrastructure management tools such as Application Performance Monitoring (APM), Version Control, Agile Requirements Management, Test Management, Web Analytics, Defect Management, API Management, etc.

An Open Testing Platform curates data for software testing by applying known patterns and machine learning to expose change. This new learning system turns observations into action to improve the effectiveness of testing and accelerate release cycles. 
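
A toy sketch of that observation-to-action loop, assuming response-time observations pulled from an APM tool: compare current measurements against a historical baseline and flag the endpoints whose behavior has drifted enough to warrant retesting (the endpoints, numbers, and threshold are all invented):

```python
# Hypothetical baseline of median response times (ms) per endpoint,
# built from historical APM observations.
BASELINE_MS = {"/login": 120, "/checkout": 340, "/search": 90}

def endpoints_to_retest(current_ms, threshold=0.25):
    """Flag endpoints whose current behavior drifts more than the
    threshold (as a fraction of baseline) from historical norms."""
    flagged = []
    for endpoint, baseline in BASELINE_MS.items():
        observed = current_ms.get(endpoint, baseline)
        if abs(observed - baseline) / baseline > threshold:
            flagged.append(endpoint)
    return flagged

print(endpoints_to_retest({"/login": 118, "/checkout": 520, "/search": 91}))
# -> ['/checkout']
```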

Why is an Open Testing Platform required today?
Despite industry leaders’ attempts to position software testing as a value-added activity, the fact is that an overwhelming majority of organizations identify testing as a cost center. The software testing life cycle is a rich target for automation, since any costs eliminated from testing can be redirected to more innovative initiatives.

If you look at industry trends in automation for software testing, automation of test case development hovers around 30%. If you assess the level of automation across all facets of the software testing life cycle, automation averages about 20%. This low average highlights that testing still requires a high degree of manual intervention, which slows the software testing process and therefore delays software release cycles.

But why have automation rates remained so low for software testing when initiatives like DevOps have focused on accelerating the release cycle? There are four core issues that have impacted automation rates:

  •   Years of outsourcing depleted internal testing skills
  •   Testers had limited access to critical information
  •   Test tools created siloes
  •   Environment changes hampered automation

Outsourcing depleted internal testing skills
The general concept here is that senior managers traded domestic, internal expertise in business and testing processes for offshore labor, reducing OpEx. With this practice, known as labor arbitrage, an organization could reduce headcount and shift the responsibility for software testing to an army of outsourced resources trained on the task of software testing. This shift to outsourcing had three main detrimental impacts on software testing: the model promoted manual task execution, the adoption of automation was sidelined, and there was a business process “brain-drain,” or knowledge drain.

With the expansion of Agile and the adoption of enterprise DevOps, organizations must execute the software testing life cycle rapidly and effectively. Organizations will need to consider tightly integrating the software testing life cycle within the development cycle, which will challenge those using an offshore model for testing. Teams must also think beyond a simple bottom-up approach to testing and re-invent the software testing life cycle to meet the increasing demands of the business.

Testers had limited access to critical information 
Perhaps the greatest challenge facing individuals responsible for software testing is staying informed about change. This can mean requirements-driven changes to dependent applications or services, changes in usage patterns, or late changes in the release plan that impact testers’ ability to react within the required timelines.

Interestingly, most of the data required for testers to do their job is available in the monitoring and infrastructure management tools across production and pre-production. However, this information just isn’t aggregated and optimized for the purpose of software testing. Access to APIs and advancements in the ability to manage and analyze big data changes this dynamic in favor of testers. 

Test tools created siloes
Although each organization is structurally and culturally unique, the one commonality found among Agile teams is that the practice of testing software has become siloed. The silo is usually constrained to the team, or to a single application that might be built by multiple teams. These constraints create barriers, since tests must execute across componentized and distributed system architectures.

Ubiquitous access to best-of-breed open-source and proprietary tools also contributed to these silos. Point tools became very good at driving automated tests. However, test logic became trapped as scripts across an array of tools. Giving self-governing teams the freedom to adopt a broad array of tools comes at a cost:  a significant degree of redundancy, limited understanding of coverage across silos, and a high amount of test maintenance. 

The good news is that point tools (both open-source and proprietary) have become reliable at driving automation. However, what’s missing today is an Open Testing Platform that drives productivity across teams and their independent testing tools.

Environment changes hampered automation
Remarkably, while the automated development of tests hovers at about 30%, the automated execution of tests is half that rate, at 15%. This means that tests built to be automated are not likely to be executed automatically – manual intervention is still required. Why? It takes more than the ability to automatically steer a test for automation to yield results. For an automated test to run automatically, you need:

  •   Access to a test environment
  •   A clean environment, configured specifically for the scope of tests to be executed
  •   Access to compliant test data
  •   Validation assertions synchronized for the test data and logic

As a result, individuals who are responsible for testing need visibility into a broader set of environment data points located throughout the pre-production environment. Without automating these sub-tasks across the software testing life cycle, test automation will continue to produce anemic results.
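
A minimal sketch of automating those pre-flight sub-tasks as a gate in front of test execution; every check function below is a hypothetical stub standing in for a call into real environment, provisioning, and test data tooling:

```python
# Hypothetical pre-flight gate: the automated suite is triggered only when
# every prerequisite holds. Each check is a stub for a real integration.
def environment_reachable():
    return True  # stub: e.g., ping the environment's health endpoint

def environment_clean_and_configured(scope):
    return True  # stub: e.g., query the provisioning tool for state

def compliant_test_data_loaded(scope):
    return True  # stub: e.g., verify masked/synthetic data is in place

def assertions_synchronized(scope):
    return True  # stub: e.g., confirm expected results match the data set

def preflight(scope):
    checks = {
        "test environment is reachable": environment_reachable(),
        "environment is clean and configured for scope": environment_clean_and_configured(scope),
        "compliant test data is loaded": compliant_test_data_loaded(scope),
        "validation assertions match data and logic": assertions_synchronized(scope),
    }
    for name, ok in checks.items():
        print("PASS " if ok else "BLOCK", name)
    return all(checks.values())

if preflight("billing-regression"):
    print("all prerequisites met: trigger the automated suite")
```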

An Open Testing Platform levels the playing field 
Despite the hampered evolution of test automation, testers and software development engineers in test (SDETs) are being asked to do more than ever before. As systems become more distributed and complex, the challenges associated with testing compound. Yet the same individuals are under pressure to support new applications and new technologies – all while facing a distinct increase in the frequency of application changes and releases. Something has got to change.

An Open Testing Platform gives software testers the information and workflow automation tools to make open-source and proprietary testing point tools more productive in light of constant change.  An OTP provides a layer of abstraction on top of the teams’ point testing tools, optimizing the sub-tasks that are required to generate effective test scripts or no-code tests. This approach gives organizations an amazing degree of flexibility while significantly lowering the cost to construct and maintain tests.

An Open Testing Platform is a critical enabler of both the speed and the effectiveness of testing. The OTP follows a prescriptive pattern to help an organization continuously improve the software testing life cycle. This pattern is ‘inform, act and automate.’ An OTP offers immediate value to an organization by giving teams the missing infrastructure to effectively manage change.

The value of an Open Testing Platform

Inform the team as change happens
What delays software testing? Change, specifically late changes that were not promptly communicated to the team responsible for testing. One of the big differentiators for an Open Testing Platform is the ability to observe and correlate a diverse set of data points and inform the team of critical changes as change happens. An OTP automatically analyzes data to alert the team of specific changes that impact the current release cycle.

Act on observations
Identifying and communicating change is critically important, but an Open Testing Platform has the most impact when testers are triggered to act. In some cases, observed changes can automatically update the test suite, test execution priority or surrounding sub-tasks associated with software testing. Common optimizations such as risk-based or change-based prioritization of test execution can be automatically triggered by the CI/CD pipeline. Other triggers to act are presented within the model-based interface as recommendations based on known software testing algorithms.
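
A small sketch of what a change-based prioritization step wired into a pipeline might look like (the test metadata, risk scores, and time budget are invented): rank tests by whether they cover changed code and by business risk, then run whatever fits the budget:

```python
# Hypothetical test metadata: in practice this would be derived from the
# model and from curated change observations, not hand-maintained.
TESTS = [
    {"name": "checkout_happy_path",  "risk": 9, "covers_change": True,  "minutes": 4},
    {"name": "profile_update",       "risk": 3, "covers_change": False, "minutes": 2},
    {"name": "refund_flow",          "risk": 8, "covers_change": True,  "minutes": 6},
    {"name": "legacy_report_export", "risk": 2, "covers_change": False, "minutes": 9},
]

def prioritized(tests, budget_minutes):
    """Select the highest-value tests that fit the pipeline's time budget."""
    ranked = sorted(tests, key=lambda t: (t["covers_change"], t["risk"]), reverse=True)
    selected, used = [], 0
    for t in ranked:
        if used + t["minutes"] <= budget_minutes:
            selected.append(t["name"])
            used += t["minutes"]
    return selected

print(prioritized(TESTS, budget_minutes=10))
# -> ['checkout_happy_path', 'refund_flow']
```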

Automate software testing tasks 
When people speak of “automation” in software testing, they are typically speaking about automating test logic against a UI or API. The scope of tests that can be automated goes beyond the UI or API, but it is also important to understand that the scope of what can be automated in the software testing life cycle (STLC) goes far beyond the test itself. Automation patterns can be applied to:

  •   Requirements analysis
  •   Test planning
  •   Test data
  •   Environment provisioning
  •   Test prioritization
  •   Test execution
  •   Test execution analysis
  •   Test process optimization

Key business benefits of an Open Testing Platform
By automating – or augmenting with automation – functions within the software testing life cycle, an Open Testing Platform can provide significant business benefits to an organization. For example:

  • Accelerating testing will improve release cycles
  • Bringing together data that had previously been siloed allows more complete insight
  • Increasing the speed and consistency of test execution builds trust in the process
  • Identifying issues early improves capacity
  • Automating repetitive tasks allows teams to focus on higher-value optimization
  • Eliminating mundane work enables humans to focus on higher-order problems, yielding greater productivity and better morale

Software testing tools have evolved to deliver dependable “raw automation,” meaning that the ability to steer an application automatically is sustainable with either open-source or commercial tools. If you look across published industry research, you will find that software testing organizations report test automation rates to be (on average) 30%. These same organizations also report that automated test execution is (on average) 16%. The gap between the creation of an automated test and the ability to execute it automatically lies in the many manual tasks required to run the test. Software testing will always be a delay in the release process if organizations cannot close this gap.

Automation is not as easy as applying automated techniques for each of the software testing life cycle sub-processes.  There are really three core challenges that need to be addressed:

  1. Testers need to be informed about changes that impact testing efforts. This requires interrogating the array of monitoring and infrastructure tools and curating the data that impacts testing.
  2. Testers need to be able to act on changes as fast as possible. This means that business rules will automatically augment the model that drives testing – allowing the team to test more effectively.
  3. Testers need to be able to automate the sub-tasks that exist throughout the software testing life cycle. Automation must be flexible enough to accommodate each team’s needs, yet simple enough to allow incremental changes as the environment and infrastructure shift.

Software testing needs to begin its own digital transformation journey. Just as digital transformation initiatives are not tool initiatives, the transformation to sustainable continuous testing will require a shift in mindset.  This is not shift-left.  This is not shift-right. It is really the first step towards Software Quality Governance.  Organizations that leverage multiple open-source or proprietary testing tools must consider an Open Testing Platform to keep pace with Agile and enterprise DevOps initiatives.  

Testing tools deliver quality – NOT! (November 19, 2020)

I was recently hired to do an in-depth analysis of the software testing tool marketplace. By the way, there are more tools in the software testing space than in a do-it-yourself home improvement warehouse. Given this opportunity to survey a broad set of software testing tool vendors, it was pretty interesting to look at the promises they make to the market. These promises can be split into four general categories:

  • We provide better quality
  • We have AI and we are smarter than you
  • We allow you to do things faster
  • We are open-source – give it a go

What struck me most was the very large swath of software testing tool vendors who are selling the idea of delivering or providing “quality.” To put this into a pointed analogy: claiming that a testing tool provides quality is like claiming that COVID testing prevents you from being infected. The fact is, when a testing tool finds a defect, “quality” has already been compromised. Just as when you receive a positive COVID test, you are already infected.

Let’s get this next argument out of the way. Yes, testing is critical in the quality process; however, the tool that detects the defect DOES NOT deliver quality. Back to the COVID test analogy: the action of wearing masks and limiting your exposure to the public prevents the spread of the infection. A COVID test can help you make a downstream decision to quarantine in order to stop the spread of infection, or an upstream decision to be more vigilant about wearing a mask or limiting your exposure to high-risk environments. I’m going to drop the COVID example at this point out of sheer exhaustion on the topic.

But let’s continue the analogy with weight loss – a very popular topic as we approach the holidays. Software testing is like a scale: it can give you an assessment of your weight. Software delivery is like the pair of pants you want to wear over the holidays. Weighing yourself is a pretty good indicator of your chances of fitting into that pair of pants at a particular point in time.

Using the body weight analogy is interesting because a single scale might not give you all the information you need, and you might have the option to wear a different pair of pants.  Let me unpack this a bit.  

The scale(s)
We cannot rely on a single measurement, nor on a single instance of that measurement, to assess the quality of an application. In fact, it requires the confluence of many measurements, both quantitative and qualitative, to assess the quality of software at any particular point in time. At a very high level there are really only three types of software defects:

  • Bad Code
    • The code is poorly written
    • The code does not implement the user story as defined
  • Bad User Story
    • The user story is wrong or poorly defined
  • Missing User Story 
    • There is missing functionality that is critical for the release

Using this high-level framework, radically different testing approaches are required. If we want to assess bad code, we would rely on development testing techniques like static code analysis to measure the quality of the code. We would use unit testing, or perhaps test-driven development (TDD), as a preliminary measurement to understand if the code is aligned to a critical function or component of the user story. If we want to assess a bad user story, this is where BDD, manual testing, functional testing (UI and API) and non-functional testing take over to determine whether the user story is adequately delivered in the code. And finally, if we want to understand whether there is a missing user story, this is usually an outcome of exploratory testing, when you get that ‘A-ha’ moment that something critical is missing.
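
A toy example of the distinction (the discount rule and code are invented): a unit test can confirm that the code implements the user story as written, but if the story itself is wrong or missing, only business-facing or exploratory testing will expose it:

```python
# Invented user story: "orders over $100 get a 10% discount."
def discounted_total(subtotal):
    return subtotal * 0.9 if subtotal > 100 else subtotal

# Unit test (catches "bad code"): verifies the code matches the story as written.
def test_discount_applied_over_100():
    assert discounted_total(200) == 180
    assert discounted_total(100) == 100  # boundary: not "over" 100

test_discount_applied_over_100()

# No unit test can catch a "bad user story" -- e.g., if the business actually
# wanted the discount at $100 and up, every test above still passes. That
# defect surfaces through BDD, functional, or exploratory testing with
# people who know the domain.
```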

The pants
Let’s refresh the analogy quickly. The scale is the software testing tool, and we weigh ourselves to make sure we can fit into our pants – our release objective. The critical concept here is that not all pants are designed to fit the same, and the same is true for software releases. Let’s face it, our software does not have to be perfect and, to be blunt, “perfection” comes at a cost far beyond an organization’s resources to achieve. Therefore, we have to understand that some pants are tight, with more restrictions, and some pants are loose, which gives you more comfort. So, you might have a skinny jeans release or a sweatpants release.

Our challenge in the software development and delivery industry is that we don’t differentiate between skinny jeans and sweatpants. This leads us to a test-everything approach, which is a distinct burden on both speed and cost. The alternative – the “test what we can” approach – is also suboptimal.

So, what’s the conclusion? I think we need to worry about fitting into our pants at a particular point in time. There is enough information currently available throughout the software development life cycle and production to guide us in creating and executing the optimal set of tests. The next evolution of software testing will not solely be AI. The next evolution will be using the data that already exists to optimize both what to test and how to test it. Or in other terms, we will understand the constraints associated with each pair of pants, and we will use our scale effectively to make sure we fit into them in time for the holiday get-together of fewer than 10 close family members.

Guest View: The de-evolution of software testing (August 18, 2020)

Software testing is nearing the end of its Cretaceous period.  Personally, I invite the proverbial asteroid to advance its destructive approach so the practice of software testing can continue down its much-needed evolutionary journey. Don’t get me wrong, software testing has not been totally stagnant; it did evolve during its Cretaceous period.  The most significant shift was at the top of the testing food chain, as developers evolved to accept more responsibility for software quality. This distribution of the onus of quality is a critical stepping stone for the industry’s next evolutionary leap.  

The evolution of software testing has been – in comparison to other technologies – slow. If you agree that software testing as a practice has been sluggish, then we need to take a step back and ask: “Why are we in this situation?” This article will explore the two main reasons why I believe software testing has not evolved as fast as it should, and in a follow-up article I will offer my hopes for software testing’s natural selection.

Two main reasons software testing has not evolved
I believe that there are two main reasons why software testing has not evolved: organizations are handcuffed by the global system integrators (GSIs) and testing has had a messed-up organizational structure.  

Between the two, which is the chicken and which is the egg?  If software quality had a stronger reporting hierarchy could the GSIs exert so much control?  Did the GSIs abuse their position and successfully mute the internal opposition? I have my guesses but I would love to hear your opinion.

Handcuffed by the GSIs
Let’s start this discussion with the GSIs because the topic is significantly more incendiary. The general concept here is that senior managers traded domestic, internal expertise in business and testing processes for offshore labor, reducing OpEx. With this practice, known as labor arbitrage, an organization could reduce headcount and shift the responsibility for software testing to an army of outsourced resources trained on the task of software testing. There were three main detrimental impacts on software testing with the shift to the GSIs: the model promoted manual task execution, the adoption of automation was sidelined and there was a business process “brain-drain,” or knowledge drain.

Given the comparatively lower cost of labor (an average of 2.5 to 1), the GSI model primarily structured and executed tasks manually.  The GSIs painted a picture of an endless supply of technical resources clamoring to work 24/7 compared to complacent domestic resources.  It conjured images of the secretarial pool (without iPhones) hammering away at test plans at 60% of the current spend.  With an abundance of human capital there is really no impetus to promote automation.  As for the domestic operation, costs were contained for the time being as software testing was demoted from being a strategic task.

It’s obvious, but needs to be highlighted, that the GSI model that favored the manual execution of tasks also sidelined automation efforts.  Why?  In the GSI model, automation potentially eliminates headcount and reduces testing cycle times.  Less headcount plus reduced cycle times equates to fewer billable hours and reduced revenue in a time and materials model.  Therefore, the benefits of automation certainly would not serve the financial goals of the GSI.  Furthermore, if automation was suggested to your service provider, then the GSI suggested that they build it for you. All GSIs today sit on millions of lines of dead code that represent the efforts to build one-off automation projects.  This dead code also represents millions of dollars in billable hours.  

Perhaps the greatest impact on the evolution of software testing was the business and process brain drain. With lower OpEx as the bait, the global software testing services market swelled to $32 billion annually (that is “B-Billion”). This tectonic shift drained resources with deep business process knowledge from the domestic organization. The net effect of this brain drain was less impactful outcomes from the activity of testing. What’s my evidence?

  • Severely swollen test suites 
  • No concepts of risk or priority in test suites
  • Metrics driven by count of tests
  • False positive rates >80%
  • Abandoned test suites because the code is too far out of sync with the tests
  • There’s more but this is getting too depressing…

Let me be very open about my opinion on this matter. Organizations traded process control for lower costs. In the post-Y2K world this seemed like a pretty good idea, since software primarily served an operational purpose. Today software is the primary interface to the business, and any facet of its delivery should be considered a core competency.

Testing has had a messed-up organizational structure
Testing has historically reported into the development team and this was a BIG mistake.  Testing should have always reported to operations.  I cannot think of a single reason why testing should not report to operations.  In fact, if testing did report to operations then I believe the practice of testing software would be in a significantly different evolutionary state.  Let’s play this concept out a bit.  What if the practice of software testing landed with operations instead of development? I think we would have seen three primary outcomes: more rapid adoption of development testing practices, advanced end-to-end test automation, and a focus on business risk.

If the software testing team historically reported to operations then there would have been (even) more tension between Dev and Ops. This tension would have promoted the need for more rigorous testing in development by developers.  The modern form of software testing (and the tension between developers and testers) evolved from the lack of diligent testing by developers. Practices such as static analysis, structural analysis, early performance testing and unit testing matured slowly over the past decade.  The evolution of these practices often created tension as organizations layered in quality and security governance programs.  

If the software testing team reported to operations, software testing would have been one of the frontline tasks in ITIL processes, versus a more diminutive validation task.  Speed would have come to light earlier as a business objective, therefore promoting the adoption of advanced automation techniques.  I realize that my statement above is loaded with some solid conjecture but it contains some of the core drivers of DevOps — so please feel free to comment.  With speed to production being a more prominent objective, there would be better access to production data, better access to environment data and a more cohesive approach to the application life cycle and not just the software development life cycle.  Automation would have become an imperative and not an alternative to outsourcing.  

With software testing reporting to operations, I believe the KPIs and metrics driving the activity would have been different.  Metrics like count of tests and percentage of tests executed would have never leaked onto dashboards.  I believe we would have evolved metrics more closely aligned to business risk and would have evolved models that allow the organization to more reliably assess the risks associated with releasing software at any point in the development cycle.  

Now I’m depressed, yet energized
We are in a pretty unique time in the evolution of software testing. We are facing new challenges associated with working from home. We face unprecedented pressure from digital transformation initiatives. Speed is the new mantra for software testing, yet the penalty for software failure is at an all-time high as news of outages and end-user frustration goes viral on social media. Now is the time to re-think the whole process. I will share some of those ideas in my next article, on software testing natural selection.

Test automation: Tools don’t work (October 3, 2019)

I know—it’s a pretty startling statement, especially coming from someone who’s worked in the test automation industry for nearly two decades now. But it’s the truth. 

Sure, you can probably download (or sign up for) a test automation tool in less time than it takes to read this article, automate some happy paths through an API/web/mobile interface, and be “doing” test automation before the end of the day or the week. If you’re on a small team of like-minded individuals, you can probably get most or all of your team members on board, and showcase some test automation gains after a few weeks. 

But then what? 

Do you keep maintaining and extending test automation even once the novelty wears off, priorities shift, and team members come and go? Are all the other teams and divisions who are working on related application components aligned with what your team is doing—to the point where you can accurately assess how the latest changes impact the holistic user experience? If so, what’s keeping it all going? And if not, does your pocket of automation really help stakeholders answer the most important question for today’s highly accelerated and automated delivery processes: Does the release have an acceptable level of business risk? 

It’s becoming increasingly clear that the era of digital transformation is dramatically changing the expectations for testing. Today, speed is the new currency. Digital transformation is driving organizations to expect new functionality faster and faster—and, at the same time, customers with myriad options have grown increasingly intolerant of even the slightest glitch.  

Stakeholders need near real-time insight into whether pushing each release into production will negatively impact the overall user experience and ultimately do more harm than good. Unless you have a highly advanced (dare I say “native”) microservices architecture, you cannot reliably assess the overall risk with isolated pockets of test automation. You can’t assess the overall risk without test automation tools either. With today’s ultra-compressed delivery cycles and highly complex, distributed applications, relying solely on manual testing is simply not an option. 

Putting too much faith in test automation tools, though, can be just as bad as having only isolated pockets of test automation or no test automation at all.  From what I’ve seen, organizations that are hyper-focused on adopting tools simply tread water longer. They don’t really progress from a process perspective. Organizations really need to tackle test automation (and the broader Continuous Testing, which is essential for Agile and DevOps) from a transformation perspective. They need to holistically make process changes to satisfy the business goals that they are being asked to achieve. 

In survey after survey, you’ll find that organizations appear to be adopting test automation tools at an increasing rate. However, if you take a deep dive into their test automation success, you typically don’t see measurable process improvement or progress towards their objectives. There’s a huge difference between the idea of adopting a tool and committing to the idea of transforming a process. 

Don’t get me wrong. Tools do matter. You will not succeed unless you have test automation tools that provide the specific functionality required to meet your goals. But it’s absolutely essential to realize that tools are just one piece of a large and highly complex puzzle that also involves people and processes. People need to be willing to change and they need to be comfortable with the change. But how do you prepare the people and adapt the process to enable you to get the most out of the tools you choose to adopt? 

If you look at the organizations that have successfully transformed their testing processes, you will find that they have several things in common:

  • The pain became so pointed that they recognized the value of changing
  • A champion committed to providing the tools, access, and guidance required to get everyone—not just specific teams—to the place where they need to be
  • Everyone understands exactly what’s changing and why—and what that means to them on a day-to-day basis
  • They have clear, meaningful KPIs for measuring progress and celebrating success
  • They collect and share internal success stories that prove change is feasible (and valuable) in the organization’s unique environment

Just like buying a new guitar isn’t going to catapult you into the same realm as your favorite musician in your favorite band, buying a new test automation tool isn’t a fast pass to a mature, sustainable Continuous Testing process. You also need commitment, alignment, training, guidance, collaboration and a framework for adapting to changing expectations and opportunities. It’s these often-overlooked elements that ultimately make all the difference between a tinkerer and a master. 

Guest View: RPA: Shiny new trend. Same old automation challenge (September 13, 2019)

Place yourself in Jeopardy! mode for a moment. Here’s the answer…

Automatically driving an application via the UI or API to make manual work faster, less burdensome, and more accurate. 

Time to give Alex Trebek your answer. If you say “What is test automation?”, you’re correct. But if you say “What is Robotic Process Automation?” you’re also correct. Undeniably, test automation and RPA have a lot in common—for better, or for worse.

What is RPA?
While software testing all-too-often remains the overlooked “second-class citizen” of the application development world, RPA has truly captured the attention (and dollars) of IT leaders—to the point where it has become the fastest-growing market in enterprise software. 

At its core, RPA is ultimately the same as software test automation. As the Jeopardy! leadoff suggested, both RPA and software test automation automatically drive an application via the UI or API to make manual work faster, less burdensome, and more accurate. 

 Of course, there are some key differences:

  • RPA focuses on automating sequences of tasks in production environments to successfully execute a clearly-defined path through a process so you can complete work faster
  • Test automation focuses on automating realistic business processes in test environments to see where an application fails so you can make informed decisions about whether an application is too risky to release

In other words: with RPA, you use automation to make a process work. For software testing, you focus on using automation to determine how a process can possibly break.

There are two critical automation capabilities required for both RPA and software test automation: UI automation and API automation.  At the same time, there are some core differences that must be addressed to enable either software test automation or RPA to succeed at scale in the enterprise. In terms of software test automation, this includes secure and stateful test data management, test-driven service virtualization, change-impact analysis, and risk-based test case design. For RPA, this involves production-grade execution, enterprise-grade security and access control, and comprehensive event triggers. 

RPA: Not robotic. Not process. “Just” automation
Despite its catchy moniker, RPA is really “just” automation. 

It’s easy to be enticed by vendor-induced visions of sophisticated cyborg-like robots taking over previously-human-led processes—faster, better, and cheaper. However, that’s quite far from RPA reality. From a technical perspective, RPA bots are really just sets of automation instructions. These instructions are either expressed in the form of scripts (with hard-coded technical details on how to find various element locators on the page) or model-based automation technologies (which define automation from the perspective of a business user and store automation details in reusable, rearrangeable automation building blocks). If you’ve ever worked with test automation, these approaches should sound quite familiar.
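
The contrast looks roughly like this sketch, using Selenium-style locators (the page, element IDs, and helper class are hypothetical):

```python
from selenium.webdriver.common.by import By

# Script-style bot: locators are hard-coded at every step, so any UI
# change breaks every script that touches this page.
def submit_invoice_script(driver, amount):
    driver.find_element(By.ID, "inv_amt_txt_2").send_keys(str(amount))
    driver.find_element(By.XPATH, "//div[3]/form/button[1]").click()

# Model-style bot: the page is described once, in business terms, and every
# automated flow reuses that building block. A UI change means one fix in
# the model instead of a hunt through scripts.
class InvoicePage:
    AMOUNT = (By.ID, "inv_amt_txt_2")               # maintained in one place
    SUBMIT = (By.XPATH, "//div[3]/form/button[1]")

    def __init__(self, driver):
        self.driver = driver

    def submit(self, amount):
        self.driver.find_element(*self.AMOUNT).send_keys(str(amount))
        self.driver.find_element(*self.SUBMIT).click()

def submit_invoice_model(driver, amount):
    InvoicePage(driver).submit(amount)
```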

Moreover, RPA’s strength has proven to be automating short repetitive tasks rather than long-running end-to-end processes.   According to Gartner, “the term ‘process’ in the RPA acronym is more accurately discrete ‘task’ automation. Most automations supported by RPA tools last, at most, a couple of seconds. Furthermore, at best, the process support aspect of these products is limited to simplistic workflow.” 

Solving age-old automation challenges
Ultimately, it’s the strength of the underlying automation engine that makes or breaks both RPA and software test automation initiatives. Given that script-based automation approaches have failed to meet enterprise test automation objectives over the past 20 years, it’s unreasonable to expect the same script-based approaches to now meet enterprise RPA objectives.  Not surprisingly, the script-based approaches that have yielded poor results in the software test automation world continue to fall short in the RPA sphere—and the resilient model-based approaches that enable high levels of enterprise test automation continue to rise to the top for RPA.

Brittle automation—the same core problem that has doomed so many software test automation initiatives—has already emerged as the #1 enemy of RPA success and ROI. As publications such as The Wall Street Journal and Forbes have been reporting, the problem with RPA is that bots break—a lot. RPA users are realizing what testers learned years ago: if your automation can’t adapt to day-to-day changes in interfaces, data sources and formats, underlying business processes, etc., then maintaining the automation is going to rapidly eat into your ROI. Moreover, with RPA, the repercussions of broken automation are much more severe. A test that’s not running is one thing; a real business process that’s not getting completed is another.

Recommendations
The common heritage of software test automation and RPA can be a blessing, not a curse. Old problems don’t have to be new again with RPA. The key challenge with RPA—the construction of sustainable automation—is a skill that most successful testers already possess.

As with test automation initiatives, the success of RPA initiatives ultimately rests on resiliency. It’s essential to find an automation approach that enables business users to rapidly create and update resilient automation for the organization’s core enterprise technology sets (SAP, Salesforce, ServiceNow, Excel, PDF, custom apps, etc.). RPA bots must be resilient to change and bot maintenance must be simple and straightforward…which eliminates the popular script-based approaches. Otherwise, RPA is simply short-lived automation that creates technical debt—leaving the organization at risk when automation fails to execute the required tasks.  

According to a recent Gartner keynote, many organizations are finding that software test automation is a great bridge into RPA initiatives, and “it’s important to utilize test automation assets and test automation teams as you build RPA.” There’s a growing trend of organizations entering into RPA by extending their test automation efforts—and those who have successfully conquered the test automation challenge are especially well-poised for success with RPA.

Continuous testing for DevOps: Is it all just a bunch of hype? (November 27, 2018)

Almost exactly one year ago, Forrester confidently predicted that 2018 would be “the year of Enterprise DevOps.” The blog, authored by the late Robert Stroud, began:

DevOps has reached “Escape Velocity.” The questions and discussions with clients have shifted from “What is DevOps?” to “How do I implement at scale?”

Continuous testing is not far behind. In early 2014, SD Times proclaimed “Forget ‘Continuous Integration’—the buzzword is now ‘Continuous Testing’” (in the very first article in the publication’s Continuous Testing category).  At the time, the concept of continuous testing seemed about as far-fetched as a Silicon Valley snowstorm to most testers in enterprise organizations—where pockets of DevOps were just surfacing among teams working on “systems of engagement.”

But since 2014, the world has changed. As Forrester predicted, the vast majority of enterprise organizations are now actively practicing and scaling DevOps. And the larger focus on Digital Disruption means that it’s now impacting all IT-related operations: including systems of record as well as systems of engagement.

When ExxonMobil QA manager Ann Lewis so memorably asked, “Is it all just a bunch of hype? Really?” at the Accelerate 2018 Continuous Testing conference, the clear consensus was a resounding “no.” Digital transformation, DevOps and continuous testing have gotten real for the conference attendees, largely composed of QA leaders across Global 2000 organizations. So real, in fact, that their employers cleared their schedules for a week and sent them to Vienna to learn what’s really needed to achieve Continuous Testing for DevOps…in an enterprise environment.    

Here are some of the key lessons learned—shared by leading testing professionals that have already made continuous testing for DevOps a reality in their own organizations:

“Test data is a pain in the ass”
Renee Tillet, Manager of DevOps Verify at Duke Energy, offered her perspective on one of the most underestimated pains of Continuous Testing: Test Data Management. Renee asserted:

“If you’re doing test automation, what’s the biggest pain in your ass? It’s test data. We would be in the middle of our sprint—the developers are done, the testers are getting ready to test, and guess what? The tester has no test data. Not only does he not have test data, but he doesn’t have time to go create that test data now. It’s too late.

By the time you get to that user story, your definition of ready should include not just what the developer needs, but also the test data you need to verify it. The test plan needs to be ready, and the data needs to be in the environment—or we don’t accept that story into the sprint.

Initially, we would create parameterized test cases, we’d put data in them, and they would run in the Dev environment. But then we’d try to run them over in the test environment, which was the next higher environment, and they would fail because the data was different. So, we came up with a data strategy that allowed us to use the same test data in all the environments.”
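
A stripped-down sketch of the data strategy Tillet describes, with invented environment names and values: the test logic is parameterized once, and the data it needs is resolved per environment, so the same case runs in dev and test alike:

```python
import os

# Hypothetical environment-keyed test data: the test case is written once;
# the data it consumes is resolved for whichever environment it runs in.
TEST_DATA = {
    "dev":  {"customer_id": "CUST-0001", "meter_id": "MTR-42"},
    "test": {"customer_id": "CUST-7731", "meter_id": "MTR-88"},
}

def data_for(env=None):
    env = env or os.environ.get("TEST_ENV", "dev")
    return TEST_DATA[env]

def test_billing_statement_generated():
    record = data_for()
    # ...drive the application with record["customer_id"] here...
    assert record["customer_id"].startswith("CUST-")

test_billing_statement_generated()
print("same parameterized case, environment-appropriate data")
```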

Number of test cases: Less is more
Numerous experts shared that a high number of test cases is no longer something to be worn as a badge of honor. It doesn’t help provide the fast feedback that the team expects.

Andreas Aigner, head of service and security management at the Linde Group, explained:

“We have a lot of examples from the past where we were proud of having 3,000+ test cases that ran continuously without uncovering any defects. I said, ‘Is that successful? Does that make sense? Don’t you think you have burned resources?’ At the end of the day, you have to search for high-value test automation, and you have to focus on the business risks.”

Martin Zurl of SPAR ICS added:

“We rely on risk-based testing to prioritize our test cases. We need to understand the way our customers are thinking and test the most important features—not every feature—because we need to speed up our automation. We need to give developers feedback extremely fast, so we focus on the main paths that our customers follow.”

Democratize test automation
Test automation is just one of the many elements required for continuous testing, but you simply can’t do continuous testing without high levels of test automation. QA leaders across organizations agreed that making test automation accessible and enabling business experts to control their own automation is key for jumpstarting and scaling test automation.

Amber Woods, VP of IT enterprise applications and platforms at Tyson Foods, introduced the concept of democratizing test automation:

“Other scripting tools for test automation were not well adopted because they didn’t really get traction within each of the teams. We’ve had success democratizing citizen data scientists and citizen integrators with applications like SnapLogic. Now we’re taking that same approach to test automation, using model-based test automation. This allows our business analysts to start test automation in an easy, fast way that will get us away from what we had before, which was a lot of scripting. Our goal is to get heavy, heavy adoption in the test automation space.

Say you’ve got Team A over here, and Team B over there. Team B’s leaving at a decent hour of the night, and Team A is working all night. Team A asks, ‘Why are you leaving so early? Don’t you have more testing to do?’ Team B responds, ‘Well, we’ve got all our testing automated. I’m going to push a button and I’m going to go home for the night.’ That gets teams to adopt test automation.”

Likewise, Ann Lewis, quality manager at ExxonMobil, spoke to the power of enabling more team members to “control their own automation”:

“What warmed my heart is that about six months after we really started getting into test automation, one of the business COE managers called me up and said ‘Wow, where did this come from? I want to put it in the hands of all of my business process experts. For the first time, we can control our own test automation. Test automation helps us ensure that, over and over again, business-critical functionality works after each application change.’ That actually started a competition amongst different business units—everybody wanted to get on that bandwagon.”

API testing is a faster, more stable way to test ~80 percent of your functionality
Sreeja Nair, product line manager at EdgeVerve, explained why their journey to continuous testing included API testing as well as test automation:

“UI testing is slow—for example, it can take 3 minutes to automate an end-to-end banking flow at the UI level. And if the UI is not ready, or it is down, you can’t test at all. Is that a good way to test? Obviously not. We found that the best way to address our problem is to attack the layer below the UI presentation layer: the business layer. We realized we could cover 80% of our functionality if we test at the business layer through APIs. We decided to change our tests from a UI-oriented design to an API-based design.

After we first define our test model, we find out which APIs need to be called and then chain the APIs together according to the component model we have designed. Testing a single API is not API testing. If you have a business scenario to test, you need to integrate your APIs to create realistic service-level integration tests.”
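
A minimal sketch of such a chained, business-level API test (the endpoints and payloads are invented; Python's requests library stands in for whatever HTTP client a team actually uses):

```python
import requests

BASE = "https://bank.example.com/api"  # hypothetical service

def test_deposit_updates_balance():
    """Chain the APIs the way the business flow chains them: open an
    account, deposit into it, then assert on the resulting balance.
    Each call feeds the next; testing one API alone would miss this."""
    account = requests.post(f"{BASE}/accounts", json={"owner": "test-user"}).json()
    account_id = account["id"]

    deposit = requests.post(f"{BASE}/accounts/{account_id}/deposits",
                            json={"amount": 250})
    deposit.raise_for_status()

    balance = requests.get(f"{BASE}/accounts/{account_id}/balance").json()
    assert balance["amount"] == 250
```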

In-sprint testing can’t focus (exclusively) on new tests
Aaron Carmack, automation architect and product owner at Worldpay, explained that one of their keys to advancing from “test automation zero to continuous testing hero” was recognizing that updating test cases as your application evolves is just as important as adding new ones:

“Our QA teams sit down with the dev team and the product owners as user stories are created to learn what these stories involve and what test cases will need to be updated. Once the sprint begins, we start updating those test cases, creating the new critical scenarios that we need, and updating the existing test that we believe will be impacted by the new user stories. We’re updating tests, creating new tests, and then executing tests based on the new user stories—all within the sprint. Also, when we execute the full regression suite, we identify the failures and commit to addressing them within the sprint. That way, false positives don’t undermine our CI/CD process.”

The broken promise of test automation (June 16, 2017)

For over two decades now, software testing tool vendors have been tempting enterprises with the promise of test automation. However, the fact of the matter is that most companies have never been able to achieve the desired business results from their automation initiatives. Recent studies report that test automation rates average around 20% overall, and from 26-30% for agile adopters.

I believe that several factors contribute to these dismal automation results…

Legacy software testing platforms were designed for a different age
The most commonly-used software testing tools today are predicated on old technology, but enterprise architectures have continued to evolve over the years. Development no longer focuses on building client/server desktop applications on quarterly release cycles — with the luxury of month-long testing windows before each release.

Almost everything has changed since test automation tools like those by Mercury, HP, Micro Focus, Segue, Borland, and IBM were developed. Retrofitting new functionality into fundamentally old platforms is not the same as engineering a solution that addresses these needs natively.

Legacy script-based tests are cumbersome to maintain
Scripts are cumbersome to maintain when developers are actively working on the application. The more frequently the application evolves, the more difficult it becomes to keep scripts in sync. Teams often reach the point where it’s faster to create new tests than update the existing ones. This leads to an even more unwieldy test suite that still (eventually) produces a frustrating number of false positives as the application inevitably continues to change. Exacerbating the maintenance challenge is the fact that scripts are as vulnerable to defects as code—and a defect in the script can cause false positives and/or interrupt test execution.

The combination of false positives, script errors, and bloated test suites creates a burden that few QA teams can overcome. It’s a Sisyphean effort — only the boulder keeps growing larger and heavier.

Software architectures have changed
Software architectures have changed dramatically, and the technology mix associated with modern enterprise applications has grown immensely. We’re trying to migrate away from mainframes and client/server as we shift towards cloud-native applications and microservices. This creates two distinct challenges:

  • Testing these technologies requires either a high degree of technical expertise/specialization or a high level of business abstraction that allows the tester to test without diving into the low-level technical details.
  • Different parts of the application are evolving at different speeds, creating a process cadence mismatch.

The software development process has changed
Although most enterprises today still have some waterfall processes, there’s an undeniable trend towards rapid iterations with smaller release scopes. We’ve shifted from quarterly releases to biweekly or even daily ones, with extraordinary outliers like Amazon releasing new code to production every 11.6 seconds. This extreme compression of release cycles wreaks havoc on testing, especially when most testers must wait days or weeks for access to a suitable test environment and test data.

The responsibility for quality has changed
In response to the desire for faster release cycles, there’s been a push to “shift left” testing. The people creating the code are assuming more responsibility for quality because it’s become imperative for getting to “done done” on time. However, for large enterprises working on complex applications, developer-led testing focuses primarily on a narrow subset of code and components. Developers typically lack both the time and the access required to test realistic end-to-end business transactions. Although the onus for quality has shifted left, the legacy platforms, rooted in waterfall processes, have a distinct bias towards the right. This makes it difficult to blend both approaches.

Open-source testing tools have changed the industry
The rise of open-source software testing tools such as Selenium and SoapUI have had both positive and negative effects. Traditionally, open-source testing tools are laser-focused on solving a very specific problem for a single user. For example, Selenium has become an extremely popular script-based testing tool for testing web interfaces. Yet, although Selenium offers speed and agility, it does not support end-to-end tests across packaged apps, APIs, databases, mobile interfaces, mainframes, etc.. There’s no doubt that most of today’s enterprise applications feature a web UI that must be tested. However, in large enterprises, that web interface is just one of many elements of an end-to-end business process. The same limitation applies to SoapUI and API testing.

So… now what?
Software testing must change. Today’s software testing challenges cannot be solved by yesterday’s ALM tools. With disruptive initiatives like DevOps, Continuous Delivery, and Agile expanding across all industry segments, software testing becomes the centerpiece for data-driven software release decisions. This next wave of SDLC maturity requires organizations to revamp antiquated testing processes and tools. This means that organizations must have technologies that enable Continuous Testing — or innovative ideas will remain hostage to yesterday’s heavyweight testing tools.

Does your release candidate have an acceptable level of risk? https://sdtimes.com/business-development/release-candidate-acceptable-level-risk/ Tue, 26 Jul 2016 18:00:11 +0000

Today’s DevOps and “Continuous Everything” initiatives require the ability to assess the risks associated with a release candidate—instantly and continuously. Yet, as the release date looms, development teams are still focused on answering the question, “Are we done testing?”

Fundamentally, this is the wrong question. It ties the concept of “quality” to static tests that produce multiple, independent, and primarily binary data points of pass or fail. This approach yields a lot of data points, but not the information the business needs to understand the real impact on the end-user experience.

(Related: Putting the test back in DevOps)

Understanding the specific risks associated with each release candidate becomes mission-critical as organizations attempt to accelerate the release cycle. Without this visibility into the impact on the business, managers cannot make the appropriate tradeoff or timing decisions for releasing software.

Instead of “Are we done testing?” we should be asking, “Does the release candidate have an acceptable level of business risk?” This new question is much more complex than it seems on the surface. It carries a few critical assumptions:

  • The inherent business risks associated with a given application and the particular release candidate are well defined.
  • There is an understanding of how to measure each of these defined business risks.
  • A baseline and thresholds are established for defining what constitutes an acceptable level of risk. Some business risks might have zero tolerance and no thresholds for acceptance.
  • Automation is in place to continuously assess the state of the application versus these defined risks (a minimal sketch of such a risk gate follows this list).
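
As a thought experiment, the last two assumptions might be automated along the following lines. The risk names, weights, and thresholds are hypothetical placeholders for the business-risk definitions described above.

```python
# A minimal sketch of an automated release-risk gate. Risk names, weights,
# observed failure rates, and tolerances are hypothetical placeholders.
RISKS = {
    # name: (business_weight, observed_failure_rate, tolerance)
    "payment_processing": (0.5, 0.00, 0.00),  # zero-tolerance risk
    "user_login":         (0.3, 0.01, 0.02),
    "report_generation":  (0.2, 0.05, 0.10),
}

def assess_release_risk(risks):
    """Return (weighted_risk_score, risks_over_their_threshold)."""
    score = 0.0
    violations = []
    for name, (weight, observed, tolerance) in risks.items():
        score += weight * observed
        if observed > tolerance:
            violations.append(name)
    return score, violations

score, violations = assess_release_risk(RISKS)
if violations:
    print(f"Release blocked; risks over threshold: {violations}")
else:
    print(f"Release candidate acceptable (weighted risk score {score:.3f})")
```

The exact scoring model matters far less than the discipline it enforces: risks are named, measured, and compared against thresholds the business agreed to in advance.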

This is why the concept of Continuous Testing is so critical. Continuous Testing provides an automated, unobtrusive way to obtain immediate feedback on the business risks associated with a software release candidate. It balances the traditional bottom-up tasks associated with software development and testing with a top-down approach focused on safeguarding the integrity of the user experience while protecting the business from the potential impacts of application shortcomings. Given the business expectations at each stage of the SDLC, Continuous Testing delivers a quantitative assessment of risk as well as actionable tasks that help mitigate risks before they progress to the next stage of the SDLC. The goal is to eliminate meaningless activities and produce value-added tasks that drive the development organization towards a successful release.

Continuous Testing is not simply more test automation… nor is it a “plug-and-play” solution. As with all process-driven initiatives, it requires the evolution of people, process and technology. We must accommodate the creative nature of software development as a discipline, yet we must face the overwhelming fact that software permeates every aspect of the business—and software failure now presents the single greatest risk to the organization.

Continuous Testing (when executed correctly) provides four major business benefits. First, it results in clearly delineated business risks associated with each application in the organization’s portfolio, including measurement standards for assessing the level of risk. It guides business and technical teams to collaboratively close the gap between business risk and development activities.

Second, Continuous Testing establishes a safety net that allows software developers to bring new features to market faster. With a trusted test suite ensuring the integrity of the related application components and functionality, developers can immediately assess the impact of code changes. This not only accelerates the rate of change, but also mitigates the risk of software defects reaching your customers.

Third, Continuous Testing allows managers to make better tradeoff decisions. From the business’ perspective, achieving a differentiated competitive advantage by being first to market with innovative software drives shareholder value. Yet software development is a complex endeavor, so managers constantly face tradeoff decisions in order to meet the stated business objectives. By providing a holistic understanding of the risk of release, Continuous Testing helps to optimize the business outcome.

Fourth, when teams are continuously executing a broad set of tests via “sensors” placed throughout the SDLC, they collect metrics regarding the quality of the process as well as the state of the software. The resulting metrics can be used to reexamine and optimize the process itself, including the effectiveness of the tests. This information can be used to establish a feedback loop that helps teams incrementally improve the process. Frequent measurement, tight feedback loops and continuous improvement are all key DevOps principles.

To explore how Continuous Testing accelerates the SDLC, promotes innovation and helps mitigate business risks, we recently published “Continuous Testing for IT Leaders.” This book is written for senior development managers and business executives who need to achieve the optimal balance between speed and quality with software applications that are the primary interface with customers… and ultimately revenue.

Rollback as a quality strategy: The ‘pink slime’ of continuous delivery https://sdtimes.com/continuous-delivery/rollback-as-a-quality-strategy-the-pink-slime-of-continuous-delivery/ Mon, 24 Feb 2014 06:00:00 +0000 Users may not appreciate being the guinea pigs in your continuous delivery testing process.

Not all organizations face the same business risks associated with application failure, and the cost of software quality certainly varies across industries. Remember: the cost of quality isn’t the price of creating quality software; it’s the penalty or risk incurred by failing to deliver quality software. One thing that doesn’t vary, however, is that organizations that test in production don’t advertise the fact that they’re relegating a large part of their quality process to unsuspecting customers.

All too often, application updates in a continuous-delivery process are tiered. Updates are premiered at the lowest tier, using the lowest-priority clients as guinea pigs who unwittingly serve as real-time user-acceptance testers. If this real-time user-acceptance testing doesn’t indicate any major problems, the updates are then pushed out to higher-value customers.

However, if the “early adopters” report significant issues in the updated version, the organization rolls it back, tries to resolve the problems, and then starts the tiered process all over again. The organization recognizes that they are putting a certain percentage of their users at risk, but they consider this a necessary evil for getting the release out into the field.
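
For illustration only, the tiered-rollout mechanics described above might look something like the following sketch. The tier names, rollout fractions, and bucketing scheme are all hypothetical.

```python
# A hedged sketch of tiered rollout: lowest-priority tiers see the new
# version first. Tier names and fractions are hypothetical.
import hashlib

ROLLOUT_STATE = {
    "free_users":        1.00,  # premiered here first: unwitting UAT
    "standard_accounts": 0.10,  # expanded only if no major issues surface
    "premium_accounts":  0.00,  # highest-value customers shielded longest
}

def gets_new_version(user_id: str, tier: str) -> bool:
    """Deterministically bucket a user into [0, 1) and compare against
    the fraction of that tier currently receiving the update."""
    digest = hashlib.sha256(f"{tier}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # map to [0, 1)
    return bucket < ROLLOUT_STATE[tier]

# Example: every free user gets the update; no premium user does (yet).
print(gets_new_version("user-42", "free_users"))        # True
print(gets_new_version("user-42", "premium_accounts"))  # False
```

If the free tier reports problems, the fractions are dialed back to zero, a rollback in all but name, and those users have served as the QA department.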

Do you remember when the media exposed the use of “pink slime” as a meat additive? If not, you can see this video for a refresher course on all the grisly details. Organizations leveraging their unconsenting and unaware users as QA are the equivalent of the meat industry using pink slime: It’s an unpleasant business reality that they would prefer to keep hidden.

Although the use of pink slime might have been a “cash cow” for the beef industry for many years, the exposure of this practice has already driven at least one beef producer to bankruptcy and forced the industry as a whole to think long and hard about whether the business risks are truly worth the cost savings. Likewise, backlash stemming from the current media spotlight on software failures—both functional glitches and security breaches—is starting to force our industry to reassess the true cost of quality for software.

Rollback as a quality strategy undeniably places the user experience at risk. With application switching costs at an all-time low, you’re really opening the door to defection caused by poor user experiences. For example, consider the recent proliferation of Yahoo Mail frustrations that reportedly prompted a mass migration to Gmail and other providers.

Moreover, if the users are actually paying subscribers or customers, it’s even worse: You’re forcing people to pay you in exchange for the “opportunity” to serve as your guinea pig. Once some lower-tier paying customers discover that their dollars are not as valuable as other people’s, you’ve got a big problem.

All this being said, using your customers as real-time user acceptance testers can be a business strategy. However, if software professionals decide to take this route, we really need to ensure that two minimum requirements are met. First, the business must overtly recognize that this practice is part of the organization’s overall execution strategy. Second, the business must truly understand the potential risk or cost of quality associated with this practice.

In my experience, most organizations have a wide gap between development’s technical decisions and the overarching business drivers that concern executive management. If both the business leaders and the development organization agree to use pink slime, there’s no problem: It’s a business decision. However, if management believes that the organization is providing 100% grass-fed, free-range, all-natural Angus ground beef with no filler—but development is in fact distributing beef laced with an ammonia-treated mash of meat trimmings—you’re undeniably opening the door to problems somewhere down the road.

Now that software has morphed from a business process enabler into a competitive differentiator, business expectations about the speed and reliability of software releases have changed dramatically. With a perfect storm of all-time-low switching costs, downward price pressure, and relentless media coverage of application failures, software quality matters more than ever. Ignoring this sea change regarding tolerance for faulty software is now a tremendous business risk: a risk equivalent to McDonald’s resuming the use of pink slime now that the public is all too aware of what that entails.

Wayne Ariola is Chief Strategy Officer of Parasoft, where he leads the development and execution of the company’s long-term strategy.

Do your developers make business decisions? https://sdtimes.com/delta-airlines/do-your-developers-make-business-decisions/ Fri, 21 Dec 2012 06:00:00 +0000 Programmers can inadvertently get your app into hot water. Consider redrawing your policies to prevent that.

Those of us in the development testing business rarely show restraint when a software failure makes big headlines. It’s bad enough that these organizations take a PR beating, but to kick them while they’re down to hawk our wares is uncouth at best. We’re not first in line, nor will we be last, but we’re taking our turn just the same.

Delta’s legal turbulence
An already fragile airline industry took another hit when California Attorney General Kamala Harris filed a lawsuit against Delta Airlines for failing to comply with the state’s Online Privacy Protection Act. The lawsuit concerns the Atlanta-based company’s mobile app, which is required to post a conspicuous privacy policy that informs users of what personal information is collected and how it will be used.

In this particular case, Delta is lucky. Aside from the bad publicity and the small amount of work needed to update the app, Delta will likely incur minimal damages from the infraction. Had there been a problem with one of the core features that could result in real damages, such as the “pay for checked bags” feature, the airline might have had a bigger problem on its hands.

Honest mistake? Maybe
We’re not here to judge Delta, or any other company for that matter. We’re only talking about this snafu to make a point: that developers are making business decisions every day, and that these decisions carry real consequences. Their code determines the safety, security, performance and reliability of the software that drives the business, giving them the power to introduce or minimize risks. By allowing developers to make critical business decisions related to the software, managers, directors and C-level executives have delegated to them an extraordinarily high level of business responsibility. Developer decisions directly affect immediate or future success, growth, damages or liabilities, as well as the stability of business leadership positions.

Delta’s snafu is a classic example of what can happen when developers are left to make business decisions. This is not a knock on developers. We love developers. Making sure that legal standards are met should be the job of the legal department. The development team was probably just excited to get their product out into the market and simply forgot to include the privacy policy. There are ways to align software developer decisions with business expectations, but that’s a topic for another day.

The point is that in the absence of a clearly defined policy that sets expectations on how software is to be designed and developed, developers are left to fill in certain business-related blanks. In most cases, this isn’t the developers’ strong suit. Other mobile app makers may want to take note and implement a policy that requires legal to review their products to ensure compliance with applicable laws. Better yet, why not automate this process in the development stage when the cost of addressing issues is at its lowest?
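
As a thought experiment, such a check could be as simple as a build-time script that fails fast whenever required privacy disclosures are missing. The manifest file name and field names below are hypothetical.

```python
# A hedged sketch of a build-time compliance gate. The manifest file and
# required fields are hypothetical; the actual policy would be defined
# with the legal department and run automatically on every build.
import json
import sys

REQUIRED_FIELDS = ["privacy_policy_url", "data_collected", "data_usage"]

def missing_policy_fields(manifest_path):
    """Return the required fields absent from the app manifest."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    return [field for field in REQUIRED_FIELDS if not manifest.get(field)]

if __name__ == "__main__":
    missing = missing_policy_fields("app_manifest.json")
    if missing:
        print(f"Build failed: missing privacy disclosures: {missing}")
        sys.exit(1)  # block the build before the app ever ships
    print("Privacy-policy check passed.")
```

A trivial gate like this would have flagged a missing privacy policy long before an attorney general did.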

Changing technology calls for changing practices
The software development world is facing multiple disruptive technologies. The move to cloud-based software, agile development, and the rapidly growing mobile market are just some of the emerging trends for which we must account. These technologies fan the flames of concerns that have faced software developers for some time: How do you ensure that the software is safe, secure, reliable, performs well, complies with regulations, and so on?

Without creating policies to ensure that development practices and business expectations are aligned, we are in danger of making the same mistake over and over. Changing technology means new threats and a shift in how we overcome classic problems. And our ideas of how we evolve software must also change.  

Wayne Ariola is VP of Strategy at Parasoft, which sells tools for implementing policy-driven software development.
