Sponsored Archives - SD Times
https://sdtimes.com/category/sponsored/

ASTQ Summit brings together test practitioners to discuss implementing automation (May 11, 2023)
https://sdtimes.com/test/astq-summit-brings-together-test-practitioners-to-discuss-implementing-automation/

Is automated testing worth the expense?

Real test practitioners will show how test automation solved many of their quality issues when the Automated Software Testing and Quality one-day virtual event returns on May 16. Produced by software testing company Parasoft, the event will cover topics including metrics, how automation can significantly cut test time, shifting testing left, the use (or not) of generative AI, the synergy between automation and service virtualization, and more.

“We’ve worked really hard to make sure that most of the sessions are coming from the practitioner community,” said Arthur Hicken, chief evangelist at Parasoft. “So people are telling you how they solved their problem – what metrics they use to solve the problems, what the main challenge was, what kind of results they saw, you know what pitfalls they’ve hit.”

As for AI in testing, Hicken said Parasoft has created AI augmentations for every layer of the testing pyramid, which he acknowledged is getting “kind of long in the tooth,” while adding that it still offers useful guidance. “Whether it’s static analysis, unit test, API testing, functional testing, performance testing, UX testing, we’ll talk about how these different things will help you in your day-to-day job.”

He went on to say that he doesn’t believe the things he’s talking about are job killers. “I think they’re just ways to help. I haven’t met any software engineer that says, I don’t have enough to do, I’ve got to pad my work with something. I think just being able to get their job done will make their life better.”

On the subject of generative AI, Hicken said it can be quite smart about some things but struggles with others: the more clearly you draw the boundaries of what you expect it to do, and the more narrowly you scope it, the better job AI does.

This, he said, is true of testing in general. “Service virtualization helps you decouple from real-world things that you can’t really control or can’t afford to play with,” he said. “Most people don’t have a spare mainframe. Some people interact with real-world objects. We see that in the healthcare space, where faxes are part of a normal workflow. And so testing becomes very, very difficult.”

Further, he said, “As we use AI to start to increase the amount of testing we’re doing, the permutations, we run into a data problem: we just don’t have enough real data. So it starts synthesizing virtual data. So the service virtualization is a way to synthesize data to get broader coverage. And because of that, there’s always a temptation to use real-world data as your starting point. But in many jurisdictions, real-world data is a pretty big no-no. GDPR doesn’t allow it.”

So, in the end, the question remains: How do you know it was worth it? What did you do to measure? Hicken said, “I don’t believe there’s a universal quality measure or ROI measure; I believe there are lots of fascinating different things that you can look at that might be interesting for you. So I would say look for that.”

Hicken also noted, humorously, that if test automation did not deliver value, the speakers he sought out for ASTQ would not have returned his calls. 

There is still time to register to learn more about automated software testing and Parasoft.

Proper identity verification can result in an increased trust with your customer base (April 25, 2023)
https://sdtimes.com/data/proper-identity-verification-can-result-in-an-increased-trust-with-your-customer-base/

With so much data flowing through modern organizations, verifying that the information on file is correct has become increasingly difficult.

If a company fails to verify the names, addresses, email addresses, and phone numbers of their users, the overall experience of end users will decline, and the company can end up putting itself at risk. 

Global data quality company Melissa developed its Personator Identity tool to combat this business problem and give users confidence that the data they have on file is up to date and accurate.

“If customers will give us a name, address, and date of birth, we then do a real time call to one of the credit bureaus to see if the data matches against the data that they have,” said Michael Lee, sales engineer at Melissa. “After, we will give the status of that data back to the client and say that the name matched, the national ID matched, the address matched, but maybe the date of birth did not match, so we didn’t return that.”

The Personator Identity tool works to verify individuals in real time using several different matching options.

The first option is Proof of Address, which verifies only the name and address. The second is Identity Verification (eIDV), which confirms that the name, address, date of birth, national ID, phone numbers, and emails all match.

The last option is 2×2 Match, which takes two pieces of information, like name and date of birth, and confirms a match against two authoritative sources, such as a credit agency, utility company, or Politically Exposed Persons (PEP) list. 

What is particularly important about this process is that it takes place in real time. Checks happen seamlessly while a new customer is being onboarded, which lets financial, insurance, and retail service providers deliver a smooth customer experience while protecting against fraud and maintaining appropriate KYC/AML compliance.
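To make the shape of such a check concrete, here is a minimal sketch of what a real-time verification call and per-field response might look like. The endpoint, field names, credential, and response format are invented for illustration; they are not Melissa’s actual Personator Identity API.

```python
import requests  # common HTTP client; any client would work

# Hypothetical payload for a real-time identity check. Field names and the
# endpoint below are invented for illustration, not Melissa's actual API.
payload = {
    "name": "Jane Example",
    "address": "123 Main St, Springfield",
    "dateOfBirth": "1990-01-01",
    "nationalId": "000-00-0000",
}

response = requests.post(
    "https://api.example.com/identity/verify",         # placeholder endpoint
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},   # placeholder credential
    timeout=10,
)
response.raise_for_status()

# A per-field match report (e.g. {"name": "matched", "dateOfBirth": "not_matched"})
# lets the onboarding flow decide whether to accept, reject, or escalate.
for field, status in response.json().items():
    print(f"{field}: {status}")
```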

“The whole objective is to make sure that you know your customer, and make sure that you are not onboarding anyone that is putting in any fake information,” Lee explained.

According to the company, Personator Identity also screens individuals against national watch lists so organizations can be aware if a name pops up on lists such as politically exposed persons, government sanctions, anti-terrorism, anti-money laundering, and government agency lists.

Lee stated that this is intended to allow companies to be aware of any potential risk that they may be exposing themselves to. This helps to avoid any backlash that could come from associating with certain political figures or other “high risk individuals.”

“The idea with national watch lists is to ensure that you are not onboarding anyone that has ties with FBI most wanted terrorists, or anyone that might cause potential threat or risk of fines due to the laws,” said Lee.

This all translates to users being able to establish a heightened level of trust with their customer base. By having the ability to verify their information and be sure that the data they have submitted is accurate, businesses can feel confident with the customers that they are allowing to be onboarded.

Lee also explained that the Personator Identity tool works to cleanse and standardize users’ data so that when it is checked against different databases for accuracy, it is as easily digestible as possible.

“With Personator Identity, this is built in. So, what we do is grab the data that was submitted, do the cleansing and validation step, and then we send it out to different providers to check if it matches or not,” Lee said. 

To learn more about Melissa Personator Identity, visit the website.

‘Flow Triangles’ help organizations ensure teams are working together (April 24, 2023)
https://sdtimes.com/value-stream/flow-triangles-help-organizations-ensure-teams-are-working-together/

There are people who believe that software development is pure art. And there are people who believe that it is basically manufacturing. The reality, of course, is that it’s somewhere in the middle.

Because of that, before you can even begin to measure how your team is performing, it’s critically important to understand your organization’s approach to development and how the teams are structured to maximize that effort.

“Finding good metrics, like flow metrics, end up being a balance between … do you treat what developers are doing as a manufacturing process? Or do you treat it more as a creative process?” said Jeremy Freeman, co-founder and CTO at Allstacks, providers of value stream intelligence software.  

Freeman referred back to the “Iron Triangle” view of software development quality, which holds that development can be optimized for speed, cost, or quality, and that any balance among the three involves tradeoffs.

This approach, he said, can also apply to flow metrics. 

Organizations can optimize more toward speed and predictability, or they can optimize toward data science and problem-solving. “These types of tradeoffs actually permeate all of your business decisions as technology leaders,” he said. “Do you focus on fixing quality? Or do you focus on fixing or shipping new features? And the flow metrics that are now a core component of the SAFe Framework end up having their own sorts of these ‘Flow Triangles.’ There’s your velocity, cycle time and team load. You always want to have really high-velocity routines. And that is intimately linked to how long it takes you to do things, and how many things are being worked on at once?”

Many high-functioning organizations have different teams working at different speeds, using different processes and tools, so coordinating that work is critical. “Thinking about flow metrics as a way to help make sure teams are working together is really important,” Freeman said. “If you imagine a team working on delivering a sprint goal, then you take a step back and think about how the collection of teams is working against shipping a major feature. You have to think about how fast things are getting delivered, and how that impacts your ship time. Are the levers you have to play with as a leader right? So these metrics are really helpful, and flow is really apt.”

Freeman recommends that organizations first figure out where their problems are, with the development team and all stakeholders. Then you can start measuring some coarse things around outcomes, and as you start identifying potential solutions, then you can get tighter and tighter with what you’re measuring. 

He noted that in talking to development teams, it seems like their biggest bottleneck is getting pull requests across the line. “There’s a high cycle time, no one will review my pull request, and that’s preventing us from actually shifting work,” he said. “In the pull request example,” he said, “maybe we’ll go from measuring your request cycle time to measuring how long it takes to get your first review, to know how long it takes you to actually complete any review cycle. And as you build those metrics up, you’ll actually get better information and start to pinpoint and solve problems.” 
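As a rough illustration of building those metrics up, the sketch below computes average time to first review and pull request cycle time from a few hypothetical PR records. In practice the timestamps would come from your Git hosting provider’s API or a value stream tool; the records shown here are made up.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical pull request records; real timestamps would come from your
# Git hosting provider's API (GitHub, GitLab, Bitbucket, and so on).
pull_requests = [
    {"opened": datetime(2023, 4, 3, 9), "first_review": datetime(2023, 4, 4, 15), "merged": datetime(2023, 4, 6, 11)},
    {"opened": datetime(2023, 4, 5, 10), "first_review": datetime(2023, 4, 5, 16), "merged": datetime(2023, 4, 7, 9)},
]

def hours(delta: timedelta) -> float:
    return delta.total_seconds() / 3600

# Time to first review: how long a PR waits before anyone looks at it.
time_to_first_review = mean(hours(pr["first_review"] - pr["opened"]) for pr in pull_requests)

# Cycle time: how long from opening a PR to merging it.
cycle_time = mean(hours(pr["merged"] - pr["opened"]) for pr in pull_requests)

print(f"Average time to first review: {time_to_first_review:.1f} hours")
print(f"Average PR cycle time: {cycle_time:.1f} hours")
```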

How Capital One Uses Python to Power Serverless Applications (April 18, 2023)
https://sdtimes.com/data/how-capital-one-uses-python-to-power-serverless-applications/

Cultivating a loyal customer base by providing innovative solutions and an exceptional experience should be the goal of any company, regardless of industry. 

This is one of the main reasons why Capital One uses Python to power a large number of serverless applications, giving developers a better experience as they deliver business value to customers.

Python has a rich toolset with codified best practices that perform well in AWS Lambda. Capital One has been able to take modules, whether they were developed internally or from the Python community, and put them together to build what is necessary inside of a fully managed compute instance.  

“We have vibrant Python and serverless communities within Capital One which has helped us advance this work,” said Brian McNamara, Distinguished Engineer.

Why Python for Serverless

Python and serverless practices are closely aligned in the development lifecycle, which allows for quick feedback loops and the ability to scale horizontally. Using Python for serverless also allows for:

  • Faster time to market: Developers use Python to quickly go from ideation to production code. Serverless applications developed with Python allow developers to have their code deployed on a resilient, performant, scalable, and secure platform.
  • Focus on business value: Lower total cost of ownership so developers can focus on features instead of maintaining servers and containers; addressing operating-level system concerns; and managing resilience, autoscaling and utilization.
  • Extremely fast scale: Serverless is event-driven, which helps with fast scaling, so it’s important to think of API calls, data in a stream, or a new file to process as events (a minimal handler sketch follows this list). For example, with built-in retry logic from cloud services, a Python serverless function can process the non-critical path from durable queues so the customer experience is not impacted.
  • Reusable resources: The Python ecosystem provides great resources and the AWS Lambda Powertools Python package is based on a number of open source capabilities. AWS Serverless Application Model also allows for a local testing experience that generates event examples.
  • Flexible coding style: Python provides a flexible coding style allowing developers to blend functional programming, data classes and Object Oriented Programming to process the event.
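
As referenced above, here is a minimal sketch of an event-driven handler in Python. The SQS-style event shape (a "Records" list with JSON bodies) is an assumption for illustration and would differ for other event sources; it is not drawn from Capital One’s codebase.

```python
import json

def handler(event, context):
    """Minimal event-driven AWS Lambda handler.

    Assumes an SQS-style event shape: {"Records": [{"body": "<json>"}]}.
    Other event sources (S3, Kinesis, API Gateway) carry different shapes.
    """
    processed = 0
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # Business logic for each message would go here.
        processed += 1
    return {"processed": processed}
```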
Observability Benefits

Additionally, Furman and McNamara emphasized that using Python to power serverless applications has given Capital One significant observability benefits, so developers know what is happening inside an application.

“Observability in serverless can often be perceived as more challenging, but it can also be more structured with libraries that codify logs, telemetry data and metric data. This makes it easy to codify best practices,” said Dan Furman, Distinguished Engineer.
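
As a small sketch of what that codification can look like, the handler below uses the open source AWS Lambda Powertools for Python package mentioned earlier to emit structured logs and a custom metric. The service name, namespace, and metric are placeholders, not Capital One’s actual configuration.

```python
from aws_lambda_powertools import Logger, Metrics
from aws_lambda_powertools.metrics import MetricUnit

# Placeholder service and namespace names, not a real configuration.
logger = Logger(service="orders")
metrics = Metrics(namespace="DemoApp", service="orders")

@logger.inject_lambda_context   # adds request ID, cold start, etc. to every log line
@metrics.log_metrics            # flushes metrics in CloudWatch EMF format on return
def handler(event, context):
    logger.info("Processing event", extra={"source": event.get("source")})
    metrics.add_metric(name="EventsProcessed", unit=MetricUnit.Count, value=1)
    return {"status": "ok"}
```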

Furman and McNamara also pointed out the importance of leveraging the vastness of both the serverless and Python ecosystems. Looking at the knowledge that has been acquired by other members of these communities allows for organizations to gain the benefit of their experiences. 

McNamara and Furman will be giving a presentation on using Python to power serverless applications at PyCon US 2023, taking place at the Salt Palace Convention Center in Salt Lake City, Utah, from April 19-27. For more information about PyCon US 2023, visit the website.

Eliminating environment concerns in mobile application testing (January 30, 2023)
https://sdtimes.com/test/eliminating-environment-concerns-in-mobile-application-testing/

Test environments can be a frustrating bottleneck to the testing process and the software development life cycle as a whole. Whether it be unavailable services, devices, or ever-elusive test data, ensuring the right environment for testing creates potential for barriers to shifting left at speed, and cutting corners can put application quality and your business at risk. 

A recent study of 1,000 software developers and startup employees found that at least 29% of organizations are using real customer production data in their testing environments. This poses numerous concerns, because using real customer data for testing opens the door to violations of GDPR regulations, which in turn can lead to loss of resources and reputation for companies. Furthermore, using real data can be disastrous in the event of a data breach – which 45% of companies report experiencing.  

Luckily, there are several steps testers and dev teams can take to ensure they are both eliminating these concerns and testing efficiently.  

This article will take a closer look at some of the most common environment concerns facing testers and dev teams—including the acquisition of usable test data—and explore solutions to eliminating these concerns that fit seamlessly into your CI/CD pipeline. 

Intelligent Mocks

Problem: Traditional mocks are too simplistic; legacy service virtualization is too complex 

Traditionally, technical teams have utilized mocks and stubs during the development and testing of their mobile apps. Mocks act as a response to external dependencies that are part of the application’s flow (databases, mainframes, etc.) but are not pertinent to the test at hand. Teams have used mocks so that developers can focus on their code’s functionality and not get sidetracked with these external dependencies. 

Traditional mocks and stubs are limited, however. They provide a simple response to the external dependency to keep the testing moving along. Mocks and stubs do not effectively test real-world scenarios because they do not consider the varied conditions that can arise outside of their particular response. 

But what if you want to test more “real world” conditions? 

Service virtualization allows more in-depth testing than traditional mocks and stubs. However, even if you have access to an expensive service virtualization solution, it will undoubtedly be complex and typically require specialized training or even on-site expertise to facilitate. As such, testers can be stalled in their testing process when waiting for virtual services experts to provide the required virtual services. 

Solution: Mock services are the shift-left answer 

Intelligent mocks, or mock services, are the ideal solution for teams looking for greater agility in their testing process. Intelligent mocks combine the capabilities of mocks and service virtualization to create a testing solution that emulates the behavior, data, and state of external dependencies. You can easily create a slow or garbled response to replicate unexpected real-world conditions, ensuring the application under test is ready for production.  

Mock services are simple to create: upload a well-known industry specification file such as a Swagger or WSDL file, supply request-response pairs, create a recording, use a template, or use one of the pre-built common services. Then, share services across the enterprise in an asset repository. These stored intelligent mocks can then be easily accessed for subsequent tests during all stages of the software development lifecycle.
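
As a rough, vendor-neutral illustration of the idea, the sketch below serves a canned response for a single endpoint and deliberately delays it to mimic a slow dependency. The endpoint, payload, and delay are invented for the example and are not tied to any particular mocking product.

```python
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned response standing in for an external account service.
CANNED_RESPONSE = {"accountId": "12345", "status": "ACTIVE", "balance": 1042.17}

class MockAccountService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/accounts/12345":
            time.sleep(2)  # simulate a slow dependency to exercise timeout handling
            body = json.dumps(CANNED_RESPONSE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of
    # the real dependency.
    HTTPServer(("localhost", 8080), MockAccountService).serve_forever()
```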

Synthetic Test Data

Problem: Testers have incomplete, incorrect, or unavailable test data

Do any of the following problems sound familiar to you? 

  • Our tests often fail due to outdated or incorrect test data sets. 
  • I cannot test changes to my app early enough because I have outdated test data and no way to create new data sets. 
  • The test data I was provided does not contain unique IDs as expected and that broke my test and delayed my release. 

These problems are just a few of the many issues facing testers when it comes to locating test data for testing mobile apps. While some frequent concerns for mobile testing include requiring a testing environment that is not yet ready or other departments not prioritizing the resources you require, by far the most common concern for testers and dev teams is the lack of relevant or complete test data. 

Many organizations rely on test data management (TDM) systems to create and deliver data; however, this often means the agile testing team waits days or weeks for the DBA to complete the data task, creating substantial delays in release cycles.  

Additionally, with the onset of regulations around personally identifiable information (PII), the challenge for testers lies in creating reliable test data that does not contain any PII. To get around this issue, organizations are trending toward using synthetic data.  

Solution: Having realistic, reusable test data on demand 

When adopting a continuous testing platform, the best options include the ability to generate realistic synthetic test data on the fly for various types of tests and synchronize that data across various components involved in testing. These include the test itself, the test environment, and external dependencies so that testers can work faster and more efficiently. Furthermore, testers can ensure that their app is being tested against relevant, real-world data while alleviating bottlenecks and dependencies in their CI/CD pipeline. 

Some points to consider: 

  • Ideally, a testing platform will be able to quickly generate synthetic data that mirrors real-world data.  
  • Test data generated will be usable across various tests (e.g., functional and performance) and can be reused for future tests.  
  • Synthetic data generation allows teams to be agile and save time and resources by focusing on the test itself—rather than waste resources generating test data.   
  • Testers will be able to work with comprehensive test data with desired variety to achieve better and more robust tests.  
  • Synthetic data generation eliminates PII concerns. 
  • A testing platform ensures that the test data that drives the test is consistent with data in the test environments and external services. 

The key to choosing the right source for your synthetic test data is to adopt a platform that allows you to produce synthetic data to your exact specifications on demand, but also synchronizes the data across tests, environments, and external or mock services. Synchronized data can be reused after your initial test is complete because it resets to its original format and remains referentially intact. For instance, names, addresses, and credit card numbers from the synthetic data set will reset to their original form and will be ready to use in subsequent tests. This process is very cost-effective and saves time since you only have to generate data once for use across multiple tests. With synchronized, synthetic data at your fingertips, testers can eliminate the biggest roadblock to effective testing. 
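
One lightweight way to produce such data, shown purely as an illustration rather than as any particular testing platform’s feature, is the open source Faker library. Seeding the generator makes the same synthetic data set reproducible across test runs, loosely mirroring the reset behavior described above.

```python
from faker import Faker  # third-party "Faker" package, assumed to be installed

fake = Faker()
Faker.seed(42)  # seeding makes the generated data set reproducible across runs

def synthetic_customer() -> dict:
    """Build one synthetic customer record containing no real PII."""
    return {
        "name": fake.name(),
        "address": fake.address().replace("\n", ", "),
        "email": fake.email(),
        "credit_card": fake.credit_card_number(),
    }

# A small, reusable test data set.
test_data = [synthetic_customer() for _ in range(5)]
for row in test_data:
    print(row)
```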

Virtual Devices  

Problem: Teams want to release high-quality applications more quickly  

While there is no replacement for testing on real devices—particularly during later stage functional and UI tests—testing on simulators and emulators in the early stages of development is an efficient and cost-effective way to speed up the mobile application testing process. Testing on virtual devices earlier in the development lifecycle allows testers to locate glitches and bugs sooner. Furthermore, utilizing virtual devices allows testers access to a broader range of devices as well as access to devices that might otherwise be reserved by another member of your organization.  

Solution: Virtual devices to augment your comprehensive real device lab  

Investing in virtual devices to augment your real device lab is a smart move for testing teams looking to create high-quality mobile apps faster.  

Virtual devices are well-suited for unit testing because simulators and emulators provide quick and relevant feedback in the early stages of development. In addition, a combination of real and virtual devices can perform integration testing, including performance and accessibility testing, quickly and efficiently.  

By testing on a combination of real and virtual devices utilizing the services of a supported virtual device lab—in tandem with your comprehensive real device lab—testing teams can test efficiently at all stages of the software development lifecycle. 

Bottom Line

When it comes to creating high-quality applications that compete in a global marketplace, testing teams must find ways to eliminate common environment concerns that stand in the way. Mock services allow teams to bridge the gap between traditional mocks and stubs, which are limited, and legacy service virtualization, which creates barriers to shifting left, so teams can become more agile. When combined with on-demand synthetic test data and complete with synchronization, testers will have the tools and data needed to perform tests throughout the SDLC. Finally, supplementing your real devices with virtual devices allows teams to speed up their testing process and test early and often.  

To learn more about eliminating barriers to application quality, request a free trial at Perfecto.io.

Atlassian to ‘Unleash’ Agile, DevOps best practices at new event (January 9, 2023)
https://sdtimes.com/software-development/atlassian-to-unleash-agile-devops-best-practices-at-new-event/

Struggling with Agile and DevOps implementations? Wondering what the best practices for success are?

Join Atlassian on Feb. 9 for a live (in Berlin, Germany) and virtual event called Unleash, at which the company’s customers will describe how they achieved greater efficiency and faster time to software delivery.

According to Megan Cook, head of product, Agile and DevOps, at Atlassian, the event will “flip typical conference formatting on its head” by showcasing those customers that have “optimized their workflow with innovative toolchain solutions, and collaborated from discovery to delivery to build some of the most successful brands and businesses in the world.”

Attendees at Unleash will have the opportunity to engage with Atlassian product leaders such as Cook; Joff Redfern, Atlassian chief product officer; and Justine Davis, head of marketing, Agile and DevOps. In the keynote, they will highlight software development best practices, announce a new Atlassian product, and share the first look at new feature innovations across Jira Software, Jira Work Management, Atlas, and Compass.

That keynote, titled “Level up to multiplayer mode,” will describe how Atlassian connects every member of software teams, with new ways to track insights and ideas in the discovery phase, tighten security during the delivery phase, and manage projects more efficiently using a few “cheat codes” added to Jira Software. “It’s time to level up and enter a new era of multiplayer, multi-phase software development,” Cook said.

“This event really puts customers at the center,” Cook told SD Times. “Not only will we showcase some amazing customer stories in the keynote, but they’ll also present their unique use cases and Atlassian stories throughout the event. Attendees will be the first to learn about the new product we’re launching at the event, and will engage with Atlassian product and company leaders on the event floor. It’s not your average tech conference.”

Unleash will also feature an exhibit hall where Atlassian customers will showcase their workflows and toolchains. Virtual attendees will be able to watch the demos on demand.

The day will conclude with the finale of the first-ever “Devs Unleashed” hackathon, with the finalists showing their projects to a celebrity panel and $93,500 in cash prizes at stake. Registration for the hackathon remains open until Jan. 15.

There is no charge to attend Unleash.

Platform Engineering is Not New and DevOps is Not Dead (December 21, 2022)
https://sdtimes.com/devops/platform-engineering-is-not-new-and-devops-is-not-dead/

DevOps is dead! Long live platform engineering!

Here we go again: another technology hype cycle hailing The New Big Thing, and how The Old Big Thing is dead. But as someone who still believes in DevOps (despite observing many sub-optimal initiatives) and as someone who really does believe in modern platform engineering, I’d like to pick apart the topic in a bit more detail and with a bit of history from my time in this space. All histories are imperfect, but some are useful.

A Brief History of Platform Engineering

Building digital platforms as a way to deliver software at scale is not a recent invention, and it predates the emergence of the DevOps movement in the late 2000s. Many large tech companies whose primary business was building software realized decades ago that they could enable developer teams to build, ship, and operate applications more quickly and with higher quality by standardizing infrastructure, building self-service interfaces, providing increasingly higher-level abstractions focused on solving developer problems, and dedicating a team to maintaining all of this as a platform.

However, these teams had to build and operate those platforms entirely from scratch, which required a pool of technically sophisticated (and well-compensated) engineers with operations skills, executive support, and organizational focus, and who were relatively unencumbered with legacy IT, at least compared to your average enterprise company. The tools we take for granted these days around infrastructure as code didn’t exist and, as an industry, we hadn’t yet experienced the transformative impact of public cloud platforms.

In the early 2000s, Amazon created a shared internal IT platform to handle what was described as “undifferentiated heavy-lifting” so developers could better focus on shipping value to customers. A couple of years later, this became available to users outside of Amazon, and rapidly grew from a web application platform to providing infrastructure as a service, transforming our entire industry. Around 2003 or 2004, Google built a dedicated SRE organization and began work on Borg, but the company didn’t make these initiatives known publicly until it published a whitepaper on Borg in 2015 and a book on SRE a year or two later.

Cross-pollination of employees across modern big tech companies meant that these ideas and approaches started to spread – because they worked.

The Start of DevOps

The inefficiencies in many large enterprise development organizations and the problems posed to development teams by increasingly complex infrastructure had been apparent for a while. Attempts to solve this included self-service access to “golden image” virtual machines, self-service software catalogs, and the first few forays into PaaS, often with decent levels of success for new greenfield applications, but much less so for legacy and commercial off-the-shelf applications. It was a significant challenge to enforce mandates across varied internal landscapes (or at least, more varied than what Big Tech companies had), particularly given the regulatory burden under which many of them operated, with decades of relatively heavyweight and manual processes in place.

In parallel with much of this, but in somewhat different environments, the whole DevOps movement began to coalesce in the late 2000s, with one of the significant early moments being John Allspaw and Paul Hammond’s 2009 VelocityConf talk “10+ Deploys Per Day: Dev and Ops Cooperation at Flickr” in which they emphasized the importance of communication, collaboration, and alignment of incentives between operations and development. An entire community emerged to investigate and advance these ideas, and an explosion of interesting open source projects occurred.

Most of this work happened out in the open, and these ideas were adopted by a wide variety of organizations. DevOps was influenced by and borrowed from many prior movements and frameworks, including Agile, Lean Manufacturing, The Toyota Way, and concepts from psychology, cognitive science, organizational dynamics, and industrial safety. This is why I’ve always thought that DevOps is best defined as a loose collection of evolving practices and processes that take context into account.

Scaling DevOps is Difficult

As we’ve tracked in the Puppet State of DevOps Reports for a number of years, while many companies have been successful in implementing DevOps principles and practices, a significant proportion of enterprises have become “stuck in the middle,” with decent success at the individual team level, but not consistently across the whole organization.

In 2018, we first identified that DevOps success within the enterprise required significant standardization on the way to providing self-service, as part of our five-stage evolutionary model.

The Rise of Platform Engineering

Platform engineering is not new, but it wasn’t a particularly accessible concept if you hadn’t experienced it for yourself. In 2019, a few of the major analyst firms began to identify it as a trend, and Manuel Pais and Matthew Skelton published Team Topologies – an in-depth examination of the topic based upon their extensive experience doing IT consulting and observing patterns that worked in practice. They not only mapped out the organizational structure, but provided prescriptive advice on organizational dynamics – which, in my experience, is the major stumbling block for more traditional organizations.

Platform Engineering and DevOps are Aligned

Given how many large companies have struggled to experience the benefits of DevOps across their organizations, and that this more prescriptive movement of modern platform engineering is proving to deliver value quickly, some have argued that “DevOps is dead” and that modern platform engineering has supplanted it.

This just isn’t true, and we do ourselves a disservice as an industry if we perpetuate it. DevOps has always borrowed ideas from other movements, and platform engineering is just another one to add to the list for organizations of a certain scale and complexity. If you’re working in a small company with a handful of developers (some of whom are more inclined to the operational side than others), there may be no need for you to take the platform approach, and the principles around DevOps are still a great guiding function.

DevOps is about using automation as a tool to align incentives and increase collaboration across all of the teams involved in the software delivery lifecycle in order to deliver software better, more quickly, and with less stress. Modern platform engineering has taken the already existing platform approach and added an explicit focus on treating the platform as a product rather than as a project, as well as clear guidance for where teams should interact via collaboration, and where they should interact via self-service interfaces.

Like DevOps, platform engineering makes heavy use of automation, focuses on collaboration, requires empathy across organizational functions, and keeps people rather than technology front and center. It’s perfectly aligned with DevOps, and is proving to be a viable way for many enterprises to do DevOps at scale, in highly complex and varied environments.

DevOps isn’t dead. It’s just evolving.

Get the full scoop on how platform engineering enables DevOps at scale in Puppet’s forthcoming State of DevOps Report: Platform Engineering Edition. Sign up now to get the report.

Improve Business Resilience and Customer Happiness with Quality Engineering (November 8, 2022)
https://sdtimes.com/testing/improve-business-resilience-and-customer-happiness-with-quality-engineering/

Today’s global markets are rapidly evolving, with continual shifts in customer needs and preferences across both B2B and B2C industries. It’s becoming increasingly difficult to deliver innovative, high-quality product experiences that retain customers — which ultimately limits the ability for companies to remain competitive.

Many companies focus on quickly launching features to attract new customers, but it’s product quality that has the greatest impact on the customer experience. That’s because delivering features too fast without adequate testing introduces bugs, leading to a frustrating customer experience.

The question is: how can your organization balance innovation and quality to keep existing customers happy? DevOps and quality engineering allow development teams to introduce new features faster with much more confidence. This is the key to improving customer happiness, and in turn, increasing business resilience in the long run.

The Impact of User Experience on Customer Retention

Companies spend enormous amounts of resources on building a brand that attracts new customers, but a poor user experience can destroy any loyalty in a matter of minutes. In fact, 76% of consumers have said it’s now easier than ever to choose another brand after a subpar experience. A frustrating product issue encourages many customers to look to a competitor that might make them feel more valued through a stronger user experience. 

While marketing teams focus on positive customer experiences to drive sales, the responsibility for customer satisfaction largely shifts to the product team after the purchase. That’s because key contributors to poor user experiences are bugs and other product defects that impact usability. The product team, therefore, can directly improve the quality of a user experience by reducing the number of customer-facing product issues.

In B2C markets, consumers know that they can easily turn to a similar product from a competitor, so they expect a very high-quality and innovative experience to stick around. And these consumer expectations are creeping into B2B markets as well. That means product quality plays a fundamental role in building a positive customer experience that retains both B2C and B2B customers.

More Testing Leads to Higher Customer Satisfaction

We already discussed how software testing supports customer happiness during transition phases — such as DevOps adoption — but a quality engineering strategy is crucial to the long-term growth of a business as well. Since quality engineers are responsible for quality throughout the entire user journey, they’re also critical to maintaining a competitive customer experience.

The most straightforward way to improve quality is to increase testing throughout the development process. This might sound expensive and time consuming, but testing early and often can actually minimize the effort to fix bugs. Through automated and AI-augmented testing tools, quality engineers can more easily contribute to delivering a market-leading product that stands out from the competition.

In short, quality engineering is an essential link between development teams and customers. By investing in automated software testing, companies can make a direct impact on customer satisfaction and customer retention without slowing down new product releases. 

Customer Happiness Builds Business Resilience

Most companies recognize that faster release cycles enable development teams to bring new features to market faster, which allows them to attract new customers with innovation during growth periods. But market contractions reveal the true resilience of a business — and a key measure of this is customer retention.

For most businesses, returning customers generate the most revenue because customer acquisition costs continue to rise for both B2C and B2B markets. The ability to improve quality through automated software testing, therefore, can have a greater impact on revenue than delivering new features for some companies.

Continuously improving quality throughout the user experience means existing customers are more likely to remain customers, even during market contractions. That means increasing customer happiness is the key to building business resilience and remaining competitive despite shifts in consumer expectations and market conditions. 

By investing in software testing as part of a quality engineering strategy, companies are really investing in their existing customers. This is the key to growing a competitive and resilient business in today’s loyalty-driven world.

Content provided by Mabl

Using Data to Sustain a Quality Engineering Transformation (November 3, 2022)
https://sdtimes.com/test/using-data-to-sustain-a-quality-engineering-transformation/

DevOps and quality engineering enable better development practices and improve business resiliency, but many teams struggle to sustain this transformation outside of an initial proof of concept. One of the key challenges with scaling DevOps and quality engineering is determining how software testing fits into an overall business strategy.

By leveraging automated testing tools that collect valuable data, organizations can create shared goals across teams that foster a DevOps culture and drive the business forward. Testing data also helps tie quality engineering to customer experiences, leading to better business outcomes in the long run.

Creating Shared Data-Driven Goals

Collaborative testing is essential for scaling DevOps sustainably because it encourages developers to have shared responsibility over software quality. Setting unified goals backed by in-depth testing data can help every team involved with a software project take ownership over its quality. This collaborative approach helps break down the silos that have traditionally prevented organizations from scaling DevOps across teams.

More specifically, testing data and trend reports that can be easily shared across teams make it easier for organizations to maintain focus on the same core goals. Sharing this testing knowledge better aligns testing and development so that quality goals are considered throughout every stage of the software development lifecycle (SDLC). 

When software-related insights can move seamlessly between developers, testers, and product owners, organizations can deliver a higher quality product faster than before. This reinforces the benefits of sharing responsibility for software quality and helps get more teams on board with DevOps and quality engineering throughout the organization.

In short, tracking testing data is crucial for setting goals that scale DevOps adoption across multiple teams and throughout the SDLC. Intelligent reporting and test maintenance also help quality engineering teams implement quality improvements that directly impact DevOps transformation and business outcomes.

Tying Quality Engineering to Customer Experiences

Sharing data and goals can help encourage developer participation with quality engineering efforts, but tying quality to customer outcomes can encourage investment in software quality from the broader organization. The key is using testing data to adapt quality engineering to new features and customer use patterns.

In our previous article, we discussed how quality engineering connects development teams to customers. A quality-centric approach can help retain customers and lead to a more resilient business over time because a poor user experience encourages them to consider a competitor’s product. 

For example, tracking data from quality testing can reveal a decline in application performance before it’s noticeable to users. These types of changes can build up over time and be difficult to detect without data analysis. By sharing these data insights with the development team, however, the issue can be resolved before it leads to a poor customer experience. This means testing data forms an essential link between code and customers.

Actionable insights from testing data can drive a quality engineering strategy that makes a lasting improvement to customer experiences. And this leads to positive business results that encourage larger investments in software quality throughout the organization. Using data to tie software quality to customer experiences, therefore, endorses the role of quality engineering as a key part of DevOps adoption.

Sustainable Quality Engineering and DevOps

As organizations struggle to build sustainable DevOps practices, they should consider how they can leverage the quality engineering team as an enabler. Quality engineering teams have an enormous amount of testing data that can help development teams improve their processes for delivering high-quality software much faster.

However, testing data is only useful if it can be easily shared with the right stakeholders, whether it’s developers or product managers. This requires collaborative testing tools that integrate throughout the SDLC and empower teams to access data that improves their workflows related to software delivery.

In short, testing data can transform a small-scale adoption of DevOps practices into an organization-wide culture of quality. Data-driven collaboration helps align code to customers through shared goals and insights. Over time, this leads to stronger customer experiences and greater business resilience.

Content provided by Mabl


Cloud-native success requires API security (November 3, 2022)
https://sdtimes.com/api/cloud-native-success-requires-api-security/

The complexity of modern cloud-native applications, which often leverage microservices, containers, APIs, infrastructure-as-code and more to enable speed in app development and deployment, can create security headaches for organizations that fail to put practices in place to mitigate vulnerabilities.

With dependencies on databases and third-party APIs, and sensitive information and secrets such as certificates and passwords exposed, organizations need to have a mechanism to track and catalog all the APIs used in their environment. They need visibility into all the inbound and outbound traffic, most importantly, to ensure the mutual communication channels are kept safe and that APIs are properly authenticated. 

Proper upfront design and planning of APIs is crucial to help ensure any event-driven APIs are secured and that there is proper handling of all secrets and sensitive data that gets transmitted in the process.

To begin to properly secure cloud-native applications, it is necessary to have a full understanding of the interfaces that are being exposed, Kimm Yeo, who works in application security at Synopsys, wrote in a recent blog post. “Organizations with internally developed cloud-native applications faced a variety of security incidents in recent years, with the leading causes being insecure use of APIs, vulnerable source codes and compromised account credentials,” she wrote.

It is the expanded use of APIs in today’s applications that creates the biggest security challenges. In a report, Gartner found that APIs account for 90% of a web application’s attack surface area, and predicted that in 2022 APIs would be the most frequent attack vector. 

“Effective API security can’t be done by merely protecting and blocking vulnerable APIs with some web firewalls and monitoring tools,” Yeo wrote. “API-based apps need to be treated and managed as a complete development life cycle of their own. Just as the software app development life cycle goes through upfront planning and design, so must the API life cycle. There needs to be proper API design with API policies built into an organization’s overall business risk and continuity program.”

Yeo points out that traditional application security scanning tools were not designed for cloud-native applications, and lack visibility into modern application development and deployment architectures. This is because, she wrote, “most API and serverless function calls are event-driven triggers…” 

In her blog, Yeo states that organizations need to view and treat APIs holistically, as a life cycle development and deployment framework of their own – much as they look at application development as a life cycle. This would entail up-front design and planning, as well as policies around API management to ensure vulnerabilities are kept to a minimum.

Further, she encourages organizations to do risk assessments of all API-based applications, with the goal of focusing on those apps with the highest risk factors. She wrote that effective API security practices require continuous testing to verify vulnerable APIs during application tests at runtime compilation with third-party components.

Beyond all that, the use of modern scanning tools and techniques can further ensure that vulnerabilities are addressed (or the risk mitigated) before the apps are deployed. SCA, SAST, and DAST tools – which have been more commonly used as app security test practices – and now, more frequently, IAST tools can provide insight into where those security holes are, so they can be fixed before the application is released, when it is less expensive to remediate and can do less damage to the organization’s business and reputation.

“This,” Yeo wrote, “is the key essence of effective API security strategy in my opinion. An organization needs the ability to quickly identify and proactively test and remediate the apps with highest risk (as defined by its security policies and API risk classifications) before they go into production release. An API risk classification system can use criteria such as the application’s exposure (internal- or external-facing apps), the types of information it handles (such as PII/PCI-DSS payment related), the record size that the app manages (which can get into thousands and millions), and the cost of data breaches, disaster recovery, and business continuity impact.”
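
As a simple sketch of how such a classification might be expressed in code, the function below scores an API-based application against the kinds of criteria Yeo describes. The weights and thresholds are illustrative assumptions, not part of any published methodology.

```python
def api_risk_level(external_facing: bool, handles_pii_or_payment: bool,
                   record_count: int, breach_cost_estimate: float) -> str:
    """Toy risk classification; weights and thresholds are illustrative only."""
    score = 3 if external_facing else 1
    score += 4 if handles_pii_or_payment else 0
    score += 2 if record_count > 1_000_000 else (1 if record_count > 10_000 else 0)
    score += 3 if breach_cost_estimate > 1_000_000 else 1
    if score >= 9:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

# Example: an external payment API holding millions of records rates as high risk.
print(api_risk_level(True, True, 5_000_000, 2_500_000))  # -> high
```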

Content provided by SD Times and Synopsys.

