DevOps Archives - SD Times
https://sdtimes.com/tag/devops/ (Software Development News)

Report: Adoption of DevOps practices increasing, while code velocity remains the same
https://sdtimes.com/devops/report-adoption-of-devops-practices-increasing-while-code-velocity-remains-the-same/ (Mon, 08 May 2023)

The post Report: Adoption of DevOps practices increasing, while code velocity remains the same appeared first on SD Times.

According to the latest State of Continuous Delivery report from the Continuous Delivery Foundation (CDF), the adoption of DevOps is continuing to increase, with 84% of developers participating in DevOps activities in the first quarter of the year.

However, the report also found that code velocity has remained steady for the past two years, with about 15% of teams being considered top performers, meaning they have lead times of less than one day.

The CDF believes that while DevOps may be a help, it is likely the increasing complexity of projects that is slowing things down. 

Another finding in the report is that despite the increase in DevOps adoption, there hasn’t been an increase in the number of DevOps-related tools over the last year. The average number of tools sits at 4.5 currently. 

However, there is still a strong correlation between the number of tools in place and how likely a team is to be a top performer. These top performers were measured by three metrics: lead time for code changes, deployment frequency, and time to restore service.
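As a rough illustration of how those three metrics might feed a top-performer classification (the under-one-day lead-time cutoff comes from the report; the deployment-frequency and restore-time thresholds below are invented for the sketch, not the CDF's definitions):

```python
# Hypothetical classifier over the three metrics the report names.
# Only the <24h lead-time threshold is taken from the article; the
# other two cutoffs are illustrative assumptions.

def is_top_performer(lead_time_hours: float,
                     deploys_per_week: float,
                     restore_time_hours: float) -> bool:
    return (lead_time_hours < 24          # lead time for code changes under one day
            and deploys_per_week >= 7     # assumed: at least daily deployments
            and restore_time_hours < 24)  # assumed: service restored within a day

teams = {
    "payments": (6, 14, 2),
    "search":   (72, 2, 30),
}
top = [name for name, metrics in teams.items() if is_top_performer(*metrics)]
```

Here only the "payments" team clears all three bars, which mirrors the report's finding that a small minority of teams qualify as top performers.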

The report also found that, in general, adding CI/CD tools may improve performance, but interoperability concerns arise when multiple tools are used together. 

“We note that the proportion of top performers remains flat while that of low performers increases dramatically, with an increasing number of self-hosted CI/CD tools used. This suggests that there is a diminishing return from increasing the number of CI/CD tools a developer uses. The usage of an increasing number of tools may also be a response to increased complexity, which is having negative impacts on the performance of these developers. Similarly, the integration of multiple tools may not be optimally implemented, leading to function overlap that is impacting performance,” the report states. 

The report also shows a correlation between speed and stability metrics. 30% of the highest performers in code change lead time were also the highest performers when it came to service restoration. 

Interest in security is also clear from the survey: 37% of developers reported testing applications for security, making it the second most popular DevOps-related activity that teams engage in. 

“Developers who perform build-time security checks in an automated and continuous fashion are the most likely to be top performers, and the least likely to be low performers, across all three metrics, of the types shown,” the report states. 

The report was conducted in partnership with SlashData, surveying over 125,000 respondents. It was released during the Linux Foundation’s Open Source Summit, happening this week in Vancouver, BC. At the event, the CDF also announced the addition of four new members: F5 NGINX, Prodvana, Salesforce, and Testkube.

Digital.ai’s AI powered DevOps platform allows developers to build and deliver code more intelligently
https://sdtimes.com/ai/digital-ais-ai-powered-devops-platform-allows-developers-to-build-and-deliver-code-more-intelligently/ (Wed, 26 Apr 2023)

The post Digital.ai’s AI powered DevOps platform allows developers to build and deliver code more intelligently appeared first on SD Times.

Digital transformation company Digital.ai today announced the release of Corbett, the most recent update to its AI-powered DevOps platform. Corbett is geared at helping organizations deliver applications with better user experiences while also enhancing the productivity of development teams.

With Corbett, teams gain access to intelligence features that let them leverage AI to deliver improved software. Digital.ai now integrates and centralizes data from more sources, from development through to production, so users can avoid fragmented reports and analytics and use AI to predict possible outcomes based on past results.

This release brings new persona-based dashboards that can be used to analyze data at every stage of application development and delivery, while also showing performance indicators for team and process efficiencies as well as risk reduction. 

The company stated that this expands on the existing predictive intelligence capabilities that apply machine learning to help teams assess alternatives, manage tradeoffs, and make choices in a more predictive way.

“The release of Corbett underscores our commitment to provide an open DevOps platform built for the enterprise and to deliver targeted solutions that support the complexity and scale of the world’s largest organizations,” said Derek Holt, CEO of Digital.ai. “With the Corbett release we take another big step forward as we deliver all new versions of our market-leading offerings and dramatically enhance the role of data, ML and AI within software development and delivery. As the pace of AI-based innovation continues to accelerate we are excited to harness the power of AI to unlock value not just for individuals, but for teams and organizations.”

Among the key enhancements included in Corbett is an improved Jailbreak Bypass Detection so that users can automatically detect the newest bypasses. This works to frustrate attackers and prevent jailbreaks.

Furthermore, teams get Release Manager and Platform Engineering Intelligence which works to cut back on risk by analyzing release dependencies and success trends to improve upon the identification of why releases might fail.

Lastly, Corbett offers the ability to integrate security checks into release pipelines for better governance. This lets organizations automatically check whether or not a list of protections has been applied to a mobile app by utilizing a new integration with OPA.
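As a rough sketch of what such a governance gate does, the check below verifies a list of protections against what a build actually applied. The protection names are invented for illustration, and in practice the rule would be expressed as an OPA Rego policy that the release pipeline queries, not hand-rolled Python:

```python
# Hypothetical release gate: block a mobile-app release unless every
# required protection has been applied. Names are placeholders, not
# Digital.ai's or OPA's actual identifiers.

REQUIRED = {"jailbreak-detection", "code-obfuscation", "certificate-pinning"}

def release_allowed(applied_protections: set[str]) -> tuple[bool, set[str]]:
    """Return (allowed, missing protections) for a governance check."""
    missing = REQUIRED - applied_protections
    return (not missing, missing)

ok, missing = release_allowed({"jailbreak-detection", "code-obfuscation"})
```

The pipeline would fail the release and report the missing protections, giving teams an automated, auditable check instead of a manual sign-off.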

To learn more, read the blog post.

Platform engineering brings consistency to tools, processes under one umbrella
https://sdtimes.com/software-development/platform-engineering-brings-consistency-to-tools-processes-under-one-umbrella/ (Thu, 09 Mar 2023)

The post Platform engineering brings consistency to tools, processes under one umbrella appeared first on SD Times.

When creating a platform engineering team, an important first step is the interview process. What do developers want and need? What works, and what doesn’t? 

Sounds like what companies do when reaching out to customers about new rollouts, right? Well, it is, when you consider your development team as being customers of the platform.

“Treat your developers, treat your DevOps teams, as your own internal customer and interview them,” urged Bill Manning, Solution Engineering Manager at JFrog, which offers a Software Supply Chain platform to speed the secure delivery of new applications and features. Once you’ve listened to the developers, Manning went on, you can roll their feedback into defining your platform engineering approach, which helps organizations find ways to be more efficient, and to create more value by streamlining development. 

The reason platform engineering is becoming increasingly important is that over time the process of designing and delivering software has become more complex, requiring a number of different tools and customizations, according to Sean Pratt, product marketing manager at JFrog. “When that happens,” he said, “You lack repeatable processes that can be tracked and measured over time.” 

Standardization and intelligent consolidation of tool sets, which can reduce the time, effort and cost needed to manage the sprawl many organizations face, is but one of the core tenets of platform engineering that JFrog talks about. Among the others are reduction of cognitive load, reduction of repetitive tasks through automation, reusable components and tools, repeatable processes, and the idea of developer self-service.

Organizations using DevOps practices have seen the benefits of bringing developers and operations together, to get new features released faster through the implementation of smaller cycles, microservices, GitOps and the cloud. The downside? Coders have now found themselves smack-dab in the middle of operations. 

“The complexity [of software] has increased, and even though the tool sets in a way were supposed to simplify, they’ve actually increased it,” Manning said. “A lot of developers are suffering from cognitive overload, saying, ‘Look, I’m a coder. I signed up to build stuff.’ Now they have to go in and figure out how they are going to deploy [and] what is going to be running inside the container. These are things a lot of developers didn’t sign up for.”

Platform engineering has grown out of the need to address the burden organizations have placed on their development teams. As more practices developers are unfamiliar with shift left, today’s developers are expected to do far more than just design elegant applications.

This all takes a toll on developers. Automating things like Terraform to provision infrastructure, or Helm charts for Kubernetes, for example, frees up developers to do what they do best – innovate and create new features at the pace the business needs to achieve. A developer would rather get a notification that a particular task is done rather than having to dive in and do it manually. 
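The "notify me when it's done" pattern described above can be pictured as a small orchestration loop: the platform runs the provisioning steps (a Terraform apply, a Helm install, and so on) and the developer only sees the completion message. The step names and notification hook below are hypothetical placeholders, not any real platform's API:

```python
# Illustrative sketch: the platform runs each provisioning step in order
# (in practice these would shell out to tools like terraform or helm),
# then notifies the developer instead of making them drive each tool.

def run_pipeline(steps, notify):
    for _name, step in steps:
        step()  # placeholder for invoking the real provisioning tool
    notify(f"environment ready: {len(steps)} steps completed")

messages = []
run_pipeline(
    steps=[("terraform", lambda: None), ("helm", lambda: None)],
    notify=messages.append,  # stand-in for Slack/email/webhook delivery
)
```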

While platform engineering can help standardize on tools, organizations still want to offer developers flexibility. “In a microservice world, for example, certain teams might need to use certain tools to get their job done. One might need to use Java with Jenkins for one project, while another team uses Rust with JFrog Pipelines to execute another project,” Pratt said. “So there’s a need for a solution that can bring all those pieces together under one umbrella, which is something JFrog does to help put consistent processes and practices in place across teams.” 

To be sure, a mentality shift is required for successful platform engineering. As Manning put it: “You know what, maybe we don’t need 25 tools. Maybe we can get away with five. And we might have to make some compromises, but that’s okay. Because the thing is, it’s actually beneficial in the long term.” Regardless of how many tools you settle on, Manning had a final piece of advice: “Think about how you bring them all together; that’s where universal and integrated platforms can help connect the disparate tools you need.”  

Content provided by SD Times and JFrog.

Tackling today’s software supply chain issues with DevOps-centric security
https://sdtimes.com/security/tackling-todays-software-supply-chain-issues-with-devops-centric-security/ (Fri, 27 Jan 2023)

The post Tackling today’s software supply chain issues with DevOps-centric security appeared first on SD Times.

Developers, and the software they develop, are the most popular attack vector for today’s hackers and bad actors. The many development tools and processes, not to mention thousands of open-source libraries and binaries, all introduce opportunities for malicious or even accidental injection of risk across the entire software supply chain.  In response to this expanding threat landscape, developers, security leaders, and operations teams are struggling to find a more effective way to secure their software ecosystem.

Increasingly, organizations are adopting DevSecOps, which focuses on “shift left” security, the idea of introducing security practices earlier in the software development life cycle. Practically speaking, however, DevSecOps is more of an overall strategy or approach, rather than a concrete set of responsibilities assigned to a specific group or individual.  DevSecOps  is best used to define how an organization addresses product security, or establish a cultural and technical “shift left” within the integrated development environment. It can also provide an organizational framework to address security efforts between compliance, security and development teams.

The reality, however, is that while both security and development teams are committed to fortifying the business, collaboration between the two groups can be challenging.  A company’s security teams are tasked to do whatever it takes to secure the business, while developers prefer to write quality code instead of spending their day fixing vulnerabilities.

It is the DevOps team that in fact owns the specific responsibilities, tasks and budget needed to secure the software supply chain.

Defining DevOps-Centric security

As the name implies, DevOps teams manage the operational side of software development and are responsible for each step of the software development life cycle (SDLC).  While security teams set policies and development teams write code, DevOps teams manage the SDLC workflow. They are the actual owners of the software supply chain.

DevOps teams are also the logical owners for software supply chain security.  DevOps teams have the resources, skills and accountability to identify and address security issues across the entire DevOps workflow, from development to runtime to deployment. DevOps teams are involved in every step of the software development process, so they’re ideally suited to serve as a bridge between security teams, responsible for compliance and business requirements, and development teams, which can get overwhelmed with security requests, processes and regulations that are not their core competency.

DevOps-centric security delivers an end-to-end view of an organization’s software supply chain and flags a multitude of vulnerabilities and weaknesses such as CVEs, configuration issues, secrets exposure, and infrastructure-as-code violations. It also suggests remediation strategies at each stage of the software development life cycle, from code to container, to device.
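To make one of those weakness classes concrete, here is a deliberately minimal secrets-exposure check: scan text for patterns that look like hard-coded credentials. Production scanners use far richer rule sets plus entropy analysis; the two patterns below are just illustrative:

```python
import re

# Toy secrets detector, illustrating the "secrets exposure" category above.
# Real tools maintain hundreds of rules; these two are examples only.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access-key-id shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+"),  # inline password literal
]

def find_secrets(text: str) -> list[str]:
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

hits = find_secrets('db_password = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"')
```

A check like this would typically run in the pipeline on every commit, so a leaked credential is flagged before it ever reaches a build artifact.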

How does DevOps-Centric security work?

A DevOps-centric approach to security builds on the rigorous process and continuous, automated testing that’s the hallmark of all DevOps teams. More importantly, it guides organizations with a clear understanding of each vulnerability and suggests actions to efficiently fix the issues.

Focus on binaries as well as source code

The modern software supply chain has just one core asset that is delivered into production: the software binary, which takes many forms – from package, to container, to archive file.  Attackers are increasingly focusing on attacking binaries, as they contain more information than source code alone. By analyzing the binary as well as the source code, DevOps teams can provide a more complete picture of any impact or point of exploitation. This helps eliminate complexity and streamlines security detection, assessment, and remediation efforts.

Contextual analysis: Determining which vulnerabilities, weaknesses, and exposures need remediation and the most cost-effective way to do it

Serious vulnerabilities are being identified daily through the efforts of researchers and bug bounty programs.  Yet these CVEs may or may not be exploitable, depending on factors such as the application’s configurations, use of authentication mechanisms, and exposure of keys. DevOps-centric security looks at the context in which software is operating to prioritize and recommend how to remediate vulnerabilities quickly and effectively, without wasting developers’ time on non-applicable issues.  It’s particularly important to be able to scan and analyze containers for open-source vulnerabilities, since the use of containers to hide malicious code is now on the rise.
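The contextual-analysis idea above can be sketched as a ranking rule: the same CVE gets a different priority depending on whether the vulnerable code path is reachable and how the application is exposed. The field names and thresholds here are invented for illustration:

```python
# Hypothetical contextual prioritization: a CVE in an unreachable code path
# is deprioritized; one that is internet-exposed and pre-auth is escalated.

def priority(finding: dict) -> str:
    if not finding["code_path_reachable"]:
        return "low"        # present in a dependency but never executed
    if finding["internet_exposed"] and not finding["requires_auth"]:
        return "critical"   # exploitable pre-auth from the outside
    return "medium"

findings = [
    {"id": "CVE-A", "code_path_reachable": False,
     "internet_exposed": True, "requires_auth": False},
    {"id": "CVE-B", "code_path_reachable": True,
     "internet_exposed": True, "requires_auth": False},
]
ranked = {f["id"]: priority(f) for f in findings}
```

The point of the sketch is the shape of the decision, not the specific rules: context turns a flat CVE list into a ranked work queue so developers are not chasing non-applicable issues.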

Providing a holistic view of the software supply chain

Through their involvement in each step of the software development process, DevOps teams offer a holistic view of a company’s software supply chain and all its weaknesses. DevOps-centric security analyzes binaries, infrastructure, integrations, releases, and flows all in one place, eliminating the confusion of disparate security systems with varying or limited information and inconsistent reporting. Thus, when you implement security using DevOps processes, you not only scan to identify problems within the software, but also help developers prioritize and fix them quickly and easily.

Atlassian to ‘Unleash’ Agile, DevOps best practices at new event
https://sdtimes.com/software-development/atlassian-to-unleash-agile-devops-best-practices-at-new-event/ (Mon, 09 Jan 2023)

The post Atlassian to ‘Unleash’ Agile, DevOps best practices at new event appeared first on SD Times.

Struggling with Agile and DevOps implementations? Wondering what the best practices for success are?

Join Atlassian on Feb. 9 for a live (in Berlin, Germany) and virtual event called Unleash, at which the company’s customers will describe how they achieved greater efficiency and faster time to software delivery.

According to Megan Cook, head of product, Agile and DevOps, at Atlassian, the event will “flip typical conference formatting on its head” by showcasing those customers that have “optimized their workflow with innovative toolchain solutions, and collaborated from discovery to delivery to build some of the most successful brands and businesses in the world.”

Attendees at Unleash will have the opportunity to engage with Atlassian product leaders such as Cook; Joff Redfern, Atlassian chief product officer; and Justine Davis, head of marketing, Agile and DevOps. In the keynote, they will highlight software development best practices, announce a new Atlassian product, and share the first look at new feature innovations across Jira Software, Jira Work Management, Atlas, and Compass.

That keynote, titled “Level up to multiplayer mode,” will describe how Atlassian connects every member of software teams, with new ways to track insights and ideas in the discovery phase, tighten security during the delivery phase, and manage projects more efficiently using a few “cheat codes” added to Jira Software. “It’s time to level up and enter a new era of multiplayer, multi-phase software development,” Cook said.

“This event really puts customers at the center,” Cook told SD Times. “Not only will we showcase some amazing customer stories in the keynote, but they’ll also present their unique use cases and Atlassian stories throughout the event. Attendees will be the first to learn about the new product we’re launching at the event, and will engage with Atlassian product and company leaders on the event floor. It’s not your average tech conference.”

Unleash will also feature an exhibit hall where Atlassian customers will showcase their workflows and toolchains. Virtual attendees will be able to watch the demos on demand.

The day will conclude with the finale of the first-ever “Devs Unleashed” hackathon, with the finalists showing their projects to a celebrity panel and $93,500 in cash prizes at stake. Registration for the hackathon remains open until Jan. 15.

There is no charge to attend Unleash.

Copado launches new DevOps marketplace for plug-and-play integrations
https://sdtimes.com/devops/copado-launches-new-devops-marketplace-for-plug-and-play-integrations/ (Thu, 08 Dec 2022)

The post Copado launches new DevOps marketplace for plug-and-play integrations appeared first on SD Times.

The low-code DevOps company Copado is launching a new marketplace to help companies find pre-built solutions from itself, its partners, and the Copado community. These solutions can be used to extend the features of Copado’s DevOps platform for Salesforce.

Companies can benefit from the expertise of practitioners who have already solved DevOps challenges and are now sharing that knowledge. 

The DevOps Exchange is launching with over 40 listings, and more will be added. The company hopes that the marketplace will serve as a one-stop shop for customers who are looking to “accelerate their digital transformation journey.”

The company also explained that solutions within the cloud can help with even the most complex situations, such as end-to-end business processes that span multiple clouds. 

“The Copado DevOps Exchange can unlock an organization’s potential to automate anything in the software delivery lifecycle. The possibilities are endless,” said David Brooks, senior vice president of product strategy at Copado. 

Simon Whight, platform technical architect for Zen Internet, which uses Copado, added: “The main driver for us to work with Copado was that it allowed us to achieve mouse-click deployments. If anything requires a command line interface, I prefer it to sync with Copado to keep the technology barrier accessible at an admin level. With Copado’s DevOps Exchange, I’m excited to have access to a one-stop shop to find complementary DevOps products that are compatible with the Copado platform.”

Why using IaC alone is a half-baked infrastructure strategy
https://sdtimes.com/software-development/why-using-iac-alone-is-a-half-baked-infrastructure-strategy/ (Wed, 23 Nov 2022)

The post Why using IaC alone is a half-baked infrastructure strategy appeared first on SD Times.

The shift to a developer-centric vision of infrastructure that started about 15 years ago offered users frequent updates and a way to simplify API-centric automation. Infrastructure as Code (IaC) became the standard method for software developers to describe and deploy cloud infrastructure. While on the surface, having more freedom sounds like a nearly utopian scenario for developers, it has become a nightmare for operations teams who are now tasked with understanding and managing the infrastructure and the underpinning tools in the DevOps toolchain. As cloud infrastructure became commoditized, new limitations emerged alongside the broader adoption of IaC, limitations that can have negative impacts for the overall business.

If you think of application environments like a pizza (or in my case, a vegan pizza), IaC is just the unbaked dough, and the individual IaC files alone are simply flour, salt, yeast, water and so on. Without the other necessary components like the data, network topology, cloud services and environment services – the toppings, if you will – you don’t have a complete environment. Additionally, the need for proper governance, cost controls, and improved cross-team collaboration has become even more critical. 

While the needs of developers are application-centric, IaC is infrastructure-centric. There is a disconnect between the expectations of the development and operations teams that creates delays, security risks, and friction between those two teams. For IaC to be used effectively, securely and in a scalable manner, there are some challenges that need to be addressed.

Let’s discuss the top four challenges of IaC and how developer and DevOps teams can overcome these pain points and obstacles using Environments-as-a-Service (EaaS). 

Integrating IaC assets 

One of today’s central challenges is in generating a pipeline that provides a way to deploy infrastructure assets continuously and consistently. Many DevOps organizations are sitting on top of mountains of IaC files, and it’s a monumental task for these teams to understand, track and deploy the right infrastructure for the right use case. 

EaaS solves this problem by automating the process of discovering, identifying, and modeling infrastructure into complete, automated environments that include all the elements that the end user requires. 

Furthermore, EaaS solutions eliminate the application environment bottleneck and enable faster innovation at scale by defining elements in modular templates, otherwise known as “blueprints,” and help organizations manage the environments throughout the entire application life cycle. Existing IaC scripts can easily be imported and managed in an infrastructure stack, or users can choose to build “blueprints” from scratch. 
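A blueprint like the ones described above can be pictured as a declarative template that the EaaS engine expands into a full environment. The structure and merge logic below are a hypothetical sketch, not any specific vendor's schema:

```python
# Hypothetical blueprint: imported IaC modules plus the environment
# services and defaults that make the environment complete.
BLUEPRINT = {
    "name": "web-app-dev",
    "infrastructure": ["vpc", "k8s-cluster"],   # existing IaC modules
    "services": ["postgres", "redis"],          # environment services
    "defaults": {"ttl_hours": 8, "region": "us-east-1"},
}

def render(blueprint: dict, **overrides) -> dict:
    """Expand a blueprint into an environment spec, applying per-request overrides."""
    env = {**blueprint["defaults"], **overrides}
    env["components"] = blueprint["infrastructure"] + blueprint["services"]
    return env

env = render(BLUEPRINT, ttl_hours=2)  # e.g. a short-lived review environment
```

The key design point is that the blueprint bundles the IaC "dough" with its "toppings," so a request for an environment yields everything the end user requires rather than bare infrastructure.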

Distributing the right environments to the right developers

Using the wrong environment definitions in different stages of the SDLC is like using a chainsaw to slice your pizza; it won’t get the job done right and could create more problems. It’s crucial for developers to have access to properly configured environments for their use case, yet developers don’t necessarily have the expertise to configure them. In some cases they’re expected to anyway, or they attempt it because there aren’t enough people in their organization with the cloud infrastructure skills to do so in a timely manner. The result could be an environment that’s horribly misconfigured, like putting sauce on top of your pizza (sorry, Chicago) or, even worse, pineapple and ham (not sorry).

Organizations should distribute complete environments to their developers with “baked-in” components and customized policies and permissions. To accomplish this, most EaaS solutions have the ability to provide a self-service environment catalog that simplifies this process, while also dramatically reducing provisioning times. Operations teams can take advantage of role-based policies, so developers have access only to the environments that are appropriate for their use case, ensuring consistency throughout the pipeline.  Consumption of this service should be available via command line or API, so it can seamlessly integrate into your CI/CD pipeline.
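The role-based catalog described above amounts to a simple filter: each catalog entry declares which roles may consume it, and a developer only sees the entries their role permits. Catalog entries and role names below are invented for illustration:

```python
# Hypothetical self-service catalog with role-based visibility.
CATALOG = {
    "frontend-dev":  {"roles": {"frontend", "fullstack"}},
    "data-pipeline": {"roles": {"data"}},
    "prod-replica":  {"roles": {"sre"}},
}

def visible_environments(role: str) -> list[str]:
    """Environments a developer with this role may self-provision."""
    return sorted(name for name, spec in CATALOG.items()
                  if role in spec["roles"])

envs = visible_environments("frontend")
```

In a real platform the same lookup would sit behind the CLI or API call mentioned above, so the policy is enforced wherever the catalog is consumed, including inside the CI/CD pipeline.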

Managing the environment life cycle & controlling costs 

The orchestration of environments is only one piece of the pie. It has to be served, consumed, and then, of course, you have to clean up afterward. In addition to configuring and serving up the right environments for the developers to consume, EaaS allows for seamless enforcement of policy, compliance, and governance throughout the entire environment life cycle, providing information on how infrastructure is being used. During deployment, end users can set the environments for a specified runtime, automating teardown once resources are no longer required to ensure the leanest possible consumption of cloud resources. 
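The specified-runtime/auto-teardown behavior described above can be sketched as a reaper loop: every environment carries a TTL, and whatever has outlived it gets torn down. The environments and TTLs below are fabricated examples:

```python
from datetime import datetime, timedelta

# Hypothetical TTL reaper: each environment records when it was created
# and its allowed runtime in hours; expired ones are torn down.

def expired(environments: dict, now: datetime) -> list[str]:
    return sorted(name
                  for name, (created, ttl_hours) in environments.items()
                  if now >= created + timedelta(hours=ttl_hours))

t0 = datetime(2022, 11, 23, 9, 0)
envs = {"review-123": (t0, 8), "nightly-perf": (t0, 48)}
to_teardown = expired(envs, now=t0 + timedelta(hours=12))
```

Run on a schedule, a check like this is what keeps consumption lean: short-lived review environments disappear on their own instead of idling as forgotten cloud spend.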

We all know there’s no such thing as a free lunch, so understanding and managing cloud resource costs is a crucial element of the full environment life cycle and demonstrates the business value of a company’s infrastructure. By leveraging auto-tagging and custom-tagging capabilities, businesses can easily track how environments are deployed in a centralized way, providing complete operational transparency, and ensuring resources are being provisioned in line with an organization’s prescribed standards. Understanding the business context behind cloud resource consumption allows businesses to optimize costs and better align those expenses with specific projects, applications, or development teams.
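The tagging idea above boils down to rolling cloud spend up by a business tag so costs can be attributed to a project, application, or team. The cost records below are fabricated for the sketch:

```python
from collections import defaultdict

# Hypothetical cost rollup: sum spend per value of a chosen tag, with a
# bucket for untagged resources (often the first thing worth fixing).

def costs_by_tag(records: list[dict], tag: str) -> dict:
    totals = defaultdict(float)
    for record in records:
        totals[record["tags"].get(tag, "untagged")] += record["cost"]
    return dict(totals)

spend = costs_by_tag(
    [{"cost": 120.0, "tags": {"project": "checkout"}},
     {"cost": 80.0,  "tags": {"project": "checkout"}},
     {"cost": 40.0,  "tags": {}}],
    tag="project",
)
```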

Creating a reliable IaC infrastructure 

There are several critical steps to ensuring infrastructure reliability: committing IaC code to a source control repository, versioning it, running tests against it, packaging it, and deploying it in a testing environment – all before delivering it to production in a safe, secure, and repeatable manner. 

In maintaining a consistent and repeatable application architecture, the objective is to treat IaC like any application code. You can meet the changing needs of software development by creating a continuous IaC infrastructure pipeline that is interwoven with the software development and delivery process, leveraging best practices from software delivery, and transposing them to the infrastructure delivery process.
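"Treating IaC like application code" includes testing planned changes before they are applied. As a sketch, the check below scans a planned change set for resource types a policy forbids; the plan structure is a simplified stand-in, not Terraform's actual plan JSON schema, and the resource names are invented:

```python
# Hypothetical IaC policy test run in the pipeline against a planned
# change set, before anything is applied to a real environment.

FORBIDDEN_TYPES = {"aws_s3_bucket_public_access"}

def plan_violations(plan: list[dict]) -> list[str]:
    """Addresses of planned resources that violate policy."""
    return [change["address"] for change in plan
            if change["type"] in FORBIDDEN_TYPES]

plan = [
    {"address": "module.site.bucket",
     "type": "aws_s3_bucket_public_access", "action": "create"},
    {"address": "module.app.service",
     "type": "aws_ecs_service", "action": "update"},
]
bad = plan_violations(plan)
```

Failing the pipeline when `bad` is non-empty gives infrastructure changes the same gate that unit tests give application code.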

To ensure that your infrastructure is reliable, you must consider the larger picture. IaC has become ubiquitous and has certainly advanced infrastructure provisioning, but that’s where it ends. Organizations need to start thinking about not just configuring and provisioning infrastructure but managing the entire life cycle of complete environments to realize the true value of infrastructure. Just like you wouldn’t go to a pizza parlor and order a blob of raw dough, you wouldn’t serve your developers just the infrastructure – they need the complete environment.

Using EaaS, developers are able to achieve their project objectives, support the entire stack, integrate IaC assets, and deliver comprehensive environments needed to orchestrate the infrastructure life cycle. Buon appetito!

The post Why using IaC alone is a half-baked infrastructure strategy appeared first on SD Times.

Improve Business Resilience and Customer Happiness with Quality Engineering https://sdtimes.com/testing/improve-business-resilience-and-customer-happiness-with-quality-engineering/ Tue, 08 Nov 2022 20:03:31 +0000

Today’s global markets are rapidly evolving, with continual shifts in customer needs and preferences across both B2B and B2C industries. It’s becoming increasingly difficult to deliver innovative, high-quality product experiences that retain customers — which ultimately limits the ability for companies to remain competitive.

Many companies focus on quickly launching features to attract new customers, but it’s product quality that has the greatest impact on the customer experience. That’s because delivering features too fast without adequate testing introduces bugs, leading to a frustrating customer experience.

The question is: how can your organization balance innovation and quality to keep existing customers happy? DevOps and quality engineering allow development teams to introduce new features faster with much more confidence. This is the key to improving customer happiness, and in turn, increasing business resilience in the long run.

The Impact of User Experience on Customer Retention

Companies spend enormous amounts of resources on building a brand that attracts new customers, but a poor user experience can destroy any loyalty in a matter of minutes. In fact, 76% of consumers have said it’s now easier than ever to choose another brand after a subpar experience. A frustrating product issue encourages many customers to look to a competitor that might make them feel more valued through a stronger user experience. 

While marketing teams focus on positive customer experiences to drive sales, the responsibility for customer satisfaction largely shifts to the product team after the purchase. That’s because key contributors to poor user experiences are bugs and other product defects that impact usability. The product team, therefore, can directly improve the quality of a user experience by reducing the number of customer-facing product issues.

In B2C markets, consumers know that they can easily turn to a similar product from a competitor, so they expect a very high-quality and innovative experience to stick around. And these consumer expectations are creeping into B2B markets as well. That means product quality plays a fundamental role in building a positive customer experience that retains both B2C and B2B customers.

More Testing Leads to Higher Customer Satisfaction

We already discussed how software testing supports customer happiness during transition phases — such as DevOps adoption — but a quality engineering strategy is crucial to the long-term growth of a business as well. Since quality engineers are responsible for quality throughout the entire user journey, they’re also critical to maintaining a competitive customer experience.

The most straightforward way to improve quality is to increase testing throughout the development process. This might sound expensive and time-consuming, but testing early and often can actually minimize the effort needed to fix bugs. Through automated and AI-augmented testing tools, quality engineers can more easily contribute to delivering a market-leading product that stands out from the competition.

In short, quality engineering is an essential link between development teams and customers. By investing in automated software testing, companies can make a direct impact on customer satisfaction and customer retention without slowing down new product releases. 

Customer Happiness Builds Business Resilience

Most companies recognize that faster release cycles enable development teams to bring new features to market sooner, allowing them to attract new customers with innovation during growth periods. But market contractions reveal the true resilience of a business — and a key measure of this is customer retention.

For most businesses, returning customers generate the most revenue because customer acquisition costs continue to rise for both B2C and B2B markets. The ability to improve quality through automated software testing, therefore, can have a greater impact on revenue than delivering new features for some companies.

Continuously improving quality throughout the user experience means existing customers are more likely to remain customers, even during market contractions. That means increasing customer happiness is the key to building business resilience and remaining competitive despite shifts in consumer expectations and market conditions. 

By investing in software testing as part of a quality engineering strategy, companies are really investing in their existing customers. This is the key to growing a competitive and resilient business in today’s loyalty-driven world.

Content provided by Mabl

The post Improve Business Resilience and Customer Happiness with Quality Engineering appeared first on SD Times.

Using Data to Sustain a Quality Engineering Transformation https://sdtimes.com/test/using-data-to-sustain-a-quality-engineering-transformation/ Thu, 03 Nov 2022 16:26:53 +0000

DevOps and quality engineering enable better development practices and improve business resiliency, but many teams struggle to sustain this transformation outside of an initial proof of concept. One of the key challenges with scaling DevOps and quality engineering is determining how software testing fits into an overall business strategy.

By leveraging automated testing tools that collect valuable data, organizations can create shared goals across teams that foster a DevOps culture and drive the business forward. Testing data also helps tie quality engineering to customer experiences, leading to better business outcomes in the long run.

Creating Shared Data-Driven Goals

Collaborative testing is essential for scaling DevOps sustainably because it encourages developers to have shared responsibility over software quality. Setting unified goals backed by in-depth testing data can help every team involved with a software project take ownership over its quality. This collaborative approach helps break down the silos that have traditionally prevented organizations from scaling DevOps across teams.

More specifically, testing data and trend reports that can be easily shared across teams make it easier for organizations to maintain focus on the same core goals. Sharing this testing knowledge better aligns testing and development so that quality goals are considered throughout every stage of the software development lifecycle (SDLC). 

When software-related insights can move seamlessly between developers, testers, and product owners, organizations can deliver a higher quality product faster than before. This reinforces the benefits of sharing responsibility for software quality and helps get more teams on board with DevOps and quality engineering throughout the organization.

In short, tracking testing data is crucial for setting goals that scale DevOps adoption across multiple teams and throughout the SDLC. Intelligent reporting and test maintenance also help quality engineering teams implement quality improvements that directly impact DevOps transformation and business outcomes.

Tying Quality Engineering to Customer Experiences

Sharing data and goals can help encourage developer participation with quality engineering efforts, but tying quality to customer outcomes can encourage investment in software quality from the broader organization. The key is using testing data to adapt quality engineering to new features and customer use patterns.

In our previous article, we discussed how quality engineering connects development teams to customers. A quality-centric approach can help retain customers and lead to a more resilient business over time because a poor user experience encourages them to consider a competitor’s product. 

For example, tracking data from quality testing can reveal a decline in application performance before it’s noticeable to users. These types of changes can build up over time and be difficult to detect without data analysis. By sharing these data insights with the development team, however, the issue can be resolved before it leads to a poor customer experience. This means testing data forms an essential link between code and customers.
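
As a toy illustration of that kind of analysis (the response times and the 20% threshold below are made up for the example), comparing a recent window of test-run measurements against a baseline can surface a gradual slowdown before users complain:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Flag a regression when the recent average exceeds the baseline
# average by more than `tolerance` (here, 20%).
def detect_regression(baseline_ms, recent_ms, tolerance=0.20):
    return mean(recent_ms) > mean(baseline_ms) * (1 + tolerance)

# Response times (ms) recorded by automated test runs over time.
baseline = [210, 205, 215, 208, 212]
recent   = [240, 255, 262, 259, 268]   # creeping upward run after run

print(detect_regression(baseline, recent))  # True: investigate before customers notice
```

A real quality engineering pipeline would use richer statistics, but the principle is the same: the trend lives in the test data long before it shows up in support tickets.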

Actionable insights from testing data can drive a quality engineering strategy that makes a lasting improvement to customer experiences. And this leads to positive business results that encourage larger investments in software quality throughout the organization. Using data to tie software quality to customer experiences, therefore, reinforces the role of quality engineering as a key part of DevOps adoption.

Sustainable Quality Engineering and DevOps

As organizations struggle to build sustainable DevOps practices, they should consider how they can leverage the quality engineering team as an enabler. Quality engineering teams have an enormous amount of testing data that can help development teams improve their processes for delivering high-quality software much faster.

However, testing data is only useful if it can be easily shared with the right stakeholders, whether it’s developers or product managers. This requires collaborative testing tools that integrate throughout the SDLC and empower teams to access data that improves their workflows related to software delivery.

In short, testing data can transform a small-scale adoption of DevOps practices into an organization-wide culture of quality. Data-driven collaboration helps align code to customers through shared goals and insights. Over time, this leads to stronger customer experiences and greater business resilience.

Content provided by Mabl


The post Using Data to Sustain a Quality Engineering Transformation appeared first on SD Times.

KubeCon 2022: GitLab announces new Security and Governance updates, Slim.AI launches Container Intelligence, Sigstore announces free software signing service, and more https://sdtimes.com/software-development/kubecon-2022-gitlab-announces-new-security-and-governance-updates-slim-ai-launches-container-intelligence-sigstore-announces-free-software-signing-service-and-more/ Tue, 25 Oct 2022 18:42:16 +0000

More exciting new releases and product updates were revealed today as KubeCon 2022 continues. 

GitLab announces new Security and Governance updates

GitLab today announced new enhancements to its Security and Governance solution which aims to help organizations integrate security and compliance in every step of the software development lifecycle as well as secure their software supply chain.

According to the company, these enhancements are intended to provide visibility and management over security findings and compliance requirements, as well as deliver an improved software supply chain security experience.

Among these enhancements are the ability to ingest software bill of materials (SBOM) reports and support for signing build artifacts. Additionally, users will be better equipped to proactively identify vulnerabilities and fulfill compliance and regulatory standards.

Slim.AI launches Container Intelligence

The cloud-native optimization and security company Slim.AI launched Container Intelligence to allow users to gain insights into what’s in the most popular container images that they’re baking into their software every day.

Container Intelligence scans over 160 popular public container images, which together make up 30% of total global pull volume, using a combination of open-source and proprietary scanning tools.

With this release, users gain access to publicly available container profile pages on the Slim.AI website; vulnerability counts by severity, container construction details, and package information; fully searchable and categorized containers; and the most updated data. 

Sigstore announces free software signing service

Sigstore today announced the general availability of its free software signing service. This release is intended to offer open source communities access to production-grade stable services for artifact signing and verification.

According to Sigstore, the project’s goal is to provide a set of tools designed to improve supply chain security by simplifying the process of signing, verifying, and checking the software that developers are building and consuming.

Sigstore stated that it will operate the service with a 99.5% uptime SLO and round-the-clock pager support. Project sponsors Google, Red Hat, GitHub, and Chainguard have helped make this possible by providing the resources essential to meeting those service level objectives.
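
To put that 99.5% figure in concrete terms, the implied error budget works out to a few hours of allowable downtime per month (the 30-day month below is a simplifying assumption):

```python
# Allowed downtime implied by an uptime SLO over a given period.
def allowed_downtime_hours(slo, period_hours):
    return (1 - slo) * period_hours

per_month = allowed_downtime_hours(0.995, 30 * 24)    # 30-day month
per_year  = allowed_downtime_hours(0.995, 365 * 24)

print(f"{per_month:.1f} hours/month")  # about 3.6 hours/month
print(f"{per_year:.1f} hours/year")    # about 43.8 hours/year
```

In other words, a 99.5% SLO leaves room for roughly three and a half hours of outage in a month before the objective is blown.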

JFrog’s Pyrsia initiative incubating under CD Foundation

The liquid software company JFrog has announced that Pyrsia, an open-source software community initiative that utilizes blockchain technology in order to secure software packages, is now an incubating project under the Continuous Delivery Foundation.

“We’re excited to join our long-time partners at the CD Foundation in creating a groundswell around Pyrsia to further its mission to better secure the software supply chain,” said Stephen Chin, VP of developer relations at JFrog and governing board member for the CD Foundation. “With the CD Foundation’s support, and that of our incredible industry partners, developers can leverage Pyrsia to have peace-of-mind in knowing their open source components have not been compromised, and confidently deliver secure software at scale.”

With this incubation, JFrog and the CD Foundation intend to grow Pyrsia’s backing and engagement through a centralized governance model as well as a defined roadmap, and representation within the wider technology and open-source communities.

The post KubeCon 2022: GitLab announces new Security and Governance updates, Slim.AI launches Container Intelligence, Sigstore announces free software signing service, and more appeared first on SD Times.
