Industry Spotlight Archives - SD Times
https://sdtimes.com/category/industry-spotlight/

Quality assurance assures great user experiences (Wed, 29 Mar 2023)
https://sdtimes.com/test/quality-assurance-assures-great-user-experiences/

The user experience has become critically important in today’s digital world, even as organizations struggle to align testing with the speed of delivery.

Functional tests, performance tests and UI tests, among others, can reveal if an application isn’t behaving or performing as expected. But on their own, they can’t tell you if your user is having a great experience. And as we know, a poor experience can lead to losing customers and revenue, as well as damaging your company’s reputation.

To ensure a good user experience, organizations need to understand their products, they need to know their markets and they need to have empathy for their users. Once that’s established, according to Gevorg Hovsepyan, head of product at test automation platform mabl, you need to make sure your testing strategy aligns with that.

“You need to have a good pulse on what your customers are experiencing, and the quality of that,” Hovsepyan said. “Because ultimately, your goal is to deliver a great customer experience. It’s not just to make sure your API endpoint provides the right JSON structure.” 

With market changes and the push to make everything digital driving faster delivery and better experiences, you need to do UI testing to understand performance, you need to understand accessibility, and you need to appreciate the impact on the organization’s business and revenue if those things aren’t addressed, he said.

“For example,” Hovsepyan explained, “if you’re an airline and plan to offer discounted fares on a particular date, your website needs to be able to handle that surge in traffic.  If your website doesn’t perform to enable 10,000 people to buy those tickets, or 1,000 people to buy those tickets, then your bottom line takes a direct hit. Your CFOs and your executives will look at that and ask what happened, and those would-be customers are less likely to book another trip with you.” 

This has led to a shift in mindset to determine where – and how – you test the experience of your customer. It has become increasingly important for the entire organization to contribute to quality.

Hovsepyan said mabl believes everyone in the organization should be able to participate in building high-quality software, and approaches testing from a low-code perspective that enables product managers, business teams and engineers who wouldn’t always participate in quality to quickly create tests or reports that are important to them.

Mabl sees quality engineering as a strategic practice that integrates testing into development pipelines to improve the customer experience and business outcomes. Similarly to DevOps, quality engineering seeks to bring teams from across the software development organization together to establish a shared understanding of quality and how everyone can contribute to it. 

Hovsepyan said that low-code test automation enables everyone to participate in testing and contribute to quality engineering, even if they don’t have a lot of coding experience.

“At mabl, we believe that quality is a combination of multiple things from functional to non-functional. So our solution is a modern SaaS cloud platform that unifies all testing capabilities.” Beyond functional testing, mabl has added visual testing, PDF testing, accessibility testing and performance reporting, bringing different testing capabilities into a single unified quality engineering platform that enables users to assess quality, he explained.

Taking steps toward quality engineering

Hovsepyan said first and foremost, organizations should start with a strategic mindset and seek to understand the state of their business, what the business is trying to accomplish, and how quality-related issues might contribute to business performance – positively or negatively. “If you don’t do that,” he said, “selling your ideas down the road is going to get increasingly harder.”

Once you understand the state of the business, he advised doing a self-assessment to determine the state of quality within your company. “This doesn’t necessarily include understanding the quality of your technology,” he pointed out. “It’s also understanding your org structure, and the skill sets you have in your team. How do you see your plans developing? How can you broaden quality contributions so that testing matches the needs of your customers in the long-term?” 

Finally, he said, assess the maturity of your testing capabilities. Is the team mostly doing manual testing, or is some automation involved? Do you have scripts and infrastructure in place? Then, he concluded, look for modern technologies that are coming to market to help accelerate the journey toward quality engineering.

Content provided by SD Times and mabl

Platform engineering brings consistency to tools, processes under one umbrella (Thu, 09 Mar 2023)
https://sdtimes.com/software-development/platform-engineering-brings-consistency-to-tools-processes-under-one-umbrella/

When creating a platform engineering team, an important first step is the interview process. What do developers want and need? What works, and what doesn’t? 

Sounds like what companies do when reaching out to customers about new rollouts, right? Well, it is, when you consider your development team as being customers of the platform.

“Treat your developers, treat your DevOps teams, as your own internal customer and interview them,” urged Bill Manning, Solution Engineering Manager at JFrog, which offers a Software Supply Chain platform to speed the secure delivery of new applications and features. Once you’ve listened to the developers, Manning went on, you can roll their feedback into defining your platform engineering approach, which helps organizations find ways to be more efficient, and to create more value by streamlining development. 

The reason platform engineering is becoming increasingly important is that over time the process of designing and delivering software has become more complex, requiring a number of different tools and customizations, according to Sean Pratt, product marketing manager at JFrog. “When that happens,” he said, “You lack repeatable processes that can be tracked and measured over time.” 

Standardization and intelligent consolidation of tool sets, which can reduce the time, effort and cost needed to manage the sprawl many organizations face, is but one of the core tenets of platform engineering that JFrog talks about. Among the others are reduction of cognitive load, reduction of repetitive tasks through automation, reusable components and tools, repeatable processes, and the idea of developer self-service.

Organizations using DevOps practices have seen the benefits of bringing developers and operations together, to get new features released faster through the implementation of smaller cycles, microservices, GitOps and the cloud. The downside? Coders have now found themselves smack-dab in the middle of operations. 

“The complexity [of software] has increased, and even though the tool sets in a way were supposed to simplify, they’ve actually increased it,” Manning said. “A lot of developers are suffering from cognitive overload, saying, ‘Look, I’m a coder. I signed up to build stuff.’ Now they have to go in and figure out how they are going to deploy [and] what is going to be running inside the container. These are things a lot of developers didn’t sign up for.”

Platform engineering has grown out of the need to address the burden organizations have placed on their development teams. As more practices that developers are unfamiliar with are shifted left, today’s developers carry responsibility for far more than just designing elegant applications.

This all takes a toll on developers. Automating things like Terraform to provision infrastructure, or Helm charts for Kubernetes, for example, frees up developers to do what they do best – innovate and create new features at the pace the business needs to achieve. A developer would rather get a notification that a particular task is done rather than having to dive in and do it manually. 

While platform engineering can help standardize on tools, organizations still want to offer developers flexibility. “In a microservice world, for example, certain teams might need to use certain tools to get their job done. One might need to use Java with Jenkins for one project, while another team uses Rust with JFrog Pipelines to execute another project,” Pratt said. “So there’s a need for a solution that can bring all those pieces together under one umbrella, which is something JFrog does to help put consistent processes and practices in place across teams.” 

To be sure, a mentality shift is required for successful platform engineering. As Manning put it: “You know what, maybe we don’t need 25 tools. Maybe we can get away with five. And we might have to make some compromises, but that’s okay. Because the thing is, it’s actually beneficial in the long term.” Regardless of how many tools you settle on, Manning had a final piece of advice: “Think about how you bring them all together; that’s where universal and integrated platforms can help connect the disparate tools you need.”

Content provided by SD Times and JFrog.

Enhance Web Applications With a File Upload Service (Thu, 20 Oct 2022)
https://sdtimes.com/api/enhance-web-applications-with-a-file-upload-service/

When looking to add a file uploader solution to your web application or website, finding one that will bring the most value to end users is essential. The solution should be built with security in mind, be easy to use, and offer plenty of useful features.

Web applications are the foundation of modern business and personal life. Whether you’re developing a new app or just want to enhance an existing one, there’s no shortage of tools available. 

In this post, we’ll look at one tool that can help developers enhance their web applications with file uploaders.

What Are the Essentials of File Uploading?

– Faster development and less code: File uploads are not part of your core functionality, so they provide an excellent opportunity to offload work that isn’t core to your app. What’s more, adding file uploads to your app is a simple process that requires less code than other types of functionality.

– Higher conversions: File uploads can make your site more usable for your visitors by letting them select the best file for their needs. 

– More efficient workflows: File uploads can be used to make collaborative workflows more efficient. You can use them to allow people to upload content and make it available for viewing. 

– Better user experience: People don’t have to make a trip to a print shop just to get the file they need. Now, they can just find the file on your site and download it. 

– Fewer headaches for IT: File uploads can make it easier for people to get their work done. This will reduce the number of headaches IT departments have to deal with.

– Higher customer satisfaction: Letting people upload files to your site will make it easier for them to get what they need. This will increase customer satisfaction.

Why Automation Matters

The ability to automatically optimize your images sets the best file upload services apart from the rest. With tools like Filestack, you can upload an image, select where it should be published, and resize it. 

The best part is that you won’t have to manually take care of this each time you upload a new image. That saves you time and effort. 

– Image transformation: You can also use Filestack’s image transformation functionality to optimize your images. Images are often too large, so if you want to make them smaller, you can use Filestack’s image transformation functionality to do this automatically. 

– Image branding: If you want to brand your images, you can use Filestack to watermark them. This can be helpful when you allow user-submitted images.

– Image analysis: Filestack also lets you analyze your images. You can use this functionality to perform tasks like removing metadata and identifying faces in pictures. This is helpful if you want to comply with privacy laws or need to identify people in images.
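As a hedged illustration of how URL-based transformations like the ones above typically work, the sketch below builds a Filestack-style processing URL for an uploaded file handle. The CDN base URL and task syntax (e.g. `resize=width:800`, `watermark=file:…`) are assumptions drawn from Filestack’s documented URL format and should be verified against the official processing API docs before use.

```typescript
// Hedged sketch: compose a processing URL in the style of Filestack's CDN API.
// Base URL and task names are assumptions to check against Filestack's docs.
const FILESTACK_CDN = "https://cdn.filestackcontent.com";

interface ResizeOptions {
  width?: number;
  height?: number;
}

// Chain transformation tasks (resize, watermark, ...) in front of the file handle.
function buildTransformUrl(
  handle: string,
  resize?: ResizeOptions,
  watermarkHandle?: string
): string {
  const tasks: string[] = [];
  if (resize) {
    const parts = [
      resize.width ? `width:${resize.width}` : "",
      resize.height ? `height:${resize.height}` : "",
    ].filter(Boolean);
    tasks.push(`resize=${parts.join(",")}`);
  }
  if (watermarkHandle) {
    tasks.push(`watermark=file:${watermarkHandle}`);
  }
  return [FILESTACK_CDN, ...tasks, handle].join("/");
}

// Example: shrink a user-submitted image and brand it with a (hypothetical) logo handle.
console.log(buildTransformUrl("SOME_FILE_HANDLE", { width: 800 }, "LOGO_HANDLE"));
```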

The Importance of Security in a File Uploader

Security is essential for any app, and file uploaders are no exception. You need to be sure that your users’ files are secure. A file uploader should provide end-to-end encryption so that files are fully secure at all times. 

File encryption

Encryption is used to protect the content of files by converting them into a form that is unreadable to anyone except the file owner. Once encrypted, files can be stored on servers in any location and sent over the internet to be viewed by the file owner. 

Data encryption

Data encryption is a method of protecting data as it passes between servers and users. When data is encrypted, it’s converted into a form unreadable by anyone except the servers responsible for decrypting it.

Tokenization

Tokenization is a method of storing sensitive information like passwords, account numbers, and social security numbers in a database without actually storing the information itself. Instead, the data is converted into a long string of seemingly random characters called a token.
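To make the tokenization idea concrete, here is a minimal, vendor-neutral sketch of swapping a sensitive value for an opaque token while the real value stays in a separate vault-style store. The in-memory map is purely illustrative; a real system would persist the vault in a hardened, access-controlled service.

```typescript
// Minimal tokenization sketch: the sensitive value never leaves the vault map;
// callers only ever see the opaque token. Illustrative only.
const vault = new Map<string, string>();

function tokenize(sensitiveValue: string): string {
  // randomUUID() is available on the global crypto object in modern runtimes.
  const token = crypto.randomUUID();
  vault.set(token, sensitiveValue);
  return token;
}

function detokenize(token: string): string | undefined {
  // Only privileged code paths should be allowed to call this.
  return vault.get(token);
}

// Usage: store the token in the application database instead of the raw value.
const token = tokenize("123-45-6789");
console.log(token);             // opaque, seemingly random string – safe to store
console.log(detokenize(token)); // "123-45-6789" – restricted lookup
```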

With a good file uploader, the process users go through when uploading images, PDFs, videos, and other documents is simplified, making the application or website more oriented towards a positive user experience. 

The uploader can also ensure that images load quickly and reliably, cutting down on the number of clicks and wait times needed to use a web application. 

Using a reliable file uploader for frameworks like Angular has many benefits for applications, including the ability to split code into different sections, which makes the developer’s job easier. Also, Angular has a declarative user interface (UI), which lets developers build UIs with the data they receive.

With Angular file upload, users can take advantage of several unique features, such as localization, a wide range of themes, Angular’s ability to work with all platforms, and a time progress bar.

Third-Party File Uploader

Filestack’s Angular file upload solution has all these features and a few more. These include an elegant user interface, different ways to upload files besides the picker widgets, drag-and-drop features, and an integration that can be changed to fit your needs.

With Filestack’s API, uploads, URL ingestion, and integration with iOS and Android devices are all greatly improved, making the user experience better as a whole.
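As a rough sketch of what wiring this up can look like in a TypeScript codebase, the snippet below uses the filestack-js client to open the picker widget and to upload a file programmatically. The API key is a placeholder, and the specific picker option names shown are assumptions to confirm against the filestack-js documentation.

```typescript
import * as filestack from "filestack-js";

// Placeholder API key – substitute your own from the Filestack dashboard.
const client = filestack.init("YOUR_FILESTACK_API_KEY");

// Open the drag-and-drop picker widget; option names are assumptions to verify
// against the filestack-js docs.
client
  .picker({
    accept: ["image/*", "application/pdf", "video/*"],
    maxFiles: 5,
    onUploadDone: (result) => {
      // Each uploaded file comes back with a handle and a CDN URL.
      result.filesUploaded.forEach((f) => console.log(f.handle, f.url));
    },
  })
  .open();

// Programmatic upload of a single file (e.g. from an <input type="file"> change event).
async function uploadFile(file: File): Promise<void> {
  const res = await client.upload(file);
  console.log("Uploaded:", res);
}
```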

“We have thousands of users uploading files on our platform daily. Upload failures can decrease conversion by creating a poor user experience,” said Dorian Collier, director of products at the digital entertainment studio JibJab. “We’re thrilled to see Filestack taking on this challenge to make sure that our users have a smooth experience.”

Filestack’s File Uploader for Angular lets users change, convert, and optimize images, files, and videos before they appear in the user’s app.

This file uploader also helps with preparing responsive audio, video, images, and large files, and with handling crowded mobile networks, high-latency networks, integration with cloud providers, and the many other factors that otherwise make building and maintaining an upload pipeline more difficult.

Conclusion 

Filestack’s API is built to be simple and efficient to avoid unnecessary complexity for the business. Enhancing web applications doesn’t need to be complicated. 

Ready-made solutions are available for developers to use and integrate to deliver a superb file upload experience. 

Sign up for free to easily add this image-processing service to your web applications. 

Content provided by SD Times and Filestack
FusionAuth and Permit.io solve tough authentication and authorization problems together (Tue, 13 Sep 2022)
https://sdtimes.com/security/fusionauth-and-permit-io-solve-tough-authentication-and-authorization-problems-together/

If you are a developer, there is a good chance that in your professional life you have been required to develop authentication and authorization management systems for an application from scratch. In fact, there’s a good chance you’ve even had to rebuild it several times in a very short period, based on the evolution of the application and changes in the requirements of the product and its customers.

Authentication and authorization management are integral parts of building an application. Over time, as the application grows, the task of developing these systems becomes more and more complex. As a result, companies waste valuable developer time figuring out how to build them more efficiently. This means that at the end of the day, the company’s development team is not so concerned with developing the main product, but rather the surrounding bits.

In addition, there is also the issue of security. The field of IAM (identity and access management) is constantly evolving to meet regulatory requirements and in response to an increasing number of cyber-attacks, which unfortunately takes up a lot of developers’ time. Developers are required to take all this into account when they develop the authentication and authorization management systems for their products, and they have to make sure the two systems work well together.

Enter FusionAuth & Permit.io

A new webinar will make this complicated story really simple. It is organized by Permit.io, which develops a popular low-code permissions and access control management solution. The webinar will feature Permit’s founder and CEO Or Weis in conversation with Dan Moore of FusionAuth, whose developer-focused platform enables efficient, secure customer authentication. The webinar will be moderated by Apple’s cloud infrastructure engineering manager Cheryl Hung.

This webinar will take place on Sept. 20 at 9am PT / 12pm ET and is intended for developers of all levels. Participation is free of charge but requires prior registration.

The webinar will include an overview of the entire IAM world, with an emphasis on authentication and authorization, how the two solutions intertwine, and how they can be used together without making mistakes along the way. Participants will learn what an authentication system is, what an authorization management system is, and how building such systems themselves compares with leveraging FusionAuth and Permit.io’s developer platforms. They will also learn in detail how the FusionAuth and Permit.io solutions work, both individually and together. All of the panelists have experienced a similar story, which will probably sound familiar to many developers: they worked for a company that developed an application that needed authentication and authorization management systems and had to invest precious time and resources to build them. Moreover, they often had to rebuild these systems several times in quick succession as the application developed and the requirements changed. The goal of the upcoming event is to show how developers can avoid this in the future by using efficient and flexible tooling.

To register for the Sept. 20 webinar, click here.
SBOMs can help ensure software integrity (Thu, 11 Aug 2022)
https://sdtimes.com/security/sboms-can-help-ensure-software-integrity/

To secure the software in your supply chain, there’s a lot of hype today about the need for an SBOM (software bill of materials). But what does that really mean for development teams today?

BOMs have been used for years by organizations; they are a list of the raw materials, sub-assemblies, intermediate assemblies, sub-components, parts, and the quantities of each needed to manufacture an end product. 

In today’s software world, it applies to all the code that goes into an application, license requirements for third-party components, dependencies on other components, and compliance with any other industry-specific regulations. According to a May 2021 executive order from U.S. President Joe Biden aimed at tightening up cybersecurity, “an SBOM is useful to those who develop or manufacture software, those who select or purchase software, and those who operate software.”
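To make those ingredients concrete, here is a small, hedged sketch of the kind of fields an SBOM entry typically carries. The field names are illustrative only – loosely inspired by formats such as CycloneDX and SPDX, not an exact schema – and the components listed are fictional.

```typescript
// Hedged sketch of SBOM-style component records; field names are illustrative,
// not the exact CycloneDX or SPDX schema.
interface SbomComponent {
  name: string;            // e.g. "log4j-core"
  version: string;         // e.g. "2.14.1"
  supplier: string;        // who produced the component
  licenses: string[];      // e.g. ["Apache-2.0"]
  dependencies: string[];  // names of components this one pulls in
  firstParty: boolean;     // your own code vs. third-party / open source
}

// A fictional application bill of materials combining first- and third-party parts.
const sbom: SbomComponent[] = [
  { name: "acme-payments-service", version: "3.2.0", supplier: "Acme Corp",
    licenses: ["Proprietary"], dependencies: ["log4j-core"], firstParty: true },
  { name: "log4j-core", version: "2.14.1", supplier: "Apache Software Foundation",
    licenses: ["Apache-2.0"], dependencies: [], firstParty: false },
];
```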

Michael White, technical director and principal architect at the Software Integrity Group at Synopsys, said there are a couple of different ways to look at SBOMs – either as a static artifact or report, or as a process. “As we add components into our software, or change the version of the components, or update the components, we should be maintaining that SBOM on an ongoing basis,” he said. The continual process of software maintenance, he pointed out, saves you from having to scramble to assemble all the information about changes. As a continual process, you’re building up the SBOM piece by piece as you go along.

As for what SBOMs mean for developers, White said those are the people who are in the middle of the supply chain, as producers of software and consumers of software used to create their applications. As such, they have to worry about two different sets of obligations, White explained. “They have to worry about doing what they’re required to do for the end user of our product. But then also, are we passing that requirement down to the people that we consume software from?” 

With open source, that could be in the form of generating export information about a particular package; with commercial software, an organization should have the requirement that the supplier provide an SBOM. “That kind of information should kind of filter down the supply chain so that the information kind of bubbles up again.”

Today’s modern software comes with a long tail of dependencies, and studies have shown that as much as 90% of a modern application today is not written as first-party code by your development team, White said. “The SBOM does have to include your own components, the things you’re developing,” he said, as well as components assembled from other sources.

White said Synopsys talks more about building trust than simply discussing security, because organizations also have to think about safety, quality, compliance – and how to make that available to developers.

“We’re very much about the developer experience,” White said. “So, surfacing up that information at the right time, providing meaningful feedback that tells developers about something they can understand and act on. Once that is embedded and visible in the process, a lot of other concerns go away. It keeps the security people happy, it keeps the market compliance people happy, and the legal team and risk team happy.”

With its platform, White said, Synopsys is building the bridge between developers and the other stakeholders in an application to ensure those requirements are being met as well.

Content provided by SD Times and Synopsys

Optimize continuous delivery with continuous reliability (Wed, 10 Aug 2022)
https://sdtimes.com/devops/optimize-continuous-delivery-with-continuous-reliability/

The 2021 State of DevOps report indicates that greater than 74% of organizations surveyed have Change Failure Rate (CFR) greater than 16% (the report provides a range from 16% to 30%). Of these, a significant proportion (> 35%) likely have CFRs exceeding 23%. 

This means that while organizations seek to increase software change velocity (as measured by the other DORA metrics in the report), a significant number of deployments result in degraded service (or service outage) in production and subsequently require remediation (including hotfix, rollback, fix forward, patch etc.). The frequent failures potentially impair revenue and customer experience, as well as incur significant costs to remediate. 

Most customers whom we speak to are unable to proactively predict the risk of a change going into production. In fact, the 2021 State of Testing in DevOps report also indicates that greater than 70% of organizations surveyed are not confident about the quality of their releases. A smaller, but still significant, proportion (15%) “Release and Pray” that their changes won’t degrade production. 

Reliability is a key product/service/system quality metric. CFR is one of many reliability metrics; others include availability, latency, throughput, performance, scalability, and mean time between failures. While reliability engineering in software has been an established discipline, we clearly have a problem ensuring reliability.

In order to ensure reliability for software systems, we need to establish practices that plan for, specify, engineer, measure and analyze reliability continuously along the DevOps life cycle. We call this “Continuous Reliability” (CR).  

Key Practices for Continuous Reliability 

Continuous Reliability derives from the principle of “Continuous Everything” in DevOps. The emergence (and adoption) of Site Reliability Engineering (SRE) principles has led to CR evolving to be a key practice in DevOps and Continuous Delivery. In CR, the focus is to take a continuous proactive approach at every step of the DevOps lifecycle to ensure that reliability goals will be met in production. 

This implies that we are able to understand and control the risks of changes (and deployments) before they make it to production. 

The key pillars of CR are shown in the figure below:

CR is not, however, the purview of site reliability engineers (SREs) alone. Like other DevOps practices, CR requires active collaboration among multiple personas such as SREs, product managers/owners, architects, developers, testers, release/deployment engineers and operations engineers. 

Some of the key practices for supporting CR (that are overlaid on top of the core SRE principles) are described below.

1)    Continuous Testing for Reliability

Continuous Testing (CT) is an established practice in Continuous Delivery. However, the use of CT for continuous reliability validation is less common. Specifically for validation of the key reliability metrics (such as availability, latency, throughput, performance, scalability), many organizations still use waterfall-style performance testing, where most of the testing is done in long duration tests before release. This not only slows down the deployment, but does an incomplete job of validation. 

Our recommended approach is to validate these reliability metrics progressively at every step of the CI/CD lifecycle. This is described in detail in my prior blog on Continuous Performance Testing.

2)    Continuous Observability 

Observability is also an established practice in DevOps. However, most observability solutions (such as Business Services Reliability) focus on production data and events. 

What is needed for CR is to “shift-left” observability into all stages of the CI/CD lifecycle, so that reliability insights can be gleaned from pre-production data (in conjunction with production data). For example, it is possible to glean reliability insights from patterns of code changes (in source code management systems), test results and coverage, as well as performance monitoring by correlating such data with past failure/reliability history in production.   

Pre-production environments are more data rich than production environments (in terms of variety); however, most of the data is not correlated and mined for insights. Such observability requires us to set up “systems of intelligence” (SOI, see figure below) where we continuously collect and analyze pre-production data along the CI/CD lifecycle to generate a variety of reliability predictions as and when applications change (see next section). 

3)    Continuous Failure, Risk Insights and Prediction

An observability system in pre-production allows us to continuously assess and monitor failure risk along the CI/CD lifecycle. This allows us to proactively assess (and even predict) the failure risk associated with changes. 

For example, I set up a simple SOI for an application (using Google Analytics), collecting code change data (from the source code management system) as well as the history of escaped defects (from past deployments to production). By correlating this data with a gradient boosted tree algorithm, I was able to establish an understanding of which code change patterns resulted in higher levels of escaped defects. In this case, I found a significant correlation between code churn and defects leaked (see figure below).

I was then able to use the same analytics to predict how escaped defects would change based on code churn in the current deployment (see inset in the figure above).
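The analysis above used a gradient boosted tree model; as a much simpler, hedged sketch of the underlying idea, the snippet below correlates per-release code churn with escaped defects and uses a naive linear fit to project defects for the next release. The data points are invented purely for illustration.

```typescript
// Illustrative only: a naive correlation + linear fit standing in for the
// gradient boosted tree analysis described above. Data points are invented.
const codeChurn = [120, 340, 80, 510, 260];   // lines changed per release
const escapedDefects = [2, 7, 1, 11, 5];      // defects that leaked to production

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Pearson correlation between churn and escaped defects.
function pearson(xs: number[], ys: number[]): number {
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Least-squares slope/intercept for predicting defects from churn.
function linearFit(xs: number[], ys: number[]): { slope: number; intercept: number } {
  const mx = mean(xs), my = mean(ys);
  let num = 0, den = 0;
  for (let i = 0; i < xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    den += (xs[i] - mx) ** 2;
  }
  const slope = num / den;
  return { slope, intercept: my - slope * mx };
}

console.log("correlation:", pearson(codeChurn, escapedDefects).toFixed(2));
const { slope, intercept } = linearFit(codeChurn, escapedDefects);
const nextChurn = 400; // churn measured in the current deployment
console.log("predicted escaped defects:", (slope * nextChurn + intercept).toFixed(1));
```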

While this is a very simple example of reliability prediction using a limited data set, we can do continuous failure risk prediction by exploiting a broader set of data from pre-production, including testing and deployment data. 

For example, in my previous article on Continuous Performance Testing, I discussed various approaches for performance testing of component-based applications. Such testing generates a huge amount of data that is extremely difficult to process manually. An observability system can then be used to collect the data to establish baselines of component reliability and performance, and in turn used to generate insights in terms of how system reliability may be impacted by changes in individual application components (or other system components). 

4)    Continuous Feedback  

One of the key benefits of an observability system is being able to provide quick and continuous feedback to the development/test/release/SRE teams on the risk associated with changes, along with helpful insights on how to address them. This would allow development teams to proactively address these risks before the changes are deployed to production. For example, development teams can be alerted, as soon as they perform a commit (or open a pull request), to the failure risks associated with the changes they have made. Testers can get feedback on the tests that are most important to run. Similarly, SREs can get early planning insights into the level of error budgets they need to plan for the next release cycle.

Next up: Continuous Quality 

Reliability, however, is just one dimension of application/system quality. It does not, for example, fully address how we maximize customer experience that is influenced by other factors such as value to users, ease of use, and more. In order to get true value from DevOps and Continuous Delivery initiatives, we need to establish practices for predictively attaining quality – we call this “Continuous Quality.” I will discuss this in my next blog. 

Asking developers to do security is a risk in itself without training (Mon, 01 Aug 2022)
https://sdtimes.com/security/asking-developers-to-do-security-is-a-risk-in-itself/

As the pace and complexity of software development increases, organizations are looking for ways to improve the performance and effectiveness of their application security testing, including “shifting left” by integrating security testing directly into developer tools and workflows. This makes a lot of sense, because defects, including security defects, can often be addressed faster and more cost-effectively if they are caught early. Issues found during downstream testing or in production result in costly and disruptive rework.

Organizations have come to understand that the cost to remediate defects grows exponentially the farther along into production an application travels. Prevention costs are the least expensive, while the cost of correcting something is 10x greater, and the cost of an application failure is 100x greater.

So asking developers to prevent defects is an important step, but most developers aren’t security experts, and tools that are optimized for the needs of the security team can be too complex and disruptive to be embraced by developers. To make matters worse, these solutions often require developers to leave their integrated development environment (IDE) to analyze issues and determine potential fixes. All this tool- and context-switching kills developer productivity, so even though teams recognize the upside of checking their code and open-source dependencies for security issues, they avoid using the security tools they’ve been given due to the downside of decreased productivity.

To help developers maintain productivity without sacrificing security, they should look for a comprehensive SAST solution that identifies security and quality defects early in the software development life cycle (SDLC). Specifically, they should look for solutions that:

  • enable them to find issues quickly as they code. If developers can fix these issues in real-time, that means these issues don’t leave the developer workstation;
  • provide a full scan if they need it; and
  • let them see issues from CI/CD scans on the server directly in their IDE, without having to scan locally in the IDE.

In response to these needs, Synopsys developed Code Sight and recently released Code Sight Standard Edition (SE). Code Sight SE is an IDE-based application security solution that helps developers find and fix security issues as they code, without switching tools or interrupting their workflow.

“We have spent enormous amounts of time designing Code Sight,” said Raj Kesarapalli, senior manager of product management at Synopsys. He said the core strength of Code Sight is its ability to give priority to developer relevancy. It delivers that benefit by identifying vulnerabilities while still in the developer environment. It also ensures that no new issues are introduced as a result of the changes made.

It will scan only the select files in question for issues. It handles the remaining hundreds or thousands of files by leveraging context from a previous scan. Making use of that vast knowledge base eliminates the need for an immediate and lengthy comprehensive scan of the full universe of files. This frees the developer to continue writing code at the same time that issues are being found and fixed − all within the developer environment.

The process is not unlike the way a spell-checker operates in a Microsoft Word document, said Kesarapalli: While corrections are being made to specific words or phrases in the document, the author or editor is able to continue working, losing little or no time as the process goes forward.

For a software team, that means a major productivity gain.

“This gives them what is relevant and what they can find quickly,” he said. At the same time, fewer flaws make their way to the extended cycle of central analysis. “It short-circuits the loop for some of the issues,” Kesarapalli said.

Code Sight enhances developer productivity, and its early intervention means there is less for the rest of the team to do. In fact, some of the issues caught early on in the development environment never find their way to the other stakeholders at all.

Developers anywhere in the world can gain access to the software by downloading a free trial that enables them to start using it in less than five minutes. The link to the download is: 

https://marketplace.visualstudio.com/items?itemName=SynopsysCodeSight.vscode-codesight

Another way to preview Code Sight Standard is with this demo video:

https://community.synopsys.com/s/article/Getting-Started-With-Code-Sight-Standard-Edition

Content provided by SD Times and Synopsys

Combining Static Application Security Testing (SAST) and Software Composition Analysis (SCA) Tools (Tue, 26 Jul 2022)
https://sdtimes.com/cicd/combining-static-application-security-testing-sast-and-software-composition-analysis-sca-tools/

When creating, testing, and deploying software, many development companies now use proprietary software and open source software (OSS).

Proprietary software, also known as closed-source or non-free software, includes applications for which the publisher or another person reserves licensing rights to modify, use, or share modifications. Examples include Adobe Flash Player, Adobe Photoshop, macOS, Microsoft Windows, and iTunes. 

In contrast, OSS grants users the ability to use, change, study, and distribute the software and its source code to anyone on the internet. Accordingly, anyone can participate in the development of the software. Examples include MongoDB, LibreOffice, Apache HTTP Server, and the GNU/Linux operating system. 

This means that many organizations are using third-party code and modules for their OSS. While these additions are incredibly useful for many applications, they can also expose organizations to risks. According to Revenera’s 2022 State of the Software Supply Chain Report, 64% of organizations were impacted by software supply chain attacks caused by vulnerabilities in OSS dependencies. 

Although OSS can expose organizations to risks, avoiding open-source software and its dependencies is not practical. OSS components and dependencies now play an integral role in development. This is particularly the case for JavaScript, Ruby, and PHP application frameworks, which tend to use multiple OSS components.

Since software companies cannot realistically avoid using OSS, cybersecurity teams must mitigate the vulnerabilities associated with OSS by employing software composition analysis (SCA) tools. Additionally, they need to combine SCA with static application security testing (SAST), since proprietary software such as Microsoft Windows and Adobe Acrobat is also used.

Read on to learn more about SAST and SCA. This article will also explain how cybersecurity teams can combine SAST and SCA into a comprehensive cybersecurity strategy.

What Is SAST?

SAST is a code scanning program that reviews proprietary code and application sources for cybersecurity weaknesses and bugs. Also known as white box testing, SAST is considered a static approach because it analyzes code without running the app itself. Since it only reads code line by line and doesn’t execute the program, SAST platforms are extremely effective at removing security vulnerabilities at every stage of the software development life cycle (SDLC), particularly during the first few stages of development.

Specifically, SAST programs can help teams:

  • Find common vulnerabilities, such as buffer overflow, cross-site scripting, and SQL injection
  • Verify that development teams have conformed to development standards
  • Root out intentional breaches and acts, such as supply chain attacks
  • Spot weaknesses before the code goes into production and creates vulnerabilities
  • Scan all possible states and paths for proprietary software bugs of which development teams were not aware
  • Implement a proactive security approach by reducing issues early in the SDLC

SAST plays an integral role in software development. By giving development teams real-time feedback as they code, SAST can help teams address issues and eliminate problems before they go to the next phase of the SDLC. This prevents bugs and vulnerabilities from accumulating. 
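As an illustration of the first kind of vulnerability listed above, the hedged sketch below shows the sort of SQL injection pattern a SAST scanner typically flags, alongside the parameterized alternative. The `Db` interface here is a stand-in for a typical database client exposing a `query(text, params)` method; it is not tied to any specific scanner’s rules or any particular driver.

```typescript
// The Db interface is a stand-in for a typical database driver; illustrative only.
interface Db {
  query(text: string, params?: unknown[]): Promise<unknown>;
}

// Flagged by SAST: user input is concatenated straight into the SQL string,
// so an id like "1 OR 1=1" changes the meaning of the query.
async function getUserUnsafe(db: Db, userId: string) {
  return db.query(`SELECT * FROM users WHERE id = ${userId}`);
}

// Preferred: the value travels as a bound parameter, never as SQL text.
async function getUserSafe(db: Db, userId: string) {
  return db.query("SELECT * FROM users WHERE id = $1", [userId]);
}
```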

What Is SCA?

SCA is a code analysis tool that inspects source code, package managers, container images, and binary files, and lists the open-source components it finds in an inventory called a Bill of Materials (BOM). The software then compares the BOM against databases that hold information about common and known vulnerabilities, such as the U.S. National Vulnerability Database (NVD). The comparison enables cybersecurity teams to spot critical legal and security vulnerabilities and fix them.

Some SCA tools can also use that inventory to discover the licenses connected with the open-source code. Cutting-edge SCAs may also be able to:

  • Analyze overall code quality (i.e., history of contributions and version control)
  • Automate the entire process of working with OSS modules, including selection and blocking them from the IT environment as needed
  • Provide ongoing alerts and monitoring for vulnerabilities reported after an organization deploys an application
  • Detect and map known OSS vulnerabilities that can’t be found through other tools
  • Map legal compliance risks associated with OSS dependencies by identifying the licenses in open-source packages
  • Monitor new vulnerabilities 

Every software development organization should consider getting SCA for legal and security compliance. Secure, reliable, and efficient, SCA allows teams to track open-source code with just a few clicks of the mouse. Without SCA, teams need to manually track open-source code, a near-impossible feat due to the staggering number of OSS dependencies. 
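As a rough sketch of what an SCA tool automates, the snippet below enumerates a project’s declared dependencies and compares them against an advisory list. Real tools resolve the full transitive dependency graph, use proper semver handling, and query sources such as the NVD; the advisory data and version logic here are simplified and invented purely for illustration.

```typescript
import { readFileSync } from "fs";

// Invented advisory data – a real SCA tool pulls from sources like the NVD.
const knownVulnerabilities: Record<string, { below: string; advisory: string }> = {
  minimist: { below: "1.2.6", advisory: "prototype pollution" },
  lodash:   { below: "4.17.21", advisory: "command injection" },
};

// Naive version comparison – real tools use a proper semver library.
function isOlder(version: string, threshold: string): boolean {
  const a = version.split(".").map(Number);
  const b = threshold.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((a[i] ?? 0) !== (b[i] ?? 0)) return (a[i] ?? 0) < (b[i] ?? 0);
  }
  return false;
}

// Read the direct dependencies from package.json and report known issues.
const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const deps: Record<string, string> = { ...pkg.dependencies, ...pkg.devDependencies };

for (const [name, range] of Object.entries(deps)) {
  const vuln = knownVulnerabilities[name];
  const pinned = range.replace(/^[~^]/, ""); // crude: strip the range prefix
  if (vuln && isOlder(pinned, vuln.below)) {
    console.warn(`${name}@${pinned}: ${vuln.advisory} (fixed in ${vuln.below})`);
  }
}
```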

How To Use SAST and SCA To Mitigate Vulnerabilities

Using SAST and SCA to mitigate vulnerabilities is not as easy as it seems. This is because using SAST and SCA involves much more than just pressing buttons on a screen. Successfully implementing SAST and SCA requires IT and cybersecurity teams to establish and follow a security program across the organization, an endeavor that can be challenging.

Luckily, there are a few ways to do this:

1. Use The DevSecOps Model

Short for development, security, and operations, DevSecOps is an approach to platform design, culture, and automation that makes security a shared responsibility at every phase of the software development cycle. It contrasts with traditional cybersecurity approaches that employ a separate security team and quality assurance (QA) team to add security to software at the end of the development cycle. 

Cybersecurity teams can follow the DevSecOps model when using SAST and SCA to mitigate vulnerabilities by implementing both tools and approaches at every phase of the software development cycle. To start, they should introduce SAST and SCA tools to the DevSecOps pipeline as early in the creation cycle as possible. Specifically, they should introduce the tools during the coding stage, during which time the code for the program is written. This will ensure that:

  • Security is not just an afterthought
  • The team has an unbiased way to root out bugs and vulnerabilities before they reach critical mass

Although it can be difficult to convince teams to adopt two security tools at once, it is possible to do with a lot of planning and discussion. However, if teams prefer to only use one tool for their DevSecOps model, they could consider the alternatives below.

2. Integrate SAST and SCA Into the CI/CD Pipeline

Another way to use SAST and SCA together is to integrate them into CI/CD pipeline.

Short for continuous integration, CI refers to a software development approach where developers combine code changes in a centralized hub multiple times per day. CD, which stands for continuous delivery, then automates the software release process.

Essentially, a CI/CD pipeline is one that creates code, runs tests (CI), and securely deploys a new version of the application (CD). It is a series of steps that developers need to perform to create a new version of an application. Without a CI/CD pipeline, computer engineers would have to do everything manually, resulting in less productivity.

The CI/CD pipeline consists of the following stages:

  1. Source. Developers trigger the pipeline by changing the code in the source code repository, by using other pipelines, or through automatically scheduled workflows.
  2. Build. The development team builds a runnable instance of the application for end-users.  
  3. Test. Cybersecurity and development teams run automated tests to validate the code’s accuracy and catch bugs. This is where organizations should integrate SAST and SCA scanning.
  4. Deploy. Once the code has been checked for accuracy, the team is ready to deploy it. They can deploy the app in multiple environments, including a staging environment for the product team and a production environment for end-users.

3. Create a Consolidated Workflow with SAST and SCA

Finally, teams can use SAST and SCA together by creating a consolidated workflow.

They can do this by purchasing cutting-edge cybersecurity tools that allow teams to conduct SAST and SCA scanning at the same time and with the same tool. This will help developers and the IT and cybersecurity teams save a lot of time and energy.

Experience the Kiuwan Difference

With so many SAST and SCA tools on the market, it can be challenging for organizations to pick the right tools for their IT environments. This is particularly true if they have limited experience with SAST and SCA tools.

This is where Kiuwan comes in. A global organization that designs tools to help teams spot vulnerabilities, Kiuwan offers Code Security (SAST) as well as Insights Open Source (SCA).

Kiuwan Code Security (SAST) can empower teams to:

  • Scan IT environments and share results in the cloud
  • Spot and remediate vulnerabilities in a collaborative environment
  • Produce tailored reports using industry-standard security ratings so teams can understand risks better
  • Create automatic action plans to manage tech debt and weaknesses
  • Give teams the ability to choose from a set of coding rules to customize the importance of various vulnerabilities for their IT environment

Kiuwan Insights Open Source (SCA) can help companies:

  • Manage and scan open source components 
  • Automate code management so teams can feel confident about using OSS
  • Integrate seamlessly into their current SDLC and toolkit

Interested in learning more about Kiuwan’s products? Get demos of Kiuwan’s security solutions today. Developers will see how easy it is to initiate a scan, navigate our seamless user interface, create a remediation action plan, and manage internal and third-party code risks.

Content provided by Kiuwan. 

Optimize data transfer and integrate file transfer in your automation workflows (Fri, 01 Apr 2022)
https://sdtimes.com/data/optimize-data-transfer-and-integrate-file-transfer-in-your-automation-workflows/

Workload automation is a critical piece of digital transformation. It can enable practitioners to schedule and execute business process workflows, optimize data transfer and processing and cut down on errors and delays in execution of the business processes themselves. 

Businesses currently have three main approaches to modernization and digital transformation.

One is that they are, in some cases, still investing in legacy systems that could be distributed. The second is that businesses are looking to readjust and re-architect existing applications with a lift-and-shift approach so they can run on the cloud. Lastly, they are looking to rebuild and re-invent their applications to become cloud-native.

All of these different strategies have a common factor: the business processes are interconnected with the platforms and the heterogeneous systems that bring together challenges and risks. 

“Application workloads are no longer sitting in predefined data centers and are now spread across multiple clouds, bringing a challenge that they need to be managed and mitigated,” said Francesca Curzi, the HCL Software global sales leader for workload automation, mainframe, and data platform.

Customers need to embrace a systematic approach, avoiding islands of automation where each context is being managed by a different tool. Organizations also need to manage their data flows as more data becomes available. Here the file transfer capability is becoming more and more important to be really interconnected, Curzi added. 

The new HCL Workload Automation v.10, launched on March 4th, offers unique technology to enable this kind of digital transformation and to tackle these challenges. It can execute any type of job anywhere: on-premises or on the cloud of one’s choice. The tool leverages historical workload execution data with AI to expose observable data and provide an enhanced operational experience.

“It removes these islands of automation across different applications and brings unique capabilities with advanced models into the market,” said Marco Cardelli, HWA lead product manager.

HCL Workload Automation can optimize data transfers and processing by leveraging a single point of control and integration for MFT, RPA, and big data applications. 

Schedulers and operators will benefit from the tool’s flexibility, and executives can feel safer with robust technology from a long-time market leader that takes care of business continuity.

All of the plugins that come with the new version provide a way to orchestrate different applications without needing to write a script to manage them. Users of Workload Automation v.10 have a doc plugin panel in the web user interface to define specifically what kind of job they want and they just have to provide parameters to orchestrate it. 

The solution offers ERP integrations such as SAP, Oracle E-Business, PeopleSoft and big data integrations like Informatica, Hadoop, Cognos, DataStage, and more. It offers multiple ways to manage message queues, web services, restful APIs, and more. 

Last, but also very important, HCL is automating some RPA tools, offering the possibility to orchestrate the execution of bots, in particular on Automation Anywhere and Blue Prism, with support for IBM RPA planned for this year.

Users will also benefit from AI and ML capabilities. Version 10 offers anomaly detection and identification of patterns in the workload execution.
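HCL has not published the internals of these models, but as a generic, hedged sketch of what duration-based anomaly detection can look like, the snippet below flags job runs that deviate strongly from their historical mean using a simple z-score check. This is a conceptual illustration only, not the product’s actual algorithm.

```typescript
// Generic z-score anomaly check on job run durations (minutes).
// Conceptual illustration – not HCL Workload Automation's implementation.
function isAnomalous(history: number[], latest: number, threshold = 3): boolean {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const stdDev = Math.sqrt(variance);
  if (stdDev === 0) return latest !== mean;
  return Math.abs(latest - mean) / stdDev > threshold;
}

// Example: a nightly data-transfer job that usually takes around 12 minutes.
const pastRuns = [11, 12, 13, 12, 11, 12, 13];
console.log(isAnomalous(pastRuns, 12)); // false – normal run
console.log(isAnomalous(pastRuns, 45)); // true – flag for the operator
```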

“In the future, we also want to take care of noise reduction related to alerts and messages of the product to help our operators fix job issues, provide root cause analysis, and suggest self-healing based on historical data, and to also improve the usability of the dynamic workload console by allowing AI to help customers define objects, find features and so on,” Curzi said.

There is also a new component called the AI Data Advisory available for containers. It uses big data, machine learning, and analytics technologies on Workload Automation data and provides anomaly detection. At that point, it’s possible to use a specific UI that provides historical data analysis for jobs and workstations, empowering operators.

With digital transformation, organizations can take advantage of the most advanced workload scheduling, managed file transfer, and real-time monitoring capabilities for continuous automation. In addition, organizations can keep control of their automation processes from a single point of access and monitoring. For more information, click here.

Start your 90 day free trial and get hands-on experience with a one-stop automation platform, click here.

Content provided by SD Times and HCL Workload Automation

Prevention in the age of the never-ending attack surface https://sdtimes.com/security/prevention-in-the-age-of-the-never-ending-attack-surface/ Wed, 09 Mar 2022 18:46:07 +0000 https://sdtimes.com/?p=46831 When we talk about progress, typically, digital advancement is at the forefront of the conversation. We want everything better, faster, more convenient, more powerful, and we want to do it for less money, time, and risk. For the most part, these “impossible” objectives are eventually met; it might take several years and multiple versions (and … continue reading

The post Prevention in the age of the never-ending attack surface appeared first on SD Times.

]]>
When we talk about progress, typically, digital advancement is at the forefront of the conversation. We want everything better, faster, more convenient, more powerful, and we want to do it for less money, time, and risk. For the most part, these “impossible” objectives are eventually met; it might take several years and multiple versions (and a team of developers who might start a coup if they’re asked to switch gears on feature design one more freaking time), but every day, code is out there changing the world. 

However, with great software expansion comes great responsibility, and the reality is, we’re simply not ready to deal with it from a security perspective. Software development is no longer an island, and when we account for all aspects of software-powered risk – everything from the cloud, embedded systems in appliances and vehicles, our critical infrastructure, not to mention the APIs that connect it all – the attack surface is borderless and out of control. 

We can’t expect a magical time where each line of code is meticulously checked by seasoned security experts – that skills gap is not closing any time soon – but we can, as an industry, adopt a more holistic approach to code-level security.

Let’s explore how we can corral that infinite attack surface with the tools at hand:

Be realistic about the level of business risk (and what you’re willing to accept)

Perfect security is not sustainable, but neither is putting on a blindfold and pretending everything is blue skies. We already know that organizations knowingly ship vulnerable code, and clearly, this is a calculated risk based on time to market with new features and products. 

Security at speed is a challenge, especially in places where DevSecOps isn’t the standard development methodology. However, we only need to look at the recent Log4Shell exploit to discover how relatively small security issues in code have opened up opportunities for a successful attack, and to see that the consequences of those calculated risks of shipping lower-quality code could be far greater than projected.

Get comfortable with being an (access) control freak

An alarming number of costly data breaches are caused by poorly configured cloud storage environments, and the potential of sensitive data exposure resulting from access control errors continues to haunt security teams in most organizations. 

In 2019, Fortune 500 company First American Financial Corp. found this out the hard way. An authentication error – one that was relatively straightforward to remediate – led to the exposure of over 800 million records, including bank statements, mortgage contracts, and photo IDs. Their document links required no user identification or login, rendering them accessible to anyone with a web browser. Worse still, the records were numbered sequentially, meaning a simple change of a digit in the link exposed a new data record. 
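For context, the Python sketch below uses a hypothetical Flask app (not First American's actual system) to show the vulnerable pattern just described – sequential IDs with no identity check – alongside a version that verifies the requester before returning a record.

# Minimal sketch of the access-control failure described above, using a
# hypothetical Flask app and made-up data. Illustrative only.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder for illustration only

DOCUMENTS = {1: {"owner": "alice", "body": "mortgage contract"},
             2: {"owner": "bob",   "body": "bank statement"}}

# Vulnerable pattern: sequential IDs and no identity check, so anyone
# who increments the number in the URL can read the next record.
@app.route("/docs/<int:doc_id>")
def get_document_insecure(doc_id):
    doc = DOCUMENTS.get(doc_id) or abort(404)
    return doc["body"]

# Safer pattern: require a logged-in user and verify ownership before
# returning the record.
@app.route("/secure/docs/<int:doc_id>")
def get_document_secure(doc_id):
    user = session.get("user") or abort(401)
    doc = DOCUMENTS.get(doc_id) or abort(404)
    if doc["owner"] != user:
        abort(403)
    return doc["body"]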

This security issue was identified internally before being exposed in the media; however, the failure to categorize it properly as a high-risk security issue, and to report it to senior management for urgent remediation, caused a fallout that is still being navigated today.

There is a reason that broken access control now sits at the very top of the OWASP Top 10: it’s as common as dirt, and developers need verified security awareness and practical skills to navigate best practices around authentication and privileges in their own builds, ensuring checks and measures are in place to protect against sensitive data exposure. 

The nature of APIs makes them especially relevant and tricky; they are very chatty with other applications by design, and development teams should have visibility across all potential access points. After all, they can’t take into consideration unknown variables and use cases in their quest to provide safer software.
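One low-tech way to get that visibility, sketched below with a hypothetical Flask app and a made-up require_auth marker, is to enumerate every registered route and flag the handlers that were never marked as requiring authentication. This is an illustrative approach, not a prescribed tool or standard practice.

# Sketch: enumerate an API's access points and flag unprotected routes.
# The app, routes and require_auth marker are hypothetical.
from flask import Flask

app = Flask(__name__, static_folder=None)  # disable the default /static route

def require_auth(view):
    """Marker decorator; a real implementation would verify credentials."""
    view._requires_auth = True
    return view

@app.route("/health")
def health():
    return "ok"

@app.route("/accounts")
@require_auth
def accounts():
    return "account data"

def audit_routes(app):
    """List routes whose handlers lack the auth marker."""
    unprotected = []
    for rule in app.url_map.iter_rules():
        view = app.view_functions[rule.endpoint]
        if not getattr(view, "_requires_auth", False):
            unprotected.append(str(rule))
    return unprotected

print(audit_routes(app))  # ['/health'] -- an access point worth reviewing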

Analyze your security program: how much emphasis is placed on prevention?

It makes sense that a large component of a security program is dedicated to incident response and reaction, but many organizations are missing valuable risk minimization by not utilizing all the resources available to prevent a security incident in the first place.

Sure, there are comprehensive stacks of security tooling that assist in uncovering problematic bugs, but almost 50% of companies admitted to shipping code they knew was vulnerable. Time constraints, the complexity of toolsets, and a lack of trained experts to respond to reporting all contribute to what has essentially been a calculated risk. But the fact that code needs to be secured in the cloud, in applications, in API functionality, in embedded systems, in libraries, and across an ever-broadening landscape of technology ensures we will always be one step behind with the current approach.

Security bugs are a human-caused problem, and we can’t expect robots to do all the fixing for us. If your development cohort is not being effectively upskilled – not just a yearly seminar, but proper educational building blocks – then you are always at risk of accepting low-quality code as standard, and the security risk that goes with it. 

Have you overestimated the readiness of your developers?

Developers are rarely assessed on their secure coding abilities, and it’s not their priority (nor is it a KPI in a lot of cases). They cannot be the fall guys for poor security practices if they’re not shown a better path or told it is a measurement of their success. 

Too often, though, there is an assumption within organizations that the guidance provided has been effective in preparing the engineering team to mitigate common security risks. Depending on their training and their awareness of security best practices, they may not be prepared to be that desirable first line of defense (and stop endless injection flaws from clogging up pentest reports). 

The ideal state is that learning pathways of increasing complexity are completed, with the resulting skills verified to ensure they actually work for the developer in the real world. However, this requires a cultural standard where developers are considered from the beginning, and correctly enabled. If we as an industry are going out into the wilderness to defend this vast landscape of code we’ve created ourselves, we’ll need all the help we can get… and there is more right in front of us than we realize.

 

The post Prevention in the age of the never-ending attack surface appeared first on SD Times.

]]>