monitoring Archives - SD Times https://sdtimes.com/tag/monitoring/

vFunction enables continuous monitoring, detection, and drift issues with latest release https://sdtimes.com/monitor/vfunction-enables-continuous-monitoring-detection-and-drift-issues-with-latest-release/ Tue, 04 Apr 2023

The vFunction Continuous Modernization Manager (CMM) platform is now available, enabling software architects to shift left to find and fix application architecture anomalies. vFunction also announced a new version of vFunction Modernization Hub and updates to vFunction Assessment Hub.

CMM observes Java and .NET applications and services to set baselines and monitor for any architectural drift and erosion. It can help companies detect critical architectural anomalies such as new dead code in the application or the emergence of unnecessary code.

“Application architects today lack the architectural observability, visibility, and tooling to understand, track, and manage architectural technical debt as it develops and grows over time,” said Moti Rafalin, the founder and CEO at vFunction. “vFunction Continuous Modernization Manager allows architects to shift left into the ongoing software development lifecycle from an architectural perspective to manage, monitor, and fix application architecture anomalies on an iterative, continuous basis before they erupt into bigger problems.”

The platform also identifies the introduction of a new service or domain and newly identified common classes that can be added to a common library to prevent further technical debt. 

Finally, it monitors and alerts when new dependencies are introduced that expand architectural technical debt, and identifies the highest technical debt classes that contribute to application complexity. Users are notified of changes through Slack, email, and the vFunction Notifications Center, allowing architects to then configure schedules for learning, analysis, and baseline measurements through the vFunction Continuous Modernization Manager.

The latest release of vFunction Modernization Hub 3.0 allows modernization teams to collaborate more effectively by working on different measurements in parallel and later merging them into one measurement. Additionally, the vFunction Assessment Hub now includes a Multi-Application Assessment Dashboard that allows users to track and compare different parameters for hundreds of applications, such as technical debt, aging frameworks, complexity, and state, among others. 

All three products are available in the company’s Application Modernization Platform. 

Vulnerability discovered in Spring that enables DoS attacks https://sdtimes.com/security/vulnerability-discovered-in-spring-that-enables-dos-attacks/ Tue, 28 Mar 2023

An Expression Denial of Service (DoS) vulnerability was found by Code Intelligence in the Spring Framework, a popular Java application development framework. 

“As part of our efforts to improve the security of open-source software, we continuously test open-source projects with our JVM fuzzing engine Jazzer in Google’s OSS-Fuzz. One of our tests yielded a Denial of Service vulnerability in the Spring Framework (CVE-2023-20861),” Dae Glendowne, an application security engineer at Code Intelligence wrote in a blog post. “Spring is one of the most widely used frameworks for developing web applications in Java. As a result, vulnerabilities have an amplified impact on all applications that rely on the vulnerable version.”

In Spring Framework 5.3.x and earlier versions, a StringBuilder is used in a for-loop to create repeated text, which can lead to a legitimate OutOfMemoryError; that behavior can then be used as a “gadget” to easily generate large strings in SpEL expressions, which results in the vulnerability.

By exploiting the vulnerability, it is possible for a user to provide a specially crafted SpEL expression that causes a DoS condition, according to Code Intelligence.

A fix has already been released that adds limit checks for the effective size of repeated text as well as for the length of regular expressions supplied to the matches operator. Users of older, unsupported versions should upgrade to version 6.0.7 or later, or 5.3.26 or later, to get the fix.
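The underlying class of bug is easier to see with a small illustration. The sketch below is not Spring or SpEL code; it is a hypothetical TypeScript helper for a toy expression evaluator, showing why the patch bounds the effective size of repeated text before building it (the limit value here is an assumption):

```typescript
// Hypothetical evaluator helper (not Spring or SpEL code): without a bound,
// an attacker-controlled repetition count inside an expression can allocate
// an enormous string and exhaust memory, the DoS condition described above.
const MAX_REPEATED_LENGTH = 64 * 1024; // assumed cap, analogous to the patch's size check

function repeatForExpression(text: string, times: number): string {
  const effectiveLength = text.length * times;
  if (!Number.isInteger(times) || times < 0 || effectiveLength > MAX_REPEATED_LENGTH) {
    throw new RangeError(`Refusing to build a repeated string of ${effectiveLength} characters`);
  }
  return text.repeat(times); // bounded before any large allocation happens
}

// A few bytes of input, e.g. the equivalent of repeat("a", 1_000_000_000),
// would otherwise attempt a roughly 1 GB allocation inside the evaluator.
```

The patched Spring versions apply analogous checks inside SpEL itself, so applications only need to upgrade rather than add guards of their own.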

New Relic announces JFrog integration to provide a single point of access for monitoring https://sdtimes.com/monitoring/new-relic-announces-jfrog-integration-to-provide-a-single-point-of-access-for-monitoring/ Wed, 15 Mar 2023

Observability company New Relic and DevOps company JFrog today announced an integration to give engineering teams a single point of access to monitor software development operations.

With this integration, users gain real-time visibility into CI/CD pipelines, APIs, and web application development workflows so that DevOps and security leaders can solve software supply chain performance and security issues.

Additionally, site reliability engineering, security, and operations teams can consistently monitor health, security, and usage trends through each stage of the software development lifecycle.

The integration allows engineering teams to track key metrics and generate alerts in New Relic to identify performance degradation so that administrators can manage performance, mitigate risks, and remediate any issues in a single view. 

“Today’s developers need a 360-degree view of applications to monitor and remediate both performance and security, no matter if they’re running on-premises, in the cloud, or at the edge,” said Omer Cohen, executive vice president of strategy at JFrog. “Our integration with New Relic gives DevOps, security, and operations teams the real-time insights needed to optimize their software supply chain environment and accelerate time to market.”

Preconfigured New Relic dashboards also bring a complete view of performance data, artifact usage, and security metrics from JFrog Artifactory and JFrog Xray environments alongside their telemetry data.

To get started, visit the website.

Spotify is introducing new plugins for Backstage https://sdtimes.com/software-development/spotify-is-introducing-new-plugins-for-backstage/ Thu, 15 Dec 2022

Spotify launched its Spotify Plugins for Backstage subscription as an open beta to all Backstage adopters. It contains a bundle of five plugins: Soundcheck, Role-Based Access Control, Skill Exchange, Pulse, and Insights. 

“The Spotify Plugins for Backstage bundle is the next step toward Spotify’s goal to share what we’ve learned with the world. We’re confident these plugins will take each and every developer portal built on Backstage to the next level, by making developers in your organization happier and more productive,” Austin Lamon, Director, GM of Backstage at Spotify wrote in a blog post. 

The first plugin, Soundcheck, codifies engineering best practices to improve quality, reliability, security, and alignment throughout a software ecosystem in a gamified way. It includes an entity page that provides a comprehensive snapshot of a specific entity’s tech health and its pass-or-fail checks, and an overview page that provides a grid view of all entities owned by specific groups inside the organization.

The next plugin, Role-Based Access Control, helps users manage access and protect their data in Backstage to meet evolving security and compliance needs. 

Skill Exchange is an internal marketplace to promote and seek out unique, on-the-job learning opportunities for developers and other members of the tech ecosystem. It offers search and quick navigation to learning opportunities. 

Pulse lets users track productivity and satisfaction metrics and visualize them in Backstage.

Lastly, Insights enables developers to see how the organization is performing in Backstage, highlighting where on the roadmap to double down and what to deprecate. 

“These plugins have been put through endless hours of internal use, iteration, and improvement. We know they work because they’ve done amazing things for us — from Soundcheck bringing test flakiness to below 1% to Skill Exchange powering thousands of collaborative hacks between teams,” Lamon added. 

Edge Delta announces free edition https://sdtimes.com/software-development/edge-delta-announces-free-edition/ Tue, 15 Nov 2022

Edge Delta, provider of observability tools, today introduced a free version of its product in order to bring users intelligent and automated monitoring and troubleshooting for applications and services running in Kubernetes.

The free edition is designed to deliver quick time to value and to let engineers spend their time on core tasks. 

Additionally, it works to detect “unknown unknowns,” or anomalies and issues that an organization has not built rules or logic to catch.

“Developers want two main things from their observability tools – an easy and credible way to check the health of their services; and a quick and seamless way to troubleshoot any issues that arise,” says Ozan Unlu, CEO and founder of Edge Delta. “Traditional observability tools weren’t built to prioritize the developer experience, requiring a lot of time and effort that takes developers away from building great software. We want to give this valuable time back to them by providing an elegant and free solution for monitoring and troubleshooting applications running in Kubernetes environments.”

According to the company, this release is best suited for smaller, resource-limited development teams. A few of the core benefits include:

  • Automating manual toil to help development teams with continuous delivery by enabling observability tooling to keep up with modern software delivery
  • Reducing the noise of log data to help developers make sense of datasets by running analytics on log data as it is created
  • Finding and troubleshooting every issue to help teams understand root causes

To get started with the free edition, visit the website.

AI needs automated testing, monitoring https://sdtimes.com/ai/ai-needs-automated-testing-monitoring/ Fri, 04 Nov 2022

In the 1990s, when software started to become ubiquitous in the business world, quality was still a big issue. It was common for new software and upgrades to be buggy and unreliable, and rollouts were difficult.

Software testing was mostly a manual process, and the people developing the software typically also tested it. Seeing a need in the market, consultancies started offering outsourced software testing. While it was still primarily manual, it was more thorough. Eventually, automated testing companies emerged, performing high-volume, accurate feature and load testing. Soon after, automated software monitoring tools emerged, to help ensure software quality in production. Eventually, automated testing and monitoring became the standard, and software quality soared, which of course helped accelerate software adoption.

AI model development is at a similar inflection point. AI and machine learning technologies are being adopted at a rapid pace, but quality varies. Often, the data scientists developing the models are also the ones manually testing them, and that can lead to blind spots. Testing is manual and slow. Monitoring is nascent and ad hoc. And AI model quality is suffering, becoming a gating factor for the successful adoption of AI. In fact, Gartner estimates that 85 percent of AI projects fail.

The stakes are getting higher. While AI was first primarily used for low-stakes decisions such as movie recommendations and delivery ETAs, more and more often, AI is now the basis for models that can have a big impact on people’s lives and on businesses. Consider credit scoring models that can impact a person’s ability to get a mortgage, and the Zillow home-buying model debacle that led to the closure of the company’s multi-billion dollar line of business buying and flipping homes. Many organizations learned too late that COVID-19 broke their models – changing market conditions left models with outdated variables that no longer made sense (for instance, basing credit decisions for a travel-related credit card on volume of travel, at a time when all non-essential travel had halted).

Not to mention, regulators are watching. Enterprises must do a better job with AI model testing if they want to gain stakeholder buy-in and achieve a return on their AI investments. And history tells us that automated testing and monitoring is how we do it.

Emulating testing approaches in software development

First, let’s recognize that testing traditional software and testing AI models require significantly different processes. That is because AI bugs are different. AI bugs are complex statistical data anomalies (not functional bugs), and the AI blackbox makes it really hard to identify and debug them. As a result, AI development tools are immature and not prepared for dealing with high-stakes use cases.

AI model development differs from software development in three important ways:

– It involves iterative training/experimentation vs. being task- and completion-oriented;

– It’s predictive vs. functional; and

– Models are created via black-box automation vs. designed by humans.

Machine learning also presents unique technical challenges that aren’t present in traditional software – chiefly:

– Opaqueness/Black box nature

– Bias and fairness

– Overfitting and unsoundness

– Model reliability

– Drift

The training data that AI and ML model development depend on can also be problematic. In the software world, you could purchase generic software testing data, and it could work across different types of applications. In the AI world, training data sets need to be specifically formulated for the industry and model type in order to work. Even synthetic data, while safer and easier to work with for testing, has to be tailored for a purpose.
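To make one item from the list above concrete, here is a minimal sketch of a drift check, written in TypeScript purely for illustration; the bucketing, the numbers, and the 0.2 threshold are assumptions rather than a prescribed method:

```typescript
// Illustrative drift check (a sketch, not tied to any specific ML toolkit):
// the Population Stability Index (PSI) compares how a model input is
// distributed in production against the training-time baseline.
function populationStabilityIndex(baselineCounts: number[], productionCounts: number[]): number {
  const sum = (xs: number[]) => xs.reduce((a, b) => a + b, 0);
  const baselineTotal = sum(baselineCounts);
  const productionTotal = sum(productionCounts);
  const eps = 1e-6; // avoids log(0) for empty buckets

  return baselineCounts.reduce((psi, baselineCount, i) => {
    const expected = baselineCount / baselineTotal + eps;
    const actual = productionCounts[i] / productionTotal + eps;
    return psi + (actual - expected) * Math.log(actual / expected);
  }, 0);
}

// Same ten buckets of one feature, captured at training time and last week.
const baseline = [120, 340, 560, 410, 260, 150, 90, 40, 20, 10];
const production = [60, 180, 420, 500, 390, 240, 130, 60, 15, 5];
if (populationStabilityIndex(baseline, production) > 0.2) {
  console.log("Drift detected on this feature - review before it degrades predictions");
}
```

A rising PSI on a key input is exactly the kind of signal that automated monitoring, rather than an occasional manual review, is positioned to catch.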

Taking proactive steps to ensure AI model quality

So what should companies leveraging AI models do now? Take proactive steps to work automated testing and monitoring into the AI model lifecycle. A solid AI model quality strategy will encompass four categories:

– Real-world model performance, including conceptual soundness, stability/monitoring and reliability, and segment and global performance.

– Societal factors, including fairness and transparency, and security and privacy

– Operational factors, such as explainability and collaboration, and documentation

– Data quality, including missing and bad data

For AI models to become ubiquitous in the business world – as software eventually did – the industry has to dedicate time and resources to quality assurance. We are nowhere near the five nines of quality that’s expected for software, but automated testing and monitoring is putting us on the path to get there.

Instilling QA in AI Model Development https://sdtimes.com/ai/instilling-qa-in-ai-model-development/ Mon, 17 Oct 2022

In the 1990s, when software started to become ubiquitous in the business world, quality was still a big issue. It was common for new software and upgrades to be buggy and unreliable, and rollouts were difficult. Software testing was mostly a manual process, and the people developing the software typically also tested it. Seeing a need in the market, consultancies started offering outsourced software testing. While it was still primarily manual, it was more thorough. Eventually, automated testing companies emerged, performing high-volume, accurate feature and load testing. Soon after, automated software monitoring tools emerged, to help ensure software quality in production. Eventually, automated testing and monitoring became the standard, and software quality soared, which of course helped accelerate software adoption.

AI model development is at a similar inflection point. AI and Machine Learning technologies are being adopted at a rapid pace, but quality varies. Often, the data scientists developing the models are also the ones manually testing them, and that can lead to blind spots. Testing is manual and slow. Monitoring is nascent and ad hoc. And AI model quality is suffering, becoming a gating factor for the successful adoption of AI. In fact, Gartner estimates that 85 percent of AI projects fail.

The stakes are getting higher. While AI was first primarily used for low-stakes decisions such as movie recommendations and delivery ETAs, more and more often, AI is now the basis for models that can have a big impact on people’s lives and on businesses. Consider credit scoring models that can impact a person’s ability to get a mortgage, and the Zillow home-buying model debacle that led to the closure of the company’s multi-billion dollar line of business buying and flipping homes. Many organizations learned too late that Covid broke their models – changing market conditions left models with outdated variables that no longer made sense (for instance, basing credit decisions for a travel-related credit card on volume of travel, at a time when all non-essential travel had halted).

Not to mention, regulators are watching.

Enterprises must do a better job with AI model testing if they want to gain stakeholder buy-in and achieve a return on their AI investments. And history tells us that automated testing and monitoring is how we do it.

Emulating testing approaches in software development

First, let’s recognize that testing traditional software and testing AI models require significantly different processes. That is because AI bugs are different. AI bugs are complex statistical and data anomalies (not functional bugs), and the AI black box makes it really hard to identify and debug them. As a result, AI development tools and methodologies are immature and not prepared for dealing with high-stakes use cases.

AI model development differs from software development in three important ways:

  • It involves iterative training/experimentation vs. being task- and completion-oriented;
  • It’s predictive vs. functional; and
  • Models are created via black-box automation vs. designed by humans.

Machine learning also presents unique technical challenges that aren’t present in traditional software – chiefly:

  • Opaqueness/Black box nature
  • Bias and fairness
  • Overfitting and unsoundness
  • Model reliability
  • Drift

The training data that AI and ML model development depend on can also be problematic. In the software world, you could purchase generic software testing data, and it could work across different types of applications. In the AI world, training data sets need to be specifically formulated for the industry and model type in order to work. Even synthetic data, while safer and easier to work with for testing, has to be tailored for a purpose. 

Taking proactive steps to ensure AI model quality

So what should companies leveraging AI models do now? Take proactive steps to work automated testing and monitoring into the AI model lifecycle. 

A solid AI model quality strategy will encompass four categories:

  • Real-world model performance, including conceptual soundness, stability/monitoring and reliability, and segment and global performance.
  • Societal factors, including fairness and transparency, and security and privacy
  • Operational factors, such as explainability and collaboration, and documentation
  • Data quality, including missing and bad data

All are crucial to ensuring AI model quality. 
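As a small illustration of the data quality category, a pre-scoring gate might look like the sketch below; the field names, valid ranges, and 5 percent threshold are assumptions for illustration, not part of any particular product:

```typescript
// Hypothetical pre-scoring data quality gate: refuse to score a batch when
// too many rows have missing or out-of-range values for the model's inputs.
interface ScoringRow {
  income?: number; // assumed model input
  age?: number;    // assumed model input
}

function badRowFraction(rows: ScoringRow[]): number {
  if (rows.length === 0) return 0;
  const isBad = (row: ScoringRow) =>
    row.income === undefined || row.income < 0 ||
    row.age === undefined || row.age < 18 || row.age > 120;
  return rows.filter(isBad).length / rows.length;
}

function assertBatchQuality(rows: ScoringRow[], maxBadFraction = 0.05): void {
  const fraction = badRowFraction(rows);
  if (fraction > maxBadFraction) {
    throw new Error(`Refusing to score: ${(fraction * 100).toFixed(1)}% of rows have missing or bad data`);
  }
}
```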

For AI models to become ubiquitous in the business world – as software eventually did – the industry has to dedicate time and resources to quality assurance. We are nowhere near the five nines of quality that’s expected for software, but automated testing and monitoring is putting us on the path to get there.

Using GPT-3 for root cause incident summarization of incidents https://sdtimes.com/monitor/using-gpt-3-for-root-cause-incident-summarization-of-incidents/ Wed, 08 Sep 2021

The complexity of today’s distributed microservices applications makes it tough to track down the root cause when a problem occurs. The time-proven method of drilling down on monitoring dashboards and then digging into logs simply takes too long. Hunting through huge volumes of logs is tedious and interpreting them is difficult. It also requires an enormous amount of skill and experience to understand what the logs mean and to identify the significant factors that relate to the root cause. Worse, this kind of approach ties up the most critical engineering and DevOps resources, preventing them from doing something that could be more valuable to the business. 

It’s no wonder machine learning (ML) applied to logs is gaining momentum. It turns out that when an application problem occurs, the patterns in the logs will change in a noticeable way. Using the right approach, the ML can find these anomalous patterns and distill them into a small sequence of log lines that explain the root cause. Imagine the time savings of having to only review 20 log lines curated by the ML, instead of hunting through the many millions of log lines that were generated while the problem took place. Using ML on logs completely revolutionizes the troubleshooting process – speeding up incident resolution time and freeing up key engineers to work on new features instead of fighting fires.
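As a rough sketch of the idea (illustrative only, not how any particular product implements it), assume raw log lines have already been grouped into templates; each template can then be scored by how rare it was during a healthy baseline period, and a window whose total rarity spikes is surfaced along with its handful of unusual lines. The smoothing and threshold below are assumptions:

```typescript
// Score a log template by how surprising it is relative to a healthy baseline.
// Unseen templates get the highest finite score thanks to add-one smoothing.
function rarityScore(template: string, baseline: Map<string, number>, baselineTotal: number): number {
  const count = baseline.get(template) ?? 0;
  return -Math.log((count + 1) / (baselineTotal + 1));
}

// Flag a time window whose accumulated rarity exceeds an assumed threshold,
// then hand its rarest templates to a human (or, below, to a language model).
function anomalousTemplates(
  windowTemplates: string[],
  baseline: Map<string, number>,
  baselineTotal: number,
  threshold = 50 // assumed; tuned against historical incidents in practice
): string[] {
  const scored = windowTemplates.map((t) => ({ t, score: rarityScore(t, baseline, baselineTotal) }));
  const total = scored.reduce((acc, s) => acc + s.score, 0);
  if (total <= threshold) return [];
  return scored.sort((a, b) => b.score - a.score).slice(0, 20).map((s) => s.t);
}
```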

While ML transforms the process of hunting through logs, it does not fully solve the challenge for all users. Even with the best machine learning techniques, there is a last mile problem: a skilled human with the right knowledge of the part of the application or infrastructure that has failed is normally required to interpret the log lines. Think of the possibilities if the reliance on key engineering resources could be eliminated by using AI to interpret those same log lines. 

That’s where a natural language model such as OpenAI’s GPT-3 comes in. The log lines, together with an appropriate prompt, are processed by GPT-3 and what is returned is a simple, plain language sentence that summarizes the problem. Engineers at my company have been experimenting with GPT-3 for the past six months, and, although not perfect, the results are nothing short of amazing. Here are a few examples:

  • The memory cgroup was out of memory, so the kernel killed process **** and its child ****.
  • The file system was corrupted.
  • The cluster was under heavy load, and the scheduler was unable to schedule the pod.

In each case, the right engineer could have come to the same conclusion by analyzing the root cause reports for a few minutes. But what the above shows is that we no longer need to rely on having the “right engineer” available as the front line for incident response and resolution. Now, even the most junior member of a team can quickly get a sense of the problem, triage the situation and assign the incident to a suitable engineer to remediate it. 
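A minimal sketch of that step might look like the following; the endpoint URL, model name, prompt wording, and response shape are assumptions here, so treat it as an outline of the integration rather than the exact API (check the provider’s current documentation):

```typescript
// Hypothetical call to a completion-style language model API: pass the
// ML-selected log lines plus a prompt, get back a one-sentence summary.
async function summarizeRootCause(logLines: string[]): Promise<string> {
  const prompt =
    "Summarize the root cause of the incident described by these log lines " +
    "in one plain-English sentence:\n\n" + logLines.join("\n") + "\n\nSummary:";

  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // assumed env var
    },
    body: JSON.stringify({
      model: "text-davinci-003", // assumed model name
      prompt,
      max_tokens: 60,
      temperature: 0, // deterministic output, so repeated runs agree
    }),
  });

  const data = await response.json();
  return data.choices[0].text.trim(); // assumed response shape
}
```

Keeping the temperature at zero matters here: different responders looking at the same incident should see the same summary.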

There’s also another impactful use case that plain language summarization opens up to all users – proactive incident detection. The same machine learning technique that can uncover the set of log lines that explain root cause when a problem has occurred can also be used to proactively detect the presence of any kind of problem, even if symptoms have not yet manifested themselves. This approach can often uncover subtle bugs and other conditions early, allowing engineers to improve product quality and proactively fix the issues before they cause more widespread production problems.

For this to work, the ML needs to constantly scan incoming log streams for the presence of anomalous patterns that indicate potential problems. This allows it to catch almost any kind of problem, even new or rare ones that are otherwise hard to detect. However, not all incidents that it detects will be related to problems that you care about. For example, upgrading a microservice can cause significant changes in log patterns that the machine learning will highlight; however, this is not an actual problem. In order to determine whether a proactively detected problem is important, someone needs to review the small set of anomalous log lines. Generally, this extra effort will be well worthwhile if it prevents a problem from manifesting into a critical incident.

Once again, plain language summarization can be a tremendous help. Instead of the proactive review process being the task of a senior engineering team with the correct level of understanding of the product and logs, it can be carried out by someone of almost any skill level just by glancing at the short English language summaries that are produced. Very quickly, the important “proactive” incidents can be surfaced and dealt with swiftly.  

The foundation and usefulness of plain language summarization comes from the machine learning technique that is used to analyze the logs in the first place. If this is not done well, then summarization will not work at all since the underlying data will be flawed. However, if ML can find the right log lines, then making them available in simple summary form significantly increases the usefulness and audience of ML-based log analysis.

Together, unsupervised ML-based root cause analysis and AI-based plain language summarization provide a more complete approach to automated troubleshooting. They unburden development teams of the painful task of hunting through log files when an incident occurs, allowing them to work on far more interesting and important problems.

SD Times news digest: TypeScript 4.4 beta, Rust support improvements in Linux kernel, Sauce Labs acquires Backtrace https://sdtimes.com/msft/sd-times-news-digest-typescript-4-4-beta-rust-support-improvements-in-linux-kernel-sauce-labs-acquires-backtrace/ Tue, 06 Jul 2021

Some of the major highlights of the TypeScript 4.4 beta are control flow analysis of aliased conditions, symbol and template string pattern index signatures, and more.

With control flow analysis of aliased conditions enabled, developers don’t have to convince TypeScript of a variable’s type whenever it is used because the type-checker leverages something called control flow analysis to deduce the type within every language construct.

TypeScript index signatures let users describe objects where every property has to have a certain type, forming dictionary-like types where string keys can be used to index into them with square brackets; the 4.4 beta extends these signatures to symbol keys and template string patterns.
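A small, self-contained example of the two features (identifier names here are illustrative):

```typescript
// Control flow analysis of aliased conditions: the type checker now narrows
// `value` based on a typeof check that was saved into a constant beforehand.
function pad(value: string | number, width: number): string {
  const isString = typeof value === "string";
  if (isString) {
    return value.padStart(width); // value is narrowed to string here
  }
  return value.toString().padStart(width); // and treated as number here
}

// Template string pattern index signatures: every key matching `data-${string}`
// must hold a string, while other declared properties keep their own types.
interface Options {
  width?: number;
  [optName: `data-${string}`]: string | undefined;
}

const opts: Options = {
  width: 80,
  "data-theme": "dark",
};
```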

Additional details on all of the highlights in the new version are available here.

Rust support improvements in Linux kernel 

The Linux kernel received several major improvements to its overall Rust support, including the removal of panicking allocations, added support for the beta compiler, and added testing.

The goal with the improvements is to have everything the kernel needs in the upstream ‘alloc’ and to drop it from the kernel tree. ‘Alloc’ is now compiled with panicking allocation methods disabled, so that they cannot be used within the kernel by mistake.

As for compiler support, Linux is now using the 1.54-beta1 version as its reference compiler. At the end of this month, `rustc` 1.54 will be released, and the kernel will move to that version as the new reference. 

Additional details on all of the support improvements are available here.

Sauce Labs acquires Backtrace

Sauce Labs announced that it has acquired Backtrace, a provider of error monitoring solutions for software teams. 

 “Combined with our recent acquisitions of API Fortress, AutonomIQ, and TestFairy, the addition of Backtrace extends Sauce Labs solutions to meet every stage of the development journey. We’re thrilled to welcome the talented people and products of Backtrace and look forward to supporting their high-quality innovation as part of the Sauce Labs team,” said Aled Miles, president and CEO of Sauce Labs.

Backtrace offers a cross-platform error monitoring solution for desktop, mobile, devices, game consoles, and server platforms that helps organizations reduce debugging time and improve software quality.

Apache weekly update

Last week at the Apache Software Foundation (ASF) saw the release of Apache Camel 3.11, which includes a new ‘camel-kamelet-main’ component intended for developers to try out or develop custom Kamelets, a ‘getSourceTimestamp’ API on ‘Message’ and more.

Apache MetaModel, which provided a common interface for discovery and exploration of metadata and for querying different types of data sources, has been retired. 

Also, Apache Druid was found to have a vulnerability that allowed authenticated users to read data from sources other than those intended.

Other new releases last week included Apache Geode 1.13.3 and 1.12.3. Additional details on all news from the ASF are available here.  

Bugsnag’s new error monitoring features aim to simplify app dev https://sdtimes.com/monitor/bugsnags-new-error-monitoring-features-aim-to-simplify-app-dev/ Wed, 26 May 2021

Bugsnag, the application stability management company owned by SmartBear, announced new error monitoring capabilities designed to improve collaboration and team alignment. The features are designed to support code ownership and accelerate the debugging process, especially for large engineering teams, according to the company. 

“Most apps have a variety of engineers, including separate engineering teams, working from a single code base. When something goes wrong, all engineers are alerted about the software bug. They then have to figure out where the error occurred and who is responsible for fixing it, which is a cumbersome and inefficient process,” said James Smith, the senior vice president of the Bugsnag Product Group at SmartBear. “Bugsnag’s new features eliminate this guesswork and deliver true code ownership so engineering teams can easily identify, own, prioritize and remedy bugs.”

Among the new features is the NDK stack frames feature, which provides complete visibility into the section of code that was running when an Application Not Responding (ANR) error occurred. It also provides visibility into iOS app hangs through stack traces and breadcrumbs.

Code ownership is supported with automatic error alerts that send notifications about bugs to specific teams that own the problematic part of the codebase. 

With the new capabilities, teams can select the type of breadcrumb or search it through keywords to investigate what a user was doing leading up to the error. This way, they can check whether it happened on a jailbroken iOS or rooted Android device. They can also see which operating systems the error is affecting for multi-platform applications. 

New diagnostics can show whether a crash occurred while an app was launching and they can prioritize fixing high-impact crashes. Engineering teams can also snooze errors that are not causing a significant enough problem. 

Also, the updated version of Bugsnag can now integrate with Microsoft Teams.
