Edge Archives - SD Times
https://sdtimes.com/tag/edge/

Microsoft Bing AI moves to Open Preview, eliminating waitlist
https://sdtimes.com/microsoft/microsoft-bing-ai-moves-to-open-preview-eliminating-waitlist/
Thu, 04 May 2023

Microsoft announced that it is opening Bing’s new AI chat feature to more people by moving from limited preview to open preview and eliminating the trial waitlist, part of its initiative for the next generation of AI-powered Bing and Edge. Users can simply sign into Bing with their Microsoft account.

Microsoft also announced that it’s moving from text-only search and chat to a more visual experience, with rich image and video answers and new multimodal support coming shortly. Users can get more visual answers, including charts and graphs, and updated formatting of answers to help them find information more easily. Image Creator has also been expanded to all languages in Bing.

Microsoft Edge will be redesigned with a sleeker and enhanced UI and is adding the ability to incorporate visual search in chat so that users can upload images and search the web for related content.

Chat history lets users pick up where they left off and return to previous conversations in Bing chat. Chats can then be moved to the Edge sidebar to keep them on hand while browsing.

Microsoft stated that it will soon add export and share functionalities into chat for times when people want to easily share conversations with others on social media.

“The new AI-powered Bing has already helped people more easily find or create what they are looking for, making chat a great tool for both understanding and taking action. The integration of Image Creator saves you time by completing the task of creating the image you need right within chat,” Yusuf Mehdi, corporate vice president and consumer chief marketing officer, wrote in a blog post that contains additional details on the new features.

The post Microsoft Bing AI moves to Open Preview, eliminating waitlist appeared first on SD Times.

SD Times Open-Source Project of the Week: Luos
https://sdtimes.com/software-development/sd-times-open-source-project-of-the-week-luos/
Fri, 16 Sep 2022

Luos is an open-source, lightweight library that enables developers to build and scale distributed edge and embedded software.

Developers can create portable, scalable packages that they can share with teams and communities. The project’s engine encapsulates embedded features in services with APIs, providing direct access to hardware.

Remote control enables users to access the topology and routing table from anywhere, and they can monitor their devices with several SDKs, including Python and TypeScript, with a browser app and others coming soon. Luos detects all services in a system and lets developers access and adapt any feature anywhere.

“Most of the embedded developments are made from scratch. By using the Luos engine, you will be able to capitalize on the development you, your company, or the Luos community already did. The re-usability of features encapsulated in Luos engine services will fasten the time your products reach the market and reassure the robustness and the universality of your applications,” the developers behind the project wrote on its website. 

Additional features that Luos can power include event-based polling, service alias management, data auto-update, self-healing, and more.

The post SD Times Open-Source Project of the Week: Luos appeared first on SD Times.

Akka switches to Business Source License version 1.1
https://sdtimes.com/software-development/akka-switches-to-business-source-license-version-1-1/
Wed, 07 Sep 2022

Lightbend announced that it is switching the license for Akka, a set of open-source libraries for designing scalable, resilient systems that span cores and networks.

The project had run under the Apache 2.0 license, which, while still the de facto license for the open-source community, has become increasingly risky when a small company solely carries the maintenance effort, according to a blog post by Jonas Bonér, CEO and founder of Lightbend.

The new license, Business Source License (BSL) v1.1, freely allows for using code for development and other non-production work such as testing. Production use of the software now requires a commercial license from Lightbend, the company behind Akka. 

“Sadly, open source is prone to the infamous ‘Tragedy of the commons’, which shows that we are prone to act in our self-interest, contrary to the common good of all parties, abdicating responsibility if we assume others will take care of things for us. This situation is not sustainable and one in which everyone eventually loses,” Bonér wrote. “So what does sustainable open source look like? I believe it’s where everyone—users and developers—contributes and are in it together, sharing accountability and ownership.”

Bonér added that BSL v1.1 provides an incentive for large businesses to contribute back to Akka and to Lightbend. 

The BSL v1.1 license also includes an additional usage grant to cover open-source usage of Akka, such as in the Play Framework, and each release reverts to Apache 2.0 after three years.

The commercial license for Akka will be available at no charge for companies with less than $25 million in annual revenue.

“By enabling early-stage companies to use Akka in production for free, we hope to continue to foster the innovation synonymous with the startup adoption of Akka,” Bonér wrote. 

Moving forward, Akka will also gain new short-term features, security fixes, JDK and Scala support, and long-term innovation projects such as Akka Edge, which provides a feature set for building edge-native applications. 

 

The post Akka switches to Business Source License version 1.1 appeared first on SD Times.

Section’s new Kubernetes Edge interface allows organizations to deploy apps to the edge
https://sdtimes.com/softwaredev/sections-new-kubernetes-edge-interface-allows-organizations-to-deploy-apps-to-the-edge/
Tue, 05 Apr 2022

Section announced a new Kubernetes Edge Interface (KEI) to allow organizations to deploy application workloads across a distributed edge as if it were a single cluster. 

With the new interface, development teams can use familiar tools such as kubectl or Helm and deploy applications to a multi-cloud, multi-region and multi-provider network. 

Section’s patented Adaptive Edge Engine (AEE) employs policy-driven controls to automatically tune, shape and optimize application workloads in the background across Section’s Composable Edge Cloud.

“Edge deployment is simply better than centralized data centers or single clouds in most every important metric – performance, scale, efficiency, resilience, usability, etc.,” said Stewart McGrath, Section’s CEO. “Yet organizations historically put off edge adoption because it’s been complicated. With Section’s KEI, teams don’t have to change tools or workflows; the distributed edge effectively becomes a cluster of Kubernetes clusters and our AEE automation and Composable Edge Cloud handles the rest.”

Developers can use it to configure service discovery that routes users to the best container instance, define complex applications such as composite ones consisting of many containers, set system resource allocations, and much more.
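Because KEI presents the distributed edge as ordinary Kubernetes, an edge workload is described with a standard Deployment manifest and applied with kubectl or Helm as usual. A minimal sketch building such a manifest in Python; the names and values here are illustrative and not Section-specific:

```python
# Hypothetical sketch: a Kubernetes Edge Interface accepts ordinary
# Kubernetes objects, so an edge workload is just a standard apps/v1
# Deployment. Names and values below are illustrative, not Section's API.

def edge_deployment(name, image, replicas=3, port=80):
    """Build a standard apps/v1 Deployment manifest as a plain dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }
```

Serialized to YAML and applied with `kubectl apply -f`, the same manifest that would target a single cluster instead targets the distributed edge through KEI.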

The post Section’s new Kubernetes Edge interface allows organizations to deploy apps to the edge appeared first on SD Times.

SD Times Open-Source Project of the Week: WireMock
https://sdtimes.com/softwaredev/sd-times-open-source-project-of-the-week-wiremock/
Fri, 10 Dec 2021

WireMock is a simulator for HTTP-based APIs that enables users to stay productive when an API that one depends on doesn’t exist or is incomplete. It supports the testing of edge use cases and failure modes that the real API won’t reliably produce. 

The company behind the project, MockLab, was recently acquired by UP9. The rapid growth of microservice adoption and the booming API economy have driven WireMock’s popularity to 1.6 million monthly downloads.

“The number of APIs created every day is growing exponentially. Developers need tools to ensure the reliability and security of their APIs, while still staying productive,” said Alon Girmonsky, CEO and co-founder of UP9. “WireMock is a significant player in the API economy, and by combining it with UP9’s existing API monitoring and traffic analysis capabilities, modern cloud-native developers can now develop faster and find problems quicker.”

Users can run WireMock from within their Java application, JUnit test, Servlet container, or as a standalone process.

The project can also match request URLs, methods, headers, cookies, and bodies using a wide variety of strategies. 
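WireMock itself is a Java project, but the core idea, a local HTTP server that returns canned responses for matched requests, can be sketched with Python’s standard library alone. This is a conceptual stand-in, not WireMock’s actual API:

```python
# Conceptual sketch of API stubbing, the idea behind WireMock, using only
# Python's standard library. WireMock's real (Java) API offers far richer
# matching and fault injection; nothing here is its actual interface.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses keyed by (method, path): the "stub mappings".
STUBS = {
    ("GET", "/users/42"): (200, {"id": 42, "name": "Ada"}),
    ("GET", "/flaky"): (503, {"error": "service unavailable"}),
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = STUBS.get(("GET", self.path), (404, {"error": "no stub"}))
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_stub_server():
    # Port 0 asks the OS for any free port; read it back via server_port.
    server = HTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A test would point its HTTP client at `http://127.0.0.1:<server_port>` instead of the real API, exercise both the happy path and the simulated 503 failure, then call `server.shutdown()` when done.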

WireMock is distributed via Maven Central and can be included in your project using common build tools’ dependency management.

“With the rise in popularity of microservices along with supplier, partner and cloud APIs as essential building blocks of modern software, developers need tools that help manage the complexity and uncertainty this brings,” said Tom Akehurst, creator of WireMock and CTO of UP9. “WireMock allows developers to quickly create mocks (or simulations) of APIs they depend on, allowing them to keep building and testing when those APIs haven’t been built yet, don’t provide (reliable!) developer sandboxes, or cost money to call. It simulates faults and failure modes that are hard to create on demand and can be used in many environments, from unit test on a laptop all the way up to a high-load stress test.”

Additional details on WireMock are available on the project’s website.

 

The post SD Times Open-Source Project of the Week: WireMock appeared first on SD Times.

Developers are gaining more tools for the edge
https://sdtimes.com/iot/developers-are-gaining-more-tools-for-the-edge/
Mon, 04 Oct 2021

The edge is growing, and cloud providers know it. That’s why they’re creating more tools to help with embedded programming. 

According to IDC’s research, edge computing is growing: 73% of companies in 2021 said edge computing is a strategic initiative for them and they are already making investments to adopt it. Last year especially saw a lot of that growth, according to Dave McCarthy, the research vice president of Cloud and Edge Infrastructure Services at IDC.

Major cloud providers have already realized the potential of the technology and are adding edge capabilities to their toolkits, which is changing the way developers can build for it.

“AWS was trying to ignore what was happening in the on-premises and edge world thinking that everything would go to the cloud,” McCarthy said. “So they finally kind of realized that in some cases, cloud technologies, the cloud mindset, I think works in a lot of different places, but the location of where those resources are has to change.”

For example, in December 2020, AWS came out with AWS Wavelength, which is a service that enables users to deliver ultra-low latency applications for 5G devices. In a way, AWS is embedding some of their cloud platform inside of telco networks such as Verizon, McCarthy explained. 

Also, last year, AWS rewrote Greengrass, an open-source edge runtime, to be more friendly to cloud-native types of environments. Meanwhile, Microsoft is doing the same with its own IoT platform. 

“This distribution of infrastructure is becoming more and more relevant. And the good news for developers is it gives them so much more flexibility than they had in the past; flexibility about saying, I don’t have to compromise anymore because my cloud native kind of development strategy is limited to certain deployment locations. I can go all-in on cloud native, but now I have that freedom to deploy anywhere,” McCarthy said. 

Development for these types of devices has also significantly changed since its early stages. 

At first, the world of embedded systems was that intelligent devices gathered info on the world. Then, AI was introduced and all of that data that was acquired began being processed in the cloud. Now, the world of edge computing is about moving real-time analysis to happen at the edge. 

“Where edge computing came in was to marry the two worlds of IoT and AI or just this intelligence system concept in general, but to do it completely autonomously in these locations,” McCarthy said. “Not only were you collecting that data, but you had the ability to understand it and take action, all within that sort of edge location. That opened the door to so many more things.”

In the early days of the embedded software world, everything seemed very unique, which required specialized frameworks and a firm understanding of how to develop for embedded operating systems. That has now changed with the adoption of standardized development platforms, according to McCarthy. 

Support for edge deployments

A lot more support for deployments at the edge can now be seen in cloud native and container-based applications.

“The fact that the industry, in general, has started to align around Kubernetes as being the main orchestration platform for being able to do this just means that now it’s easier for developers to think about building applications using that microservices mindset, they’re putting that code in containers with the ability to place those out at the edge,” McCarthy said. “Before, if you were an embedded developer, you had to have this specialized skill set. Now, this is becoming more available to a wider set of developers that maybe didn’t have that background.”

Some of the more traditional enterprise environments, like VMware or Red Hat, also have been looking at how to extend their platforms to the edge. Their strategy, however, has been to take their existing products and figure out how to make them more edge-friendly. 

In many cases, that means supporting smaller configurations and handling situations where the edge environment might be disconnected.

This is different from the approach of a company like SUSE, which has a strategy to create some edge-specific things, according to McCarthy. SUSE, for example, has created a micro version of its Enterprise Linux specifically designed for the edge.

“These are two different ways of tackling the same problem,” McCarthy said. “Either way, I think they’re both trying to attack this from that perspective of let’s create standardization with familiar tools so that developers don’t have to relearn how to do things. In some respects, what you’re doing is abstracting some of the complexity of what might be at the edge, but give them that flexibility of deployment.”

This standardization has proven essential because the further you move toward the edge, the greater the diversity in hardware types. Depending on the type of sensors being dealt with, there can be issues with communication protocols and data formats.

This happens especially in vertical industries such as manufacturing that already have legacy technology that needs to be brought into this new world, McCarthy said. However, this level of uniqueness is becoming rarer, with fewer one-off systems and more standardization.

Development requirements differ 

Developing for the edge is different from developing for other form factors because edge devices have a longer lifespan than hardware found in a data center, something that has always been true in the embedded world. Developers now have to think about the longer lifespan of both the hardware and the software that sits on top of it.

At the same time, though, the fast pace of today’s development world has driven the demand to deliver new features and functionalities faster, even for these devices, according to McCarthy. 

That’s why the edge space has seen the rise of device management capabilities offered by cloud providers, which give enterprises the ability to turn off a device, update its firmware, or change its configuration.

In addition to managing the device life cycle, device management also helps with security, because it offers guidance on what data to pull back to a centralized location versus what can potentially be left out on the edge.

“This is so you can get a little bit more of that agility that you’ve seen in the cloud, and try to bring it to the edge,” McCarthy said. “It will never be the same, but it’s getting closer.”

Decentralization a challenge

Developing for the edge still faces challenges due to its decentralized nature, which requires more monitoring and control than a traditional centralized computing model would need, according to Mrudul Shah, the CTO of Technostacks, a mobile app development company in the United States and India.

Connectivity issues can cause major setbacks to operations, and often the data that is processed at the edge is not discarded, which causes unnecessary data buildup, Shah added.

The demand for application use cases at these different edge environments is certainly extending the need for developers to consider the requirements in that environment for that particular vertical industry, according to Michele Pelino, a principal analyst at Forrester.

Also, the industry has had a lot of device fragmentation, so there is going to be a wide range of vendors that say they can help out with one’s edge requirements. 

“You need to be sure you know what your requirements are first, so that you can really have an apples to apples conversation because they are going to be each of those vendor categories that are going to come from their own areas of expertise to say, ‘of course, we can answer your question,’ but that may not be what you need,” Pelino said. 

Currently, for most enterprise use cases for edge computing, commodity hardware and software will suffice. When sampling rates are measured in milliseconds or slower, the norms are low-power CPUs, consumer-grade memory and storage, and familiar operating systems like Linux and Windows, according to Brian Gilmore, the director of IoT Product Management at InfluxData, maker of an open-source time series database.

The analytics here are applied to data and events measured in human time, not scientific time, and vendors building for the enterprise edge are likely able to adapt applications and architectures built for desktops and servers to this new form factor.  

“Any developer building for the edge needs to evaluate which of these edge models to support in their applications. This is especially important when it comes to time series data, analytics, and machine learning,” Gilmore said. “Edge autonomy, informed by centralized — currently in the cloud — evaluation and coordination, and right-place right-time task execution in the edge, cloud, or somewhere in between, is a challenge that we, as developers of data analytics infrastructure and applications, take head on.”

No two edge deployments the same

An edge architecture deployment calls for comprehensive monitoring, careful planning, and strategy, as no two edge deployments are the same. It is often impractical to get IT staff to a physical edge site, so deployments should be designed for remote configuration to provide resilience, fault tolerance, and self-healing capabilities, Technostacks’ Shah explained.

In general, a lot of the requirements that developers need to account for will depend on the environment that edge use case is being developed for, according to Forrester’s Pelino. 

“It’s not that everybody is going in one specific direction when it comes to this. So you sort of have to think about the individual enterprise requirements for these edge use cases and applications with their developer approach, and sort of what makes sense,” Pelino said. 

To get started with their edge strategy, organizations need to first make sure that they have their foundation in place, usually starting with their infrastructure, IDC’s McCarthy explained. 

“So it means making sure that you have the ability to place applications where you need so that you have the management and control planes to address the hardware, the data, and the applications,” McCarthy explained. 

Companies also need to layer that framework for future expansion as the technology becomes even more prevalent. 

“Start with the use cases that you need to address for analytics, for insight for different kinds of applications, where those environments need to be connected and enabled, and then say ok, these are the types of edge requirements I have in my organization,” Forrester’s Pelino said. “Then you can speak to your vendor ecosystem about do I have the right security, analytics, and developer capabilities in-house, or do I need some additional help?” 

When adopted correctly, edge environments can provide many benefits.

Low latency is one of the key benefits of computing at the edge, along with the ability to do AI and ML analytics in locations where it might not have been possible before, which can save cost by not sending everything to the cloud.

At the edge, data collection speeds can approach near-continuous analog to digital signal conversion outputs of millions of values per second, and maintaining that precision is key to many advanced use cases in signal processing and anomaly detection. In theory, this requires specific hardware and software considerations — FPGA, ASIC, DSP, and other custom processors, highly accurate internal clocks, hyper-fast memory, real-time operating systems, and low-level programming which eliminates internal latency, InfluxData’s Gilmore explained. 
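One pattern behind the cost and latency benefits described above, analyzing high-rate data locally and forwarding only summaries to the cloud, can be sketched as a hypothetical edge-side reducer. The function and field names here are illustrative, not from any vendor’s API:

```python
# Hypothetical sketch of edge-side aggregation: collapse a window of raw,
# high-rate sensor readings into one small summary record locally, so only
# summaries (not every sample) travel over the network to the cloud.
from statistics import mean

def summarize_window(samples, threshold=3.0):
    """Collapse a window of readings into a single summary record.

    The window is flagged as anomalous when any sample deviates from the
    window mean by more than `threshold` (a simple absolute-deviation
    check, standing in for a real anomaly-detection model).
    """
    mu = mean(samples)
    spread = max(abs(s - mu) for s in samples)
    return {
        "mean": round(mu, 3),
        "min": min(samples),
        "max": max(samples),
        "count": len(samples),
        "anomalous": spread > threshold,
    }
```

A steady window such as `summarize_window([20.0, 20.1, 19.9, 20.0])` becomes one small record; only windows flagged anomalous would need to trigger an upload of the raw samples for deeper analysis in the cloud.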

Despite popular opinion, the edge is beneficial for security

Security has come up as a key challenge for edge adoption because there are more connected assets that contain data, and the devices themselves add a physical attack surface. But edge computing can also improve security.

“You see people are concerned about the fact that you’re increasing the attack surface, and there’s all of this chance for somebody to insert malware into the device. And unfortunately, we’ve seen examples of this in the news where devices have been compromised. But, there’s another side of that story,” IDC’s McCarthy said. “If you look at people who are concerned about data sovereignty, like having more control about where data lives and limiting the movement of data, there is another storyline here about the fact that edge actually helps security.”

Security comes into play at many different levels of the edge environment. It is necessary at the point of connecting the device to the network, in the data and analytics layer to control who gets access to insights, and for the device itself, Forrester’s Pelino explained.

Also, these devices are now operating in global ecosystems, so organizations need to determine if they match the regulatory requirements of that area. 

Security capabilities to address many of these concerns are now coming from the different cloud providers, and chipset manufacturers also offer different levels of security in their components.

In edge computing, any data traversing the network back to the cloud or data center can also be secured through encryption against malicious attacks, Technostacks’ Shah added. 

What constitutes edge is now expanding

The edge computing field, in general, is now expanding to fields such as autonomous driving, real-time insight into what’s going on in a plant or a manufacturing environment, or even what’s happening with particular critical systems in buildings or different spaces such as transportation or logistics, according to Pelino. It is growing in any business that has a real-time need or has distributed operations. 

“When it comes to the more distributed operations, you see a lot happening in retail. If you think about typical physical retailers that are trying to close that gap between the commerce world, they have so much technology now being inserted into those environments,  whether it’s just the point of sale system, and digital signage, and inventory tracking,” IDC’s McCarthy said. 

The edge is being applied to new use cases as well. For example, Auterion builds drones that it can give to fire services. Whenever there is a fire, a drone immediately captures and sends back footage of the area before the fire department arrives, showing what kind of fire to prepare for and whether any people are inside. Another new edge use case is Boeing’s unmanned MQ-25 aircraft, which can autonomously connect with a fighter jet flying at over 500 miles per hour.

“While edge is getting a lot of attention it is still not a replacement for cloud or other computing models, it’s really a complement,” McCarthy said. “The more that you can distribute some of these applications and the infrastructure underneath, it just enables you to do things that maybe you were constrained on before.”

Also, with remote work on the rise and the aggressive acceleration of businesses leveraging digital services, edge computing is imperative for a cheaper and more reliable data processing architecture, according to Technostacks’ Shah.


Companies are seeing benefits in moving to the edge
Infinity Dish

Infinity Dish, which offers satellite television packages, has adopted edge computing in the wake of the transition to the remote workplace. 

“We’ve found that edge computing offers comparable results to the cloud-based solutions we were using previously, but with some added benefits,” said Laura Fuentes, operator of Infinity Dish. “In general, we’ve seen improved response times and latency during data processing.”

Further, by processing data on a local device, Fuentes added that the company doesn’t need to worry nearly as much when it comes to data leaks and breaches as it did using cloud solutions.

Lastly, the transmission costs were substantially less than they would be otherwise. 

However, Fuentes noted that there were some challenges with the adoption of edge. 

“On the flip side, we have noticed some geographic discrepancies when attempting to process data. Additionally, we had to put down a lot of capital to get our edge systems up and running—a challenge not all businesses will have the means to solve,” Fuentes said.

Memento Memorabilia

Kane Swerner, the CEO and co-founder of Memento Memorabilia, said that as her company began implementing edge throughout the organization, hurdles and opportunities began to emerge. 

Memento Memorabilia is a company that offers private signing sessions to guarantee authentic memorabilia from musicians, celebrities, actors, and athletes to fans.

“We can simply target desired areas by collaborating with local edge data centers without engaging in costly infrastructure development,” Swerner said. “To top it all off, edge computing enables industrial and enterprise-level companies to optimize operating efficiency, improve performance and safety, automate all core business operations, and guarantee availability most of the time.” 

However, she said that one significant worry regarding IoT edge computing devices is that they might be exploited as an entrance point for hackers. Malware or other breaches can infiltrate the whole network via a single weak spot.


There are four critical markers for success at the edge

A recent report by Wind River, a company that provides software for intelligent connected systems, found that there are four critical markers for successful intelligent systems: true compute on the edge, a common workflow platform, AI/ML capabilities, and ecosystems of real-time applications. 

The report “13 Characteristics of an Intelligent Systems Future” surveyed technology executives across various mission-critical industries and revealed the 13 requirements of the intelligent systems world for which industry leaders must prepare. The research found that 80% of these technology leaders desire intelligent systems success in the next five years.

True compute at the edge, by far the largest of the survey’s characteristics at 25.5% of the total share, is the ability of devices to function fully in near-latency-free mode on the farthest edge of the cloud: for example, a 5G network, an autonomous vehicle, or a highly remote sensor in a factory system.

The report stated that by 2030, $7 trillion of the U.S. economy will be driven by the machine economy, in which systems and business models increasingly engage in unlocking the power of data and new technology platforms. Intelligent systems are helping to drive the machine economy and more fully realize IoT, according to the report. 

Sixty-two percent of technology leaders are putting into place strategies to move to an intelligent systems future, and 16% are already committed, investing, and performing strongly. It’s estimated that this 16% could realize at least four times higher ROI than their peers who are equally committed but not organized for success in the same way. 

The report also found that the two main challenges for adopting an intelligent systems infrastructure are a lack of skills in this field and security concerns. 

“So when we did the simulation work with about 500 executives, and said, look, here are the characteristics, play with them, we got like 4,000-plus simulations, things like common workflow platform, having an ecosystem for applications that matter, were really important parts of trying to break that lack of skill or lack of human resource in this journey,” said Michael Gale, Chief Marketing Officer at Wind River.

For some industries, the move to edge is essential for digital transformation, Gale added. 

“Digital Transformation was an easy construct in finance, retail services business. It’s really difficult to understand in industrial because you don’t really have to have a lot of humans to be part of it. It’s a machine-based environment,” Gale said. “I think it’s a realization intelligence systems model is the transformation moment for the industrial sector. If you’re going to have a full lifecycle intelligence systems business, you’re going to be a leader. If you’re still trying to do old things, and wrap them with intelligent systems, you’re not going to succeed, you have to undergo this full transformational workflow.”

 

The post Developers are gaining more tools for the edge appeared first on SD Times.

Wind River acquires Particle Design https://sdtimes.com/softwaredev/wind-river-acquires-particle-design/ Mon, 20 Sep 2021 16:37:57 +0000 https://sdtimes.com/?p=45306

The post Wind River acquires Particle Design appeared first on SD Times.

Wind River has announced that it completed the acquisition of the UI/UX design company Particle Design which brings UI/UX capabilities to the new Wind River Studio offering. 

Particle Design offers end-to-end UX research services that employ a range of methodologies, from ethnographic research to user evaluations and usability testing; its design services include prototyping, interaction design, and wireframing.

RELATED CONTENT: New Wind River Studio release delivers automation across SDLC

The new Wind River Studio is a cloud-native platform for the development, deployment, operations, and servicing of mission-critical intelligent systems through one source. 

The acquisition will expand the UI/UX capabilities to include cognitive UI, which uses AI/ML to predict and anticipate the needs and behaviors of the user, bringing a more contextual, personalized, intelligent-assistant-style UX. 

“In the new intelligent machine economy that we’re enabling with our customers, the user experience is more important than ever. We’re thrilled to welcome the industry-leading Particle design team to Wind River,” said Kevin Dallas, president and CEO of Wind River. “The graphical, natural, and cognitive UI/UX expertise that Particle brings to Wind River Studio will further advance our mission of enabling our customers to realize the AI-infused, digital future of the planet.”

Infrastructure management going extinct with serverless https://sdtimes.com/softwaredev/infrastructure-management-going-extinct-with-serverless/ Fri, 03 Sep 2021 13:30:48 +0000 https://sdtimes.com/?p=45171

The post Infrastructure management going extinct with serverless appeared first on SD Times.

It’s no surprise that organizations are trying to do more with less. In the case of managing infrastructure, they’re in fact trying to do much more in the area of provisioning software — not by lessening it but by eliminating infrastructure altogether, through the use of serverless technology. 

According to Jeffrey Hammond, vice president and principal analyst at Forrester, one in four developers now regularly deploys to public clouds using serverless technology, up from 19% last year to 24%. This compares to 28% of respondents who said they regularly deploy with containers.

The main reason containers are slightly ahead is that when organizations are modernizing existing apps, it's easier to go from a virtual machine to a container than to embrace serverless architecture, especially with something like AWS Lambda, which requires writing stateless applications, according to Hammond.
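The stateless constraint Hammond mentions can be sketched in a few lines. The handler below is an illustrative, Lambda-style function, not a real AWS sample; the event shape and field names are invented. The point is that each invocation derives its result entirely from the input and keeps nothing in process memory between calls.

```python
# A minimal sketch of the stateless pattern that function platforms expect.
# No counters, caches, or connections are held across invocations; any state
# that must survive a call would go to an external store (database, object
# storage, queue), never to module-level variables.

def handler(event, context=None):
    # Derive the response purely from the input event.
    items = event.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    return {"statusCode": 200, "body": {"order_total": total}}
```

Because the function holds no state, the platform is free to run any invocation on any instance, which is what makes the scale-to-zero model work.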

Also, the recently released Shift to Serverless survey conducted by the cloud-native programming platform provider Lightbend found that 83% of respondents said they were extremely satisfied with their serverless application development solutions. However, only a little over half of the organizations expect that making the switch to serverless will be easy.

“If I just basically want to run my code and you worry about scaling it then a serverless approach is a very effective way to go. If I don’t want to worry about having to size my database, if I just want to be able to use it as I need it, serverless extensions for things like Aurora make that a lot easier,” Hammond said. “So basically as a developer, when I want to work at a higher level, when I have a very spiky workload, when I don’t particularly care to tune my infrastructure, I’d rather just focus on solving my business problem, a serverless approach is the way to go.” 

While serverless is seeing a pickup in new domains, Doug Davis, who heads up the CNCF Serverless Working Group and is an architect and product manager at IBM Cloud Code Engine, said that the main change in serverless is not in the technology itself, but rather providers are thinking of new ways to reel people in to their platforms. 

“Serverless is what it is. It’s finer-grain microservices, it’s scale to zero, it’s pay-as-you-go, ignore the infrastructure and all that good stuff. What I think might be sort of new in the community at large is more just, people are still trying to find the right way to expose that to people,” Davis said. “But from the technology perspective, I’m not sure I see a whole lot necessarily changing from that perspective because I don’t think there’s a whole lot that you can change right now.”

Abstracting away Kubernetes 

The major appeal for many organizations moving to serverless is just that they want more and more of the infrastructure abstracted away from them. While Kubernetes revolutionized the way infrastructure is handled, many want to go further, Davis explained.

“As good as Kubernetes is from a feature perspective, I don’t think most people will say Kubernetes is easy to use. It abstracts the infrastructure, but then it presents you with different infrastructure,” Davis said. 

While people don't need to know which VM they're using with Kubernetes, they still have to know about nodes; and even though they don't need to know which load balancer they're using, someone still has to manage it. 

“People are realizing not only do I not want to worry about the infrastructure from a service perspective, but I also don’t want to worry about it from a Kubernetes perspective,” Davis said. “I just want to hand you my container image or hand you my source code and you go run it for me all. I’ll tweak some little knobs to tell you what fine-tuning I want to do on it. That’s why I think projects like Knative are kind of popular, not just because yeah, it’s a new flavor of serverless, but it hides Kubernetes.” 

Davis said there needs to be a new way to present it: hiding the infrastructure, abstracting it away, and simply handing over the workload in whatever form is desired, rather than getting bogged down in whether something is serverless, platform-as-a-service, or container-as-a-service. 

However, Arun Chandrasekaran, a distinguished vice president and analyst at Gartner, said that whereas serverless abstracts more away from the user, containers and Kubernetes are more open-source oriented, so the barrier to entry within the enterprise is lower. Serverless can be viewed as a bit of a “black box,” and many of the functional platforms today also tend to be somewhat proprietary to their vendors.

“So serverless has some advantages in terms of the elasticity, in terms of the abstraction that it provides, in terms of the low operational overhead to the developers. But on the flip side, your application needs to fit into an event-driven pattern in many cases to be fit for using serverless functions. Serverless can be a little opaque compared to running things like containers,” Chandrasekaran said. “I kind of think of serverless and containers as being that there are some overlapping use cases, but I think by and large, they address very different requirements for customers at this point in time.”

Davis said that some decision-makers are still wary of relinquishing control over their infrastructure, because in the past that often equated to reduced functionality. But with the way that serverless stands now, users won’t be losing functionality; instead, they’ll be able to access it in a more streamlined way.

“I don’t think they buy that argument yet and I think they’re skeptical. It’s going to take time for them to believe,” Davis said. “This really is a fully-featured Kubernetes under the covers.”

Other challenges that stifle adoption include the difficulty developers have shifting to asynchronous work. Some would also like more control over their runtime, including the autoscaling, security, and tenancy models, according to Forrester's Hammond. 

Hammond added that he is starting to see a bit of an intersection between serverless and containers, but the main thing that sets serverless apart is its auto-scaling features.

Vendors are defining serverless

Serverless as a term is expanding, and some cloud vendors have started to label as serverless any service where one doesn't have to provision or manage the infrastructure.

Even though these services are not serverless functions, one could argue that they’re broadly part of serverless computing, Gartner’s Chandrasekaran explained. 

Examples include Athena, an interactive query service from Amazon, and Fargate, a way to run containers without operating the container environment. 

However, Roman Shaposhnik, the co-founder and VP of product and strategy at ZEDEDA, a member of the board of directors for Linux Foundation Edge, and vice president of the Legal Affairs Committee at the Apache Software Foundation, said that the term serverless is a bit confusing at the moment and that people typically mean two different things when they talk about it. Clearly defining the technology is essential to sparking interest among more people. 

“Google has these two services and they kind of very confusingly call them serverless in both cases. One is called Google Functions and the other one is Google Run and people are just constantly confused. Google was such an interesting case for me because I for sure expected Google to at least unify around Knative. Their Google Cloud Functions is completely separate, and they don’t seem to be interested in running it as an open-source project,” Shaposhnik said. “This is very emblematic of how the industry is actually confused. I feel like this is the biggest threat to adoption.”

This large basket of products has created an API sprawl rather than a tool sprawl because the public cloud typically offers so much that if developers wanted to replicate all of this in an open-source serverless offering like OpenWhisk by the Apache Software Foundation, they really have to build a lot of things that they just have no interest in building. 

“This is not even because vendors are evil. It's just because only vendors can give you the full sort of gamut of the APIs that would be meaningful to what they are actually offering you, because like 90% of their APIs are closed-source and proprietary anyway. And if you want to make them effective, well, you might as well use a proprietary serverless platform. Like, what's the big deal, right?” Shaposhnik said. 

Serverless commits users to a certain viewpoint that not all might necessarily enjoy. If companies are doing a lot of hybrid work, if they need to support multiple public clouds and especially if they have some deployments in a private data center, it can get painful pretty quickly, Shaposhnik explained.

OpenFaaS, an open-source framework and infrastructure preparation system for building serverless applications, is trying to find that sweet spot: automating the easy cases while steering clear of the difficult ones.

“If you have enough of those easy things that you can automate, then you should probably use OpenFaaS, but everything else actually starts making less sense because if your deployment is super heterogeneous, you are not really ready for serverless,” Shaposhnik said. 

In general, there is not much uptick with open-source serverless platforms because they need to first find a great environment to be embedded in. 

“Basically at this point, it is a bit of a solution looking for a problem, and until that bigger environment into which it can be embedded successfully appears, I don't think it will be very interesting.”

In the serverless space, proprietary vendor-specific solutions are the ones that are pushing the space forward. 

“I would say open-source is not as compelling as in some other spaces, and the reason is I think a lot of developers prefer open-source not necessarily because it's free as in freedom but because it's free as in beer,” Forrester's Hammond said. 

Because organizations pay for most functions by the gigabyte-second, developers can now experiment, prototype, and prove value at very low cost. And most of them seem willing to pay for that in order to have all the infrastructure managed for them. 
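The gigabyte-second math behind that low cost of experimentation is easy to sketch. A rough model follows; the default rates are placeholders in the general ballpark of published function pricing, not any vendor's actual price list.

```python
# Back-of-the-envelope cost model for pay-per-use functions, billed by the
# gigabyte-second plus a small per-request fee. Rates are illustrative.

def function_cost(invocations, avg_duration_s, memory_gb,
                  price_per_gb_s=0.0000166667, price_per_request=0.0000002):
    # Compute-time charge: total GB-seconds consumed across all invocations.
    gb_seconds = invocations * avg_duration_s * memory_gb
    return gb_seconds * price_per_gb_s + invocations * price_per_request

# A prototype handling 100,000 requests a month, each running 200 ms at
# 128 MB, consumes 2,500 GB-seconds and costs only a few cents.
monthly = function_cost(100_000, 0.2, 0.125)
```

This is why a spiky or low-volume workload can be prototyped for pennies, while the same formula also shows how a long-running, memory-heavy function becomes expensive quickly.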

“So you do see some open source here, but it's not necessarily at the same level as something like Kafka or PostgreSQL or any of those sorts of open-source libraries,” Hammond said. 

With so many functionalities to choose from, some organizations are looking to serverless frameworks to help manage how to set up the infrastructure. 

Serverless frameworks can deploy all the serverless infrastructure needed; they deploy one's code and infrastructure through a simpler, more abstract experience.

In other words, “you don’t need to be an infrastructure expert to deploy a serverless architecture on AWS if you use these serverless frameworks,” Austen Collins, the founder and CEO of the Serverless Framework, said. 

Collins added that the Serverless Framework that he heads has seen a massive increase in usage over the duration of the pandemic, starting at 12 million downloads at the beginning of 2020 and now at 26 million. 

“I think a big difference there between us and a Terraform project is developers use us. They really like Serverless Framework because it helps them deliver applications where Terraform is very much focused on just the hardcore infrastructure side and used by a lot of Ops teams,” Collins said. 

The growth in the framework can be attributed to the expanding use cases of serverless and every time that there is a new infrastructure as a service (IaaS) offering. “The cloud really has nowhere else to go other than in a more serverless direction,” Collins added.

Many organizations are also realizing that they’re not going to be able to keep up with the hyper-competitive, innovative era if they’re trying to maintain and scale their software all by themselves.

“The key difference that developers and teams will have to understand is that number one, it lives exclusively on the cloud so you’re using cloud services. You can’t really spin up this architecture on your machine as easily. And also the development workflow is different, and this is one big value of Serverless Framework,” Collins said. “But, once you pass that hurdle, you’ve got an architecture with the lowest overhead out of anything else on the market right now.”

All eyes are on serverless at the edge

The adoption of serverless has been broad-based, but the larger organizations tend to embrace it a bit more, especially if they need to provide a global reach to their software infrastructure and they don’t want to do that on top of their own hardware, Forrester’s Hammond explained. 

In the past year, the industry started to see more interest in edge and edge-oriented deployments, where customers wanted to apply some of these workloads in edge computing environments, according to Gartner’s Chandrasekaran.

This is evident in content delivery network (CDN) companies such as Cloudflare, Fastly, or Akamai, which are all bringing new serverless products to market that primarily focus on edge computing. 

“It’s about scale-up, which is to really quickly scale and massively expand, but it’s also about scaling down when data is not coming from IoT endpoints. I don’t want to use the infrastructure and I want the resources to be de-provisioned,” Chandrasekaran said. “Edge is all about rapid elasticity.”

The serverless compute running in the edge is a use case that has the possibility of creating new types of architectures to change the way that applications were previously built to process compute closer to the end-user for faster performance, according to Collins. 

“So an interesting example, this is just how we're leveraging it. We've got serverless.com is actually processed using Cloudflare Workers in the edge. And it's all on one domain, but the different paths are pointing to different architectures. So it's the same domain, but we have compute running that looks at the path and forwards the request to different technology stacks. So one for our documentation, one for our landing pages, and whatnot,” Collins said. “So there's a bunch of new architectural patterns that are opening up, thanks to running serverless in the edge.”
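The pattern Collins describes, one domain with edge compute dispatching on the request path, can be sketched roughly as follows. The path prefixes and backend names are invented for illustration, and real edge platforms express this in their own worker APIs rather than plain Python.

```python
# A sketch of path-based routing at the edge: the same domain serves every
# request, and a small routing function picks a backend stack by path prefix.

ROUTES = [
    ("/docs", "documentation-stack"),   # e.g. a static docs generator
    ("/blog", "blog-stack"),            # e.g. a headless CMS frontend
    ("/", "landing-page-stack"),        # catch-all, checked last
]

def route(path):
    # First matching prefix wins, so order the routes from most to
    # least specific.
    for prefix, backend in ROUTES:
        if path.startswith(prefix):
            return backend
    return "landing-page-stack"
```

Because the dispatch runs at the edge, each path can point at a completely different technology stack without the user ever leaving the one domain.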

Another major trend that the serverless space has seen is the growth of product extension models for integrations. 

“If you've got a platform as a company and you want developers to extend it and use it and integrate it into their day-to-day work, the last thing you want to do is say, well now you've got to go stand up infrastructure on your own premises or in another public cloud provider, just so that you can take advantage of our APIs,” Forrester's Hammond said. “I think increasingly, we will use serverless concepts as the glue by which we weld all of these cloud-based platforms together.” 

The extensions also involve continued improvements to serverless functions that are adding more programming languages and trying to enhance the existing tooling in areas like security and monitoring. 

For those companies that are sold on a particular cloud and don’t really care about multicloud or whether Amazon is locking them in, for example, Shaposhnik said not using serverless would be foolish. 

“Serverless would give you a lot of bang for the buck effectively scripting and automating a lot of the things that are happening within the cloud,” Shaposhnik said.

Serverless is the architecture for volatility

Serverless now looks like the architecture for volatility, given the business uncertainty brought on by the pandemic. 

“Everyone seems to be talking about scaling up, but there’s this whole other aspect of what about if I need to scale down,” Serverless Framework founder and CEO Austen Collins said. 

A lot of businesses that deal with events, sports, and anything that’s in-person have had to scale down operations almost immediately.

When shutdowns came, these businesses had to scale down at a moment's notice, and for those running on serverless architecture, operations scaled down without anyone having to do anything. 

The last 16 months have also seen a tremendous amount of employee turnover, especially in tech, so organizations are looking to adopt a way to be able to quickly onboard new hires by abstracting a lot of the infrastructure away, Collins added. 

“I think it’s our customers that have had serverless architectures that don’t require as much in-house expertise as running your own Kubernetes clusters that have really weathered this challenge better than anyone else,” Collins said. “Now we can see the differences, whenever there’s a mandate to shut down different types of businesses in the usage of people, applications and the scaling down, scaling up when things are opening up again is immediate and they don’t have to do anything. The decision-makers are often now citing these exact concerns.”

A serverless future: A tale of two companies 

Carla Diaz, the cofounder of Broadband Search, a company that aims to make it easier to find the best internet and television services in an area, has been looking at adopting a serverless architecture since it is now revamping its digital plans. 

“Since most of the team will be working from home rather than from the office, it doesn’t make sense to continue hosting servers when adopting a cloud-based infrastructure. Overall, that is the main appeal of going serverless, especially if you are beginning to turn your work environment into a hybrid environment,” Diaz said. 

Overall, maintenance costs and planned downtime are just some of the things the company no longer needs to worry about, thanks to the scalability of serverless architecture. 

Another reason Broadband Search is interested in moving to a cloud-based system is that the company no longer has to worry about the cost of buying more hardware, which can already be quite expensive, nor about the cost of maintaining more equipment and the possible downtime if the integration is extensive. 

“By switching and removing the hardware component, the only real cost is to pay for the service which will host your data off-site and allow you to scale your business’ IT needs either back or forward as needed,” Diaz added. 

Dmitriy Yeryomin, a senior Golang developer at iTechArt Group, a one-stop source for custom software development, said that many of the 250-plus active projects within the company use serverless architecture. 

“This type of architecture is not needed in every use case, and you should fully envision your project before considering serverless, microservice, or monolith architecture,” Yeryomin said. 

In terms of this company’s projects, Yeryomin said it helps to divide up the system into fast coding and deploying sequences, to make their solution high-performance and easily scalable.

“In terms of benefits, serverless applications are well-suited to deploying and redeploying to the cloud, while conveniently setting the environmental and security parameters,” Yeryomin said. “I work mostly with AWS, and its UI has perfect tools for monitoring and testing services. Also, local invokes are great for testing and debugging services.”

However, the most challenging thing with serverless is time: the longer a Lambda function's execution runs, the more expensive it becomes. 

“You can't store data inside the function for longer than the function runs,” Yeryomin explained. “So background jobs are not for serverless applications.”
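A common workaround for that constraint is to keep the function itself short and hand long-running work off to a queue. Below is a minimal sketch of the pattern; the in-memory deque stands in for a real queue service, and the job shape is hypothetical.

```python
# Sketch: a function should not run background jobs itself, because its
# state and execution window end with the invocation. Instead it does the
# quick, bounded work inline and enqueues the slow work for a separate
# worker that has no per-invocation time limit.

from collections import deque

job_queue = deque()  # in-memory stand-in for a managed queue service

def handler(event):
    order_id = event["order_id"]
    # Anything slow (report generation, media encoding) is deferred.
    job_queue.append({"type": "generate_report", "order_id": order_id})
    # Respond immediately; the caller doesn't wait for the background job.
    return {"accepted": order_id}
```

The worker draining the queue can then be a container or a long-lived process, which is one reason serverless and container architectures often end up side by side in the same system.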

SD Times news digest: Creatio version 7.18, Android privacy updates, and the future of IE https://sdtimes.com/softwaredev/sd-times-news-digest-creatio-version-7-18-android-privacy-updates-and-the-future-of-ie/ Fri, 21 May 2021 15:43:42 +0000 https://sdtimes.com/?p=44086

The post SD Times news digest: Creatio version 7.18, Android privacy updates, and the future of IE appeared first on SD Times.

The updated version of Creatio’s low-code platform for process automation and CRM includes a full cycle of process management, an accelerated time to market for new applications and more. 

The platform’s updated low-code and developer tools enable users to build apps and processes faster with an improved UI and platform enhancements for external file data storage and other backend functions.

Improvements to the BPM engine streamline the full cycle of process management with faster integration setups, and the unified CRM solution enables companies to better align their sales, marketing and service departments.

Android privacy updates

The latest Android gives more transparency around the data being accessed by apps while providing simple controls to make informed choices.

With the new privacy dashboard, users get a simple, clear timeline view of the last 24 hours of access to location, microphone and camera, and apps can share more context about their data usage through a new permission intent API.

The updates also include two new controls that allow users to quickly and easily cut off apps' access to the microphone and camera on the device, more control over location data, clipboard read notifications, nearby device permissions and more. 

Additional details on all of the latest Android updates are available here.

The future of Internet Explorer

Microsoft announced that the Internet Explorer 11 desktop application will be retired on June 15, 2022, adding that the future of Internet Explorer on Windows 10 is in Microsoft Edge. 

Microsoft Edge is now capable of handling compatibility with older, legacy websites and applications with a built-in “IE mode,” so that users can access Internet Explorer-based websites and applications from Microsoft Edge. 

 

“With Microsoft Edge, we provide a path to the web’s future while still respecting the web’s past. Change was necessary, but we didn’t want to leave reliable, still-functioning websites and applications behind,” Sean Lyndersay, a partner group program manager of Microsoft Edge at Microsoft wrote in a blog post.

PDFTron announces new investment

PDFTron announced a new strategic growth investment from Thoma Bravo. 

The investment is expected to drive increased innovation in PDFTron’s document processing technology platform and to accelerate the company’s growth trajectory in the document processing market. 

The transaction is expected to close by the end of this month.

NVIDIA unleashes Jarvis conversational AI framework https://sdtimes.com/ai/nvidia-unleashes-jarvis-conversational-ai-framework/ Mon, 12 Apr 2021 22:33:49 +0000 https://sdtimes.com/?p=43631

The post NVIDIA unleashes Jarvis conversational AI framework appeared first on SD Times.

NVIDIA announced its application framework for building conversational AI services is now available. The new NVIDIA Jarvis framework comes with pre-trained deep learning models and software tools to help developers create conversational AI services that can be easily deployed from the cloud or at the edge.

According to the company, it offers automatic speech recognition and language understanding, real-time translations for multiple languages and new text-to-speech capabilities to create expressive conversational AI agents.

The new offering was trained over several million GPU hours on over 1 billion pages of text, 60,000 hours of speech data, and in different languages, accents, environments and lingos to achieve world-class accuracy, NVIDIA stated in a post. 

“Conversational AI is in many ways the ultimate AI,” said Jensen Huang, founder and CEO of NVIDIA. “Deep learning breakthroughs in speech recognition, language understanding and speech synthesis have enabled engaging cloud services. NVIDIA Jarvis brings this state-of-the-art conversational AI out of the cloud for customers to host AI services anywhere.”

First, developers can choose pre-trained Jarvis models from the NVIDIA NGC catalog and then fine-tune them with the NVIDIA Transfer Learning Toolkit. Models can also be deployed using just a few lines of code, so deep AI expertise isn't needed. 

NVIDIA also partnered with Mozilla Common Voice, an open-source collection of voice data, to train voice-enabled apps, services and devices.

“We launched Common Voice to teach machines how real people speak in their unique languages, accents and speech patterns,” said Mark Surman, executive director at Mozilla. “NVIDIA and Mozilla have a common vision of democratizing voice technology — and ensuring that it reflects the rich diversity of people and voices that make up the internet.”
