serverless Archives - SD Times https://sdtimes.com/tag/serverless/

Vercel introduces a suite of serverless storage solutions https://sdtimes.com/data/vercel-introduces-a-suite-of-serverless-storage-solutions/ Mon, 01 May 2023 19:33:31 +0000

Vercel announced its suite of serverless storage solutions: Vercel KV, Vercel Postgres, and Vercel Blob. The offerings are meant to make it easier to server-render just-in-time data, part of the company’s effort to “make databases a first-class part of the frontend cloud.”

Vercel KV is a durable serverless Redis solution powered by Upstash. With Vercel KV, developers can create Redis-compatible databases that can be written to and read from Vercel’s Edge Network in regions they designate, with only minimal configuration required.
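Vercel’s own client is a JavaScript package, but because the store is Redis-compatible, the access pattern is language-agnostic. Below is a rough sketch of that pattern in Python, with an in-memory class standing in for the hosted store; no real Vercel KV calls are made, and all key names are illustrative:

```python
# Illustrative stand-in for a Redis-compatible KV store such as Vercel KV.
# A real deployment would talk to the hosted database over the network;
# this in-memory version only demonstrates the read/write pattern.

class KVStore:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def incr(self, key):
        # Redis INCR semantics: a missing key is treated as 0, then incremented.
        self._data[key] = int(self._data.get(key, 0)) + 1
        return self._data[key]


kv = KVStore()
kv.set("feature:dark-mode", "enabled")   # write a flag
views = kv.incr("page:home:views")       # atomic-style counter, first hit
```

The same get/set/incr calls are what an edge function would issue against the managed store, just through a network client instead of a dict.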

Vercel Postgres is a serverless SQL database built for the frontend, powered by Neon. It provides a fully managed, fault-tolerant, and highly scalable database with strong performance and low latency for web applications. It is designed to work seamlessly with the Next.js App Router and Server Components, as well as other frameworks like Nuxt and SvelteKit, making it easy to retrieve data from a Postgres database and use it to create dynamic content on the server as quickly as static content.
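In practice that query-then-render flow would live in a Server Component using the framework’s own SQL client; purely to illustrate its shape, here is a minimal Python sketch in which the database call is replaced by a stub, and the table, function, and column names are invented:

```python
# Sketch of the query-then-render flow a serverless Postgres enables.
# fetch_products() stands in for a real query such as
# "SELECT name, price FROM products"; in production this would be a
# driver call against the managed database.
import html

def fetch_products():
    # Stub rows standing in for query results.
    return [{"name": "Widget", "price": 9.99}, {"name": "Gadget", "price": 19.99}]

def render_product_list(rows):
    # Build server-rendered markup from the rows, escaping data on the way out.
    items = "".join(
        f"<li>{html.escape(row['name'])}: ${row['price']:.2f}</li>" for row in rows
    )
    return f"<ul>{items}</ul>"

page = render_product_list(fetch_products())
```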

Lastly, Vercel Blob enables users to upload and serve files at the edge, and is powered by Cloudflare R2. Vercel Blob can store unstructured data such as images, PDFs, and CSVs. It is useful for files that would normally live in an external storage service such as Amazon S3, files that are programmatically uploaded, files generated in real time, and more.
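A hedged sketch of the upload-and-serve pattern such a blob store exposes, again using an in-memory stand-in rather than the real Vercel Blob SDK; the `put`/`get` method names and the URL scheme here are assumptions for illustration, not Vercel’s actual API:

```python
# In-memory stand-in for an edge blob store: upload bytes under a pathname
# and get back a (fake) public URL. Real uploads would go over HTTP.
import hashlib

class BlobStore:
    def __init__(self, base_url="https://blobs.example.com"):
        self.base_url = base_url
        self._objects = {}

    def put(self, pathname, data: bytes):
        self._objects[pathname] = data
        # A content hash could serve as an ETag for caching at the edge.
        etag = hashlib.sha256(data).hexdigest()[:16]
        return {"url": f"{self.base_url}/{pathname}", "etag": etag}

    def get(self, pathname):
        return self._objects.get(pathname)


store = BlobStore()
result = store.put("reports/q1.csv", b"region,revenue\nus,100\n")
```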

“Frameworks have become powerful tools to manipulate backend primitives. Meanwhile, backend tools are being reimagined as frontend-native products. This convergence means bringing data to your application is easier than ever, and we wanted to remove the final friction point: getting started,” Vercel stated. 

Vercel KV, Vercel Postgres, and Vercel Blob are built on open standards and protocols, designed for low latency, efficient data fetching, and fully integrated with Vercel’s existing tools and workflows.

Additional details are available in the company’s announcement.

The post Vercel introduces a suite of serverless storage solutions appeared first on SD Times.

How Capital One Uses Python to Power Serverless Applications https://sdtimes.com/data/how-capital-one-uses-python-to-power-serverless-applications/ Tue, 18 Apr 2023 16:15:46 +0000

Cultivating a loyal customer base by providing innovative solutions and an exceptional experience should be the goal of any company, regardless of industry. 

This is one of the main reasons why Capital One uses Python to power a large number of serverless applications, giving developers a better experience as they deliver business value to customers.

Python has a rich toolset with codified best practices that perform well in AWS Lambda. Capital One has been able to take modules, whether developed internally or drawn from the Python community, and combine them to build what it needs inside a fully managed compute environment.

“We have vibrant Python and serverless communities within Capital One, which have helped us advance this work,” said Brian McNamara, a Distinguished Engineer at Capital One.

Why Python for Serverless

Python and serverless practices are closely aligned in the development lifecycle which allows for quick feedback loops and the ability to scale horizontally. Using Python for serverless also allows for:

  • Faster time to market: Developers use Python to move quickly from ideation to production code. Serverless applications developed with Python let developers deploy their code on a resilient, performant, scalable, and secure platform.
  • Focus on business value: A lower total cost of ownership lets developers focus on features instead of maintaining servers and containers, addressing operating-system-level concerns, and managing resilience, autoscaling, and utilization.
  • Extremely fast scale: Serverless is event-driven, which helps with fast scaling, so it’s important to think of API calls, data in a stream, or a new file to process as events. For example, with built-in retry logic from cloud services, a Python serverless function can process the non-critical path from durable queues so the customer experience is not impacted.
  • Reusable resources: The Python ecosystem provides great resources, and the AWS Lambda Powertools Python package builds on a number of open-source capabilities. The AWS Serverless Application Model also provides a local testing experience that generates example events.
  • Flexible coding style: Python’s flexible style lets developers blend functional programming, data classes, and object-oriented programming to process events.
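The event-driven model in the list above can be made concrete with a minimal Lambda-style handler. This sketch uses a simplified SQS-like batch shape rather than the full AWS event schema, and the business logic is a placeholder; because the handler is a pure function, it can be unit-tested locally with a plain dict:

```python
# Minimal sketch of an AWS Lambda-style handler in Python. The event shape
# mimics a queue batch (simplified, not the full AWS schema).
import json

def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # Business logic goes here; keep it idempotent, since event sources
        # may redeliver a record on retry.
        processed.append(payload["order_id"])
    return {"statusCode": 200, "processed": processed}

# Local invocation with a hand-built event, as a test harness would do.
result = handler({"Records": [{"body": json.dumps({"order_id": "A-1"})}]})
```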

Observability Benefits

Additionally, Dan Furman and Brian McNamara emphasized that using Python to power serverless applications has provided Capital One with numerous observability benefits, so developers know what is happening inside an application.

“Observability in serverless can often be perceived as more challenging, but it can also be more structured, with libraries that codify logs, telemetry data and metric data. This makes it easy to codify best practices,” said Dan Furman, Distinguished Engineer.
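The structured-logging practice Furman describes can be sketched in a few lines of standard-library Python: emit one JSON object per log line so that tooling can query fields rather than parse free text. The field names here are illustrative, not a specific library’s schema (libraries such as AWS Lambda Powertools codify a richer version of this):

```python
# Emit one JSON object per log line so downstream tooling can filter on
# fields (level, order_id, duration_ms) instead of grepping text.
import json
import time

def structured_log(level, message, **fields):
    record = {"level": level, "message": message, "timestamp": time.time(), **fields}
    print(json.dumps(record))  # one parseable line per event
    return record

rec = structured_log("INFO", "order processed", order_id="A-1", duration_ms=42)
```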

Furman and McNamara also pointed out the importance of leveraging the vastness of both the serverless and Python ecosystems. Drawing on the knowledge acquired by other members of these communities lets organizations benefit from their experience.

McNamara and Furman will give a presentation on using Python to power serverless applications at PyCon US 2023, taking place at the Salt Palace Convention Center in Salt Lake City, Utah, from April 19-27. For more information about PyCon US 2023, visit the website.

The post How Capital One Uses Python to Power Serverless Applications appeared first on SD Times.

Fermyon releases Spin 1.0 to simplify how developers build WebAssembly serverless functions https://sdtimes.com/software-development/fermyon-releases-spin-1-0-to-simplify-how-developers-build-webassembly-serverless-functions/ Wed, 22 Mar 2023 19:17:40 +0000

Fermyon Technologies, the serverless WebAssembly company that was spun out of Microsoft Azure’s Deis Lab team, today announced a major new release of its serverless functions framework based on WebAssembly, Spin 1.0.

According to the company, Spin 1.0 was released to meet the needs of modern, full-stack developers, and adds support for SQL databases, NoSQL key/value storage, the OCI registry standard, and additional popular languages.

Like serverless offerings such as AWS Lambda and Azure Functions, Spin aims to make applications easier to build and easier to deploy. Additionally, Spin has already been integrated into Microsoft’s cloud offerings, with other software vendors working on integrations for the future.

“Cloud native development has been slow and tedious for developers. Fermyon wants to reverse that trend. We want to make serverless apps fast. Fast to develop, fast to deploy and fast to run. With Spin, a developer can go from blinking cursor to deployed application in 66 seconds. The open source ecosystem gathering around Spin is propelled forward by WebAssembly, the underlying technology. Spin 1.0 is the culmination of a year’s development, and we could not have reached this milestone without the enthusiasm, contributions, and support of this rapidly expanding community,” said Matt Butcher, co-founder and CEO of Fermyon.

Key features of this release include key-value store, PostgreSQL, and Redis integration to enable stateful applications; support for the OCI Registry standard to offer users standard packaging alongside Docker images; and support for JavaScript/TypeScript, Python, Rust, Go, Java, and .NET.

Furthermore, command stability improvements have been made to provide a better developer experience, extensibility has been improved to support an expanded application scope, and end-to-end testing lets developers be sure their code is complete and stable.

To learn more, secure a ticket to WASM I/O 2023 tomorrow, where Spin 1.0 will be demoed.

The post Fermyon releases Spin 1.0 to simplify how developers build WebAssembly serverless functions appeared first on SD Times.

Cockroach Labs announces general availability of serverless database https://sdtimes.com/data/cockroach-labs-announces-general-availability-of-serverless-database/ Wed, 21 Sep 2022 18:54:20 +0000

Cockroach Labs announced the general availability of its serverless database that can help teams accelerate their software design cycles.

“We envision a world where your data-intensive applications effortlessly and securely serve millions of customers anywhere on the planet, with the exact right capacity for that moment – all enabled by a simple SQL API in the cloud,” said Nate Stewart, chief product officer at Cockroach Labs. “We’re a step closer to that vision now that CockroachDB Serverless is generally available. We’ve also released a new migration toolset and formed critical partnerships to help customers with existing applications take full advantage of CockroachDB.”

Users of the serverless database can save time by automating management and maintenance. It also offers scalability and high availability, and can automatically handle spikes in demand without bottlenecks.

Other capabilities include instant start, a CLI, PostgreSQL ORMs and drivers, and no-downtime schema changes.
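Because CockroachDB speaks the PostgreSQL wire protocol, those existing ORMs and drivers connect with an ordinary connection string. A sketch of assembling one is below; the host and credentials are placeholders, and a real application would hand the resulting DSN to a PostgreSQL driver such as psycopg2 (26257 is CockroachDB’s default port):

```python
# Build a PostgreSQL-style connection string for a CockroachDB cluster.
# Host, database, and credential values below are placeholders.
from urllib.parse import quote

def build_dsn(user, password, host, database, sslmode="verify-full"):
    # Percent-encode credentials so special characters survive the URI.
    return (
        f"postgresql://{quote(user)}:{quote(password)}"
        f"@{host}:26257/{database}?sslmode={sslmode}"
    )

dsn = build_dsn("app_user", "s3cret", "free-tier.example.cockroachlabs.cloud", "defaultdb")
```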

The new serverless database includes a free tier with up to 5GB of storage and 250 million Request Units (RUs) per month.

The company also released its migration tool, CockroachDB Molt, along with integrations with Vercel, HashiCorp Vault, and HashiCorp Terraform.

Additional details on the new releases are available here.

The post Cockroach Labs announces general availability of serverless database appeared first on SD Times.

Rookout launches its new serverless debugging experience https://sdtimes.com/softwaredev/rookout-launches-its-new-serverless-debugging-experience/ Tue, 12 Apr 2022 18:54:36 +0000

Observability company Rookout has announced the release of a new debugging experience for serverless applications.

According to the company, even experienced developers have difficulty debugging ephemeral serverless applications. It believes that this new solution will help developers overcome this pain point. 

“Traditional APM tools can highlight and alert on problematic areas within serverless applications – but they can’t drill in and extract debugging data,” said Shahar Fogel, CEO of Rookout. “With a few clicks, Rookout users can now set non-breaking breakpoints and visualize the functions being invoked to understand the variables that may be contributing to customer-facing issues.” 

The new debugger includes a visual interface that displays a timeline of invoked functions. Developers can use this to further zero in on functions and debug problem areas. 

This new experience allows DevOps teams to respond to alerts within their monitoring or observability solution and then use the Rookout UI to drill down into the problem. 

“Rookout accelerates developer velocity by reducing the time for bug resolution,” said Arnal Dayaratna, research vice president of software development at IDC. “Rookout enhances the operational agility of developers because they can collect data about applications on demand, from any environment, with a few clicks.”

More information about the new debugging experience is available here.

The post Rookout launches its new serverless debugging experience appeared first on SD Times.

Securing cloud-native applications https://sdtimes.com/cloud/securing-cloud-native-applications/ Mon, 01 Nov 2021 16:22:22 +0000

Cloud-native development has become the de facto way that companies make new apps due to its speed and cost savings. While it has opened up the world of Kubernetes, containers, and serverless to most organizations, they still need to grapple with certain complexities and security concerns that this style of development brings. 

More than 80% of developers report that their organizations are implementing, piloting, or already using modern, cloud-native application services such as microservices, functions as a service, containers, and container orchestration frameworks like Kubernetes, according to the IDC report “PaaSView and the Developer 2021.”

This is only expected to grow: analysis from Gartner found that the cloud platforms with the highest adoption plans for the next 12 months (each cited by over 20% of respondents) were cloud-managed Kubernetes and container platforms (CaaS or aPaaS), citizen development platforms, and cloud-managed serverless function platforms (fPaaS/FaaS).

RELATED CONTENT:
How these vendors help companies with cloud-native security
A guide to cloud-native security tools

“Today, if I’m going to write a new kind of customer service portal for an insurance company, the likelihood of that not being cloud native is very low, because it is just more scalable and much easier to update and much more resilient,” said Rani Osnat, the VP of strategy and product marketing at Aqua Security.

Cloud-native development changes the way that developers traditionally approached development with the use of CI/CD and more rapid methods of continuously updating software. 

This has presented some challenges since users don’t necessarily have advanced knowledge of where everything will run because it can run anywhere, according to Osnat. 

“You get this much more flexible environment to work in, but it also requires you to be a lot more cognizant in how you package code and deliver it compared with older kinds of waterfall SDLC or where it was a much slower process,” Osnat said. 

Because of the difficulty of setting up Kubernetes, few companies use vanilla Kubernetes, instead opting for more managed options. One such option is a distribution of Kubernetes with better defaults that is more suited to certain types of applications, like K3s, the lightweight Kubernetes distribution used widely in IoT. Single-node Kubernetes can also be used effectively in development and testing, according to Osnat.

One step further are the cloud-managed offerings such as AKS, EKS, GKE, and others.

“Those are basically set up for you in terms of the cluster. You don’t need to do much with configuring a master node,” Osnat said. “A lot of the cloud providers will create on-prem versions of these. Amazon, for example, has EKS Anywhere, which is identical to EKS, but you can run it on-prem, or even in another cloud if you want, at least in theory.”

Even further are platforms like OpenShift and Tanzu, which wrap Kubernetes with additional functionality, more opinionated or preset configurations, and other capabilities such as identity and access management and better versioning and deployment controls, Osnat explained.

Cloud native’s dependence on open source requires extra security

The use of cloud-native development and open source are growing hand in hand, prompting companies to put additional security measures in place to govern the much more open code.

“Today, in a typical cloud-native application, you’ll see that 70-80% of the codebase is open source. So you could say the cloud-native applications have a lot of reusable code. And the issue that creates is that first of all, there’s a supply chain issue where you don’t govern all the code that comes in,” Osnat said. “And the second is known vulnerabilities. So open source has many more known vulnerabilities than custom code simply because it’s open.”

Contrast Security’s 2021 State of Open Source Security Report revealed that traditional software composition analysis (SCA) approaches attempt to analyze all of the open-source code contained in applications, which translates into a huge expenditure of time and resources chasing vulnerabilities that pose no risk at all. Yet for third-party code that is invoked, the risk is inherent: the average library is 2.6 years old, and applications contain an average of 34 CVEs.

While working with functions, it becomes more apparent that the traditional tools that are used for security won’t suffice, according to Blake Connell, the director of product marketing at Contrast Security. 

“With functions, because you’re just assembling these small bits of code, all those little small bits of code are entities in and of themselves. So the sort of exposure is broader for security issues. And then these permissions that are part of these functions are sort of set in kind of a default way,” Connell said. “Depending on how you assemble your application, you may want to tighten down the screws a bit more on those permissions. And that’s a common challenge with the functions serverless security angle, which is this notion of overly permissive functions.” 
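The “overly permissive functions” problem Connell describes is concrete enough to lint for. Below is a toy checker over an IAM-style policy document, deliberately simplified from the real AWS JSON policy format, that flags wildcard grants a reviewer would want to tighten:

```python
# Toy least-privilege linter: flag policy statements that grant wildcard
# actions or resources. The policy shape mirrors AWS IAM JSON, simplified.
def find_wildcards(policy):
    findings = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        # "s3:*" or "*" grants every action in (or across) a service.
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(("action", actions))
        # A bare "*" resource applies the grant to everything.
        if "*" in resources:
            findings.append(("resource", resources))
    return findings

policy = {"Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}
issues = find_wildcards(policy)
```

A real serverless security tool would go much further (unused-permission analysis, runtime observation), but the default-permissions concern reduces to checks of this kind.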

Securing serverless architecture

Also important is securing serverless architecture since serverless computing is at the forefront of the cloud-native development trend, according to Connell. 

According to Contrast Security’s State of Serverless Application Security report, a large majority (71%) of organizations now have six or more development teams creating serverless applications. These findings are consistent with other research, such as New Relic’s semiannual Serverless Technology report, which shows a 206% increase in average weekly invocations of serverless applications from 2019 to 2020.

Connell added that the typical company is protecting its serverless applications with a disconnected set of legacy tools that no longer work that well—even for applications on traditional infrastructure.

For serverless applications, these tools are even less effective. “No-edge blindness,” resulting from functions that do not have a public-facing URL, gives the tools poor visibility into serverless architectures. The abstraction of infrastructure, network, and servers confuses traditional tools and contributes to a false-positive rate that can exceed 85%, Contrast Security found. Legacy tools simply lack the context to do adequate analysis.

Serverless also presents its own challenges because it is built on ephemeral workloads that appear quickly and then disappear, all of which require a very different set of controls, according to Osnat.

As a result, organizations need a good prioritization strategy to understand which vulnerabilities are affecting the environment, Osnat explained.  

“You might have vulnerabilities that rely on some network connection to be exploited. But if you’re running this in a purely internal and encapsulated application, it’s less adverse than one that’s open to the internet,” Osnat said.

The stack affects cloud-native security

The third factor that affects security in cloud native is the new stack that applications are being run on. Companies no longer rely on an underlying server or VM to provide isolation for them, and users are running various types of workloads. For example, if they’re running containers on a container-as-a-service platform like AWS Fargate, or ACI on Azure, those containers run in a managed virtualized environment with no underlying VM that one has access to.

Organizations are giving developers more security responsibilities; however, there is a skills shortage in this area, and there are many more developers than security professionals. This has prompted companies to look toward more automated solutions that can augment the way developers handle security.

“We solve it by introducing a high degree of automation that enables developers to make security part of their daily work, but without expecting them or requiring them to change how they work or to become security experts. Nobody expects developers to become security experts and expects developers to set policies. The policy should be set by security. So what we do is we enable this solution that spans developers, DevOps, and security,” Osnat said. “Security has visibility into what’s going on and can prioritize issues for developers, and then have developers fix that in their code as far left as possible or as early as possible knowing full well that some things will not be fixed. We can say this needs to be remediated as soon as possible, you upgrade to this version, or you swap this package with this package or you change this configuration, and what cannot be remediated or can be maybe snoozed or remediated later, or you can have a mitigating control for it.”

While cloud providers are doing a lot, there is also significant startup activity from individual vendors providing solutions that help address security concerns, according to Lara Greden, research director for IDC’s Platform-as-a-Service (PaaS) practice.

“It’s not that organizations with their software development teams are just only making use of what the major cloud providers are providing in terms of security,” said Greden. “They’re also adding these other services that their applications are calling on the back end for services.”

Another way to solve some of these security issues is the notion of “deputizing” developers to be part of the security effort. The days of developers flinging code over to security, having the security team run static scans and create a pile of potential vulnerabilities, and then shipping them back to developers just won’t fly in today’s cloud-native world, according to Contrast Security’s Connell.

Now, automation finds a vulnerability, perhaps an overly permissive function, and gets that information to a developer in their environment early, along with sample code and the suggested remediation. Developers can then literally copy and paste the code, or modify it slightly, and resubmit the function. The solution scans again, and when everything is OK, it moves on, Connell explained.

Cloud-native development is becoming more accessible and more expansive 

Whereas at first organizations thought in terms of using a private cloud for their applications by making use of technologies in their data centers, the focus has increasingly moved toward computing at the edge, according to IDC’s Greden.

“What we have today is edge compute, that is, in some cases, being provided by the cloud providers,” Greden said. “And that’s sent as a cloud service, but from edge locations. It’s also being accessed in terms of organizations owning their own mini data centers.”

Even though there is less investment now in on-premises types of data centers or location centers, the need for compute to be close to the application for things like latency reasons has not gone away. “Now, we’re able to apply cloud-native development to those types of locations,” Greden added. 

Also, now more people than ever before can make use of cloud-native development through citizen development and the use of low code. 

“It’s really more an era of augmented application development where developers, including full-stack developers, whether they’re junior or senior, are saying that the number one attribute of the tools they use is code abstraction, as represented by low code and no code,” Greden said. “We’ve gotten to the point where vendors are able to package certain components together, not have to rewrite code, and it really contributes to code simplicity and code elegance.”

The post Securing cloud-native applications appeared first on SD Times.

How these vendors help companies with cloud-native security https://sdtimes.com/cloud/how-these-vendors-help-companies-with-cloud-native-security/ Mon, 01 Nov 2021 16:20:58 +0000

We asked these tool providers to share more information on how their solutions help companies secure cloud-native applications. Their responses are below.


Rani Osnat, VP strategy and product marketing at Aqua Security

From day one, we started out focusing on containers, because that was the big technology that was pushed in the earlier days with Docker and later on with Kubernetes. Now, we support containers of various flavors, as well as serverless, VMs, and cloud infrastructure. 

With security, we took this approach of a full life cycle security solution because we felt that was the only way to really solve these issues. If you’re just looking at runtime, the attack surface is too big, and you’re basically chasing endless risks that you can’t really address that effectively. If you’re only focusing on shift-left and only handling developers, you’re doing what’s necessary, but it’s insufficient, because not everything is based on vulnerabilities. You have to have these multiple control points and layers. 

RELATED CONTENT:
Securing cloud-native applications
A guide to cloud-native security tools

Our solution helps organizations at any scale address the key challenges of cloud-native security across development, DevOps, cloud, and security teams. Our Complete Cloud Native Application Protection Platform (CNAPP) gives each type of stakeholder the information and control they need.

Also, Aqua’s Cloud Security Posture Management (CSPM) scans, monitors, and remediates configuration issues in public cloud accounts according to best practices and compliance standards, across AWS, Azure, Google Cloud, and Oracle Cloud.

There are also additional add-ons, like vShield, that allow you to specifically detect and block vulnerabilities that you weren’t able to fix, and we have a product called Dynamic Threat Analysis (DTA), which addresses a different risk we see in the supply chain: hidden malware.

To learn more about Aqua’s Cloud Native Application Protection Platform or start a free trial of the plan that’s right for your organization, visit us online at https://www.aquasec.com/.

Blake Connell, director of product marketing at Contrast Security

Organizations are turning to serverless environments to help realize the full potential of DevOps/Agile development. Serverless technologies enable instant scalability, high availability, greater business agility, and improved cost efficiency. While serverless is quickly becoming a preferred approach for helping organizations accelerate the development of new applications, their existing tool sets for application security testing (AST) perpetuate inefficiencies that ultimately bottleneck release cycles. There are also some key differences that create some unique challenges:

  • An expanded attack surface. Serverless has more points of attack to potentially exploit. Every function, application programming interface (API), and protocol presents a potential attack vector.
  • A porous perimeter is harder to secure. Serverless applications have more fragmented boundaries.
  • Greater complexity. Permissions and access issues can be challenging and time-consuming to manage.

Contrast Serverless Application Security is designed specifically for serverless development. The complimentary, purpose-built solution for serverless AST ensures that security and development teams get the testing and protection capabilities they need without legacy inefficiencies that delay release cycles. Key benefits include:

  • Visibility. Gain complete security visibility across your serverless architecture.
  • Speed. Onboarding takes two minutes, with zero configuration and immediate results after scanning.
  • Frictionless. Automatically discovers any new change deployed to the tested environment, issues new tailored security tests, and validates findings in close to real time.
  • Accuracy. Provides near-zero false positives, with vulnerability evidence for true vulnerabilities.

The post How these vendors help companies with cloud-native security appeared first on SD Times.

Infrastructure management going extinct with serverless https://sdtimes.com/softwaredev/infrastructure-management-going-extinct-with-serverless/ Fri, 03 Sep 2021 13:30:48 +0000

The post Infrastructure management going extinct with serverless appeared first on SD Times.

It’s no surprise that organizations are trying to do more with less. In the case of managing infrastructure, they’re in fact trying to do much more in the area of provisioning software — not by lessening it but by eliminating infrastructure altogether, through the use of serverless technology. 

According to Jeffrey Hammond, vice president and principal analyst at Forrester, one in four developers now regularly deploy to public clouds using serverless technology, up from 19% to 24% since last year. That compares with the 28% of respondents who said they regularly deploy with containers.

The main reason containers are a little ahead is that when organizations are trying to modernize existing apps, it’s a little easier to go from a virtual machine to a container than it is to embrace a serverless architecture, especially with something like AWS Lambda, which requires writing applications that are stateless, according to Hammond.
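
The statelessness Hammond mentions is worth illustrating: a Lambda-style function must not assume anything survives between invocations. Below is a minimal sketch of a stateless handler; the event shape and names are illustrative, not taken from any specific application.

```python
import json

def handler(event, context=None):
    """A Lambda-style handler: all inputs arrive in `event`, and the
    result is returned rather than kept in process memory. Any state
    that must outlive the invocation (counters, sessions) belongs in
    an external store such as a database or cache."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the platform may freeze, recycle, or run many copies of the process, module-level variables are at best a cache and never a source of truth.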

Also, the recently released Shift to Serverless survey conducted by the cloud-native programming platform provider Lightbend found that 83% of respondents said they were extremely satisfied with their serverless application development solutions. However, only a little over half of the organizations expect that making the switch to serverless will be easy.

“If I just basically want to run my code and you worry about scaling it then a serverless approach is a very effective way to go. If I don’t want to worry about having to size my database, if I just want to be able to use it as I need it, serverless extensions for things like Aurora make that a lot easier,” Hammond said. “So basically as a developer, when I want to work at a higher level, when I have a very spiky workload, when I don’t particularly care to tune my infrastructure, I’d rather just focus on solving my business problem, a serverless approach is the way to go.” 

While serverless is seeing a pickup in new domains, Doug Davis, who heads up the CNCF Serverless Working Group and is an architect and product manager at IBM Cloud Code Engine, said that the main change in serverless is not in the technology itself, but rather providers are thinking of new ways to reel people in to their platforms. 

“Serverless is what it is. It’s finer-grain microservices, it’s scale to zero, it’s pay-as-you-go, ignore the infrastructure and all that good stuff. What I think might be sort of new in the community at large is more just, people are still trying to find the right way to expose that to people,” Davis said. “But from the technology perspective, I’m not sure I see a whole lot necessarily changing from that perspective because I don’t think there’s a whole lot that you can change right now.”

Abstracting away Kubernetes 

The major appeal for many organizations moving to serverless is just that they want more and more of the infrastructure abstracted away from them. While Kubernetes revolutionized the way infrastructure is handled, many want to go further, Davis explained.

“As good as Kubernetes is from a feature perspective, I don’t think most people will say Kubernetes is easy to use. It abstracts the infrastructure, but then it presents you with different infrastructure,” Davis said. 

While people don’t need to know which VM they’re using with Kubernetes, they still have to know about nodes; and even though they don’t need to know which load balancer they’re using, there is still the load balancer to manage.

“People are realizing not only do I not want to worry about the infrastructure from a service perspective, but I also don’t want to worry about it from a Kubernetes perspective,” Davis said. “I just want to hand you my container image or hand you my source code, and you go run it all for me. I’ll tweak some little knobs to tell you what fine-tuning I want to do on it. That’s why I think projects like Knative are kind of popular, not just because it’s a new flavor of serverless, but because it hides Kubernetes.”

Davis said there needs to be a new way to present it as hiding the infrastructure, going abstract in a way, and just handing over the workload in whatever form is desired, rather than getting bogged down thinking, is this serverless, platform-as-a-service, or container-as-a-service. 

However, Arun Chandrasekaran, a distinguished vice president and analyst at Gartner, said that whereas serverless abstracts more things away from the user, things like containers and Kubernetes are more open-source oriented so the barrier to entry within the enterprise is low. Serverless can be viewed as a little bit of a “black box,” and a lot of the functional platforms today also tend to be a little proprietary to those vendors.

“So serverless has some advantages in terms of the elasticity, in terms of the abstraction that it provides, in terms of the low operational overhead to the developers. But on the flip side, your application needs to fit into an event-driven pattern in many cases to be fit for using serverless functions. Serverless can be a little opaque compared to running things like containers,” Chandrasekaran said. “I kind of think of serverless and containers as being that there are some overlapping use cases, but I think by and large, they address very different requirements for customers at this point in time.”

Davis said that some decision-makers are still wary of relinquishing control over their infrastructure, because in the past that often equated to reduced functionality. But with the way that serverless stands now, users won’t be losing functionality; instead, they’ll be able to access it in a more streamlined way.

“I don’t think they buy that argument yet and I think they’re skeptical. It’s going to take time for them to believe,” Davis said. “This really is a fully-featured Kubernetes under the covers.”

Other challenges that stifle adoption include the difficulty developers have with shifting to asynchronous work. Some would also like more control over their runtime, including the autoscaling, security, and tenancy models, according to Forrester’s Hammond.

Hammond added that he is starting to see a bit of an intersection between serverless and containers, but the main thing that sets serverless apart is its auto-scaling features.

Vendors are defining serverless

Serverless as a term is expanding, and some cloud vendors have started to label as serverless any service where one doesn’t have to provision or manage the infrastructure.

Even though these services are not serverless functions, one could argue that they’re broadly part of serverless computing, Gartner’s Chandrasekaran explained. 

For example, there are services like Athena, an interactive query service from Amazon, or Fargate, a way to run containers without operating the container environment.

However, Roman Shaposhnik, the co-founder and VP of product and strategy at ZEDEDA, as well as a member of the board of directors for Linux Foundation Edge and vice president of the Legal Affairs Committee at the Apache Software Foundation, said that the term serverless is a bit confusing at the moment, and that people typically mean two different things when they talk about serverless. Clearly defining the technology is essential to spark interest from more people.

“Google has these two services and they kind of very confusingly call them serverless in both cases. One is called Google Functions and the other one is Google Run and people are just constantly confused. Google was such an interesting case for me because I for sure expected Google to at least unify around Knative. Their Google Cloud Functions is completely separate, and they don’t seem to be interested in running it as an open-source project,” Shaposhnik said. “This is very emblematic of how the industry is actually confused. I feel like this is the biggest threat to adoption.”

This large basket of products has created an API sprawl rather than a tool sprawl: the public cloud typically offers so much that if developers wanted to replicate all of it in an open-source serverless offering like OpenWhisk from the Apache Software Foundation, they would have to build a lot of things that they have no interest in building.

“This is not even because vendors are evil. It’s just because only vendors can give you the full gamut of the APIs that would be meaningful to what they are actually offering you, because like 90% of their APIs are closed-source and proprietary anyway. And if you want to make them effective, well, you might as well use a proprietary serverless platform. Like, what’s the big deal, right?” Shaposhnik said.

Serverless commits users to a certain viewpoint that not all might necessarily enjoy. If companies are doing a lot of hybrid work, if they need to support multiple public clouds and especially if they have some deployments in a private data center, it can get painful pretty quickly, Shaposhnik explained.

OpenFaaS, an open-source framework and infrastructure preparation system for building serverless applications, is trying to fill that niche by finding the sweet spot between what can be automated easily and the aspects that remain difficult.

“If you have enough of those easy things that you can automate, then you should probably use OpenFaaS, but everything else actually starts making less sense because if your deployment is super heterogeneous, you are not really ready for serverless,” Shaposhnik said. 

In general, there is not much uptick with open-source serverless platforms because they need to first find a great environment to be embedded in. 

“Basically at this point, it is a bit of a solution looking for a problem, and until that bigger environment and to which it can be embedded successfully appears, I don’t think it will be very interesting.”

In the serverless space, proprietary vendor-specific solutions are the ones that are pushing the space forward. 

“I would say open-source is not as compelling as in some other spaces, and the reason is I think a lot of developers prefer open-source not necessarily because it’s free as in freedom but because it’s free as in beer,” Forrester’s Hammond said.

Because organizations pay by the gigabyte-second for most functions, developers can now experiment, prototype, and prove value at very low cost. And most of them seem willing to pay for that in order to have all the infrastructure managed for them.
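
The gigabyte-second billing model is simple enough to sketch as arithmetic: compute cost is memory × duration × a per-GB-second rate, plus a small per-request charge. The prices below are illustrative placeholders, not any vendor’s current rate card.

```python
def monthly_cost(invocations, memory_gb, duration_s,
                 price_per_gb_s=0.0000166667, price_per_request=0.0000002):
    """Rough serverless cost model: compute billed per GB-second,
    plus a per-request fee. Rates here are illustrative only."""
    gb_seconds = invocations * memory_gb * duration_s
    return gb_seconds * price_per_gb_s + invocations * price_per_request

# One million 200 ms invocations at 128 MB comes to about $0.62 at
# these illustrative rates, which is why prototyping is so cheap.
cost = monthly_cost(invocations=1_000_000, memory_gb=0.125, duration_s=0.2)
```

The same arithmetic also explains why long-running, memory-heavy functions get expensive quickly: cost scales linearly with both duration and allocated memory.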

“So you do see some open source here, but it’s not necessarily at the same level as something like Kafka or PostgreSQL or any of those sorts of open-source libraries,” Hammond said.

With so many functionalities to choose from, some organizations are looking to serverless frameworks to help manage how to set up the infrastructure. 

Serverless frameworks can deploy all the serverless infrastructure needed; they deploy one’s code and infrastructure through a simpler, more abstract experience.

In other words, “you don’t need to be an infrastructure expert to deploy a serverless architecture on AWS if you use these serverless frameworks,” Austen Collins, the founder and CEO of the Serverless Framework, said. 

Collins added that the Serverless Framework that he heads has seen a massive increase in usage over the duration of the pandemic, starting at 12 million downloads at the beginning of 2020 and now at 26 million. 

“I think a big difference there between us and a Terraform project is developers use us. They really like Serverless Framework because it helps them deliver applications where Terraform is very much focused on just the hardcore infrastructure side and used by a lot of Ops teams,” Collins said. 

The growth in the framework can be attributed to the expanding use cases of serverless and every time that there is a new infrastructure as a service (IaaS) offering. “The cloud really has nowhere else to go other than in a more serverless direction,” Collins added.

Many organizations are also realizing that they’re not going to be able to keep up with the hyper-competitive, innovative era if they’re trying to maintain and scale their software all by themselves.

“The key difference that developers and teams will have to understand is that number one, it lives exclusively on the cloud so you’re using cloud services. You can’t really spin up this architecture on your machine as easily. And also the development workflow is different, and this is one big value of Serverless Framework,” Collins said. “But, once you pass that hurdle, you’ve got an architecture with the lowest overhead out of anything else on the market right now.”

All eyes are on serverless at the edge

The adoption of serverless has been broad-based, but the larger organizations tend to embrace it a bit more, especially if they need to provide a global reach to their software infrastructure and they don’t want to do that on top of their own hardware, Forrester’s Hammond explained. 

In the past year, the industry started to see more interest in edge and edge-oriented deployments, where customers wanted to apply some of these workloads in edge computing environments, according to Gartner’s Chandrasekaran.

This is evident in content delivery network (CDN) companies such as Cloudflare, Fastly, or Akamai, which are all bringing new serverless products to market that primarily focus on edge computing. 

“It’s about scale-up, which is to really quickly scale and massively expand, but it’s also about scaling down when data is not coming from IoT endpoints. I don’t want to use the infrastructure and I want the resources to be de-provisioned,” Chandrasekaran said. “Edge is all about rapid elasticity.”

The serverless compute running in the edge is a use case that has the possibility of creating new types of architectures to change the way that applications were previously built to process compute closer to the end-user for faster performance, according to Collins. 

“So an interesting example, this is just how we’re leveraging it. We’ve got serverless.com actually processed using Cloudflare Workers at the edge. And it’s all on one domain, but the different paths are pointing to different architectures. So it’s the same domain, but we have compute running that looks at the path and forwards the request to different technology stacks. So one for our documentation, one for our landing pages, and whatnot,” Collins said. “So there’s a bunch of new architectural patterns that are opening up, thanks to running serverless at the edge.”
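
The pattern Collins describes, one domain with edge compute routing each path to a different backend stack, can be sketched in a few lines. This is an illustrative stand-in for an edge worker, not Cloudflare’s actual API; the route table and origin URLs are hypothetical.

```python
# Hypothetical routing table: each path prefix maps to a separate
# backend stack, all served under a single domain.
ROUTES = {
    "/docs": "https://docs-backend.example.com",
    "/blog": "https://blog-backend.example.com",
}
DEFAULT_ORIGIN = "https://landing-backend.example.com"

def route(path):
    """Pick a backend by longest matching path prefix, the way an
    edge worker might forward requests to different stacks."""
    for prefix, origin in sorted(ROUTES.items(),
                                 key=lambda kv: len(kv[0]), reverse=True):
        if path.startswith(prefix):
            return origin + path
    return DEFAULT_ORIGIN + path
```

In a real worker the returned URL would be fetched and the response streamed back, so the user only ever sees the single public domain.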

Another major trend that the serverless space has seen is the growth of product extension models for integrations. 

“If you’ve got a platform as a company and you want developers to extend it and use it and integrate it into their day-to-day work, the last thing you want to do is say, well now you’ve got to go stand up infrastructure on your own premises or in another public cloud provider, just so that you can take advantage of our APIs,” Forrester’s Hammond said. “I think increasingly, we will use serverless concepts as the glue by which we weld all of these cloud-based platforms together.”

The extensions also involve continued improvements to serverless functions, such as adding more programming languages and enhancing the existing tooling in areas like security and monitoring.

For those companies that are sold on a particular cloud and don’t really care about multicloud or whether Amazon is locking them in, for example, Shaposhnik said not using serverless would be foolish. 

“Serverless would give you a lot of bang for the buck effectively scripting and automating a lot of the things that are happening within the cloud,” Shaposhnik said.

Serverless is the architecture for volatility

Serverless now seems to be the architecture for volatility, given the business uncertainty brought on by the pandemic.

“Everyone seems to be talking about scaling up, but there’s this whole other aspect of what about if I need to scale down,” Serverless Framework founder and CEO Austen Collins said. 

A lot of businesses that deal with events, sports, and anything in-person have had to scale down operations almost immediately when a shutdown was ordered.

For those running on a serverless architecture, operations scale down without anyone having to do anything.

The last 16 months have also seen a tremendous amount of employee turnover, especially in tech, so organizations are looking to adopt a way to be able to quickly onboard new hires by abstracting a lot of the infrastructure away, Collins added. 

“I think it’s our customers that have had serverless architectures, which don’t require as much in-house expertise as running your own Kubernetes clusters, that have really weathered this challenge better than anyone else,” Collins said. “Now we can see the differences: whenever there’s a mandate to shut down different types of businesses, the usage of people and applications scales down immediately, and scaling up when things are opening up again is immediate too; they don’t have to do anything. The decision-makers are often now citing these exact concerns.”

A serverless future: A tale of two companies 

Carla Diaz, the cofounder of Broadband Search, a company that aims to make it easier to find the best internet and television services in an area, has been looking at adopting a serverless architecture since it is now revamping its digital plans. 

“Since most of the team will be working from home rather than from the office, it doesn’t make sense to continue hosting servers when adopting a cloud-based infrastructure. Overall, that is the main appeal of going serverless, especially if you are beginning to turn your work environment into a hybrid environment,” Diaz said. 

Overall, maintenance costs and committed downtime are just some of the things the company no longer needs to worry about, thanks to the scalability of serverless architecture.

Another reason Broadband Search is interested in moving to a cloud-based system is that the company no longer has to worry about the cost of buying more hardware, which can already be quite expensive, nor the costs of maintaining more equipment and the possible downtime if the integration is extensive.

“By switching and removing the hardware component, the only real cost is to pay for the service which will host your data off-site and allow you to scale your business’ IT needs either back or forward as needed,” Diaz added. 

Dmitriy Yeryomin, a senior Golang developer at iTechArt Group, a one-stop source for custom software development, said that many of the 250-plus active projects within the company use serverless architecture. 

“This type of architecture is not needed in every use case, and you should fully envision your project before considering serverless, microservice, or monolith architecture,” Yeryomin said. 

For the company’s projects, Yeryomin said serverless helps divide the system into fast coding and deployment sequences, making solutions high-performance and easily scalable.

“In terms of benefits, serverless applications are well-suited to deploying and redeploying to the cloud, while conveniently setting the environmental and security parameters,” Yeryomin said. “I work mostly with AWS, and the UI has perfect tools for monitoring and testing services. Also, local invokes are great for testing and debugging services.”

However, the most challenging thing with serverless is time: the longer a Lambda function execution is configured to run, the more expensive it becomes.

“You can’t store data inside the function for longer than the function runs,” Yeryomin explained. “So background jobs are not for serverless applications.”

SD Times Open-Source Project of the Week: CodeFlare https://sdtimes.com/ai/sd-times-open-source-project-of-the-week-codeflare/ Fri, 13 Aug 2021 13:00:14 +0000 https://sdtimes.com/?p=45010 CodeFlare is IBM’s open-source framework that simplifies the integration and scaling of big data and AI workflows onto the hybrid cloud.  It drastically reduces the time to set up, run, and scale machine learning tests and expands on the functionality of Ray.  Also, CodeFlare pipelines run with ease on IBM’s new serverless platform IBM Cloud … continue reading

The post SD Times Open-Source Project of the Week: CodeFlare appeared first on SD Times.

CodeFlare is IBM’s open-source framework that simplifies the integration and scaling of big data and AI workflows onto the hybrid cloud. 

It drastically reduces the time to set up, run, and scale machine learning tests and expands on the functionality of Ray. 

Also, CodeFlare pipelines run with ease on IBM’s new serverless platform IBM Cloud Code Engine and Red Hat OpenShift to allow users to deploy anywhere and extend the benefits of serverless to data scientists and AI researchers. 

It also makes it easier to integrate and bridge with other cloud-native ecosystems by providing adapters to event triggers (such as the arrival of a new file), and load and partition data from a wide range of sources, such as cloud object storages, data lakes, and distributed filesystems.

“CodeFlare should also mean developers aren’t having to duplicate their efforts or struggle to figure out what colleagues have done in the past to get a certain pipeline to run,” Carlos Costa and Priya Nagpurkar, research staff member and director of cloud platform research at IBM’s T.J. Watson Research Center, respectively, wrote in a blog post. “With CodeFlare, we aim to give data scientists richer tools and APIs that they can use with more consistency, allowing them to focus more on their actual research than the configuration and deployment complexity.”

Speed, security and reliability are now one https://sdtimes.com/canary/speed-security-and-reliability-are-now-one/ Fri, 18 Jun 2021 18:12:19 +0000 https://sdtimes.com/?p=44444 Companies around the world and across many industries have felt the pressure to release faster, yet they struggle to do so in a safe and reliable way that doesn’t compromise user trust.  A lot of these companies think there’s a dichotomy between whether you can move fast or increase value.  “I think the move fast … continue reading

The post Speed, security and reliability are now one appeared first on SD Times.

Companies around the world and across many industries have felt the pressure to release faster, yet they struggle to do so in a safe and reliable way that doesn’t compromise user trust. 

A lot of these companies think there’s a dichotomy: you can either move fast or increase value.

“I think the move fast and break things got a bad rap. It’s kind of horrifying to think, Hey, a developer that I’m not even talking to could suddenly blow up my entire customer base without all these gates,” said Edith Harbaugh, the CEO of LaunchDarkly, during a recent SD Times Live! tech talk.

However, releasing slower today could actually make the software more unsafe, according to Harbaugh.

“If you’re doing the old software releases of 20 years ago where you do a release every year, every release has so much heft, weight and gravity behind it,” said Harbaugh.

Not only are the releases heavy in technical complexity, requiring developers to check all of these different branches and features, but they are also risky from a business perspective because the value that was planned a year ago might not even be relevant anymore. This could cause a large release to flop when out in the field. 

With the proper distributed architectures and guardrails that limit the blast radius, both speed and value are mutually possible. 

One such method for safer deployments is the canary deployment, which can shrink the blast radius from 100% of the user base down to perhaps 1% of the most progressive users.
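
One common way to hold a canary to a fixed slice of users is to hash a stable user identifier into a bucket, so the same users stay in the canary across requests. A minimal sketch follows; the hashing scheme is illustrative, and real systems layer targeting rules and gradual ramp-ups on top.

```python
import hashlib

def in_canary(user_id, percent=1.0):
    """Deterministically place `percent` of users in the canary.
    Hashing a stable ID (rather than sampling randomly per request)
    keeps each user's experience consistent across requests."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000
    return bucket < percent * 100  # `percent`% of 10,000 buckets
```

Ramping the rollout up is then just a matter of raising `percent`; users already in the canary stay in it as the slice grows.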

Canaries are typically an engineering activity and feature flags – which are a core part of this activity – help unlock value way up in the stack, according to DROdio, the CEO of Armory.

“You have to have the seatbelt on before you want to drive the Ferrari fast. The company has to have that psychological safety to be able to flip that cost-benefit analysis in their heads that it is worth deploying out to that 1% of the population so you can deploy 10 or 100x faster,” DROdio said.
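
Feature flags, as Harbaugh and DROdio describe them, decouple deploy from release: the code ships dark, and a runtime check decides who sees it. The sketch below is an in-process illustration only; real systems such as LaunchDarkly evaluate flags against remotely managed rules, whereas the flag store here is a plain dict with hypothetical names.

```python
FLAGS = {
    # flag name -> rule: enabled globally, or only for listed users
    "new-checkout": {"enabled": False, "allow_users": {"qa-team", "beta-1"}},
}

def flag_enabled(name, user_id):
    """Evaluate a feature flag for a user; unknown flags are off."""
    rule = FLAGS.get(name)
    if rule is None:
        return False
    return rule["enabled"] or user_id in rule["allow_users"]

def checkout(user_id):
    if flag_enabled("new-checkout", user_id):
        return "new flow"   # the risky new path, released to a few users
    return "old flow"       # everyone else keeps the proven path
```

Flipping `enabled` to `True` releases the feature to everyone without a redeploy, and flipping it back is the kill switch that limits the blast radius.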

Also, distributed architectures such as microservices, serverless, Docker, or Kubernetes limit the blast radius so that any one change becomes a lot less risky.  

Once the mindset of an organization is changed to be able to validate changes, get more into production and get real usage in, releasing at cadences of up to even multiple times a day gets a lot less terrifying, according to Joe Duffy, the CEO of Pulumi.

Another benefit of a faster production cycle is that developers will also get quick feedback on all the features that they are working on and have more incentive to constantly interact with that feature’s code. 

“I think of developers as artists. They have code and they want to get their code out into the world and they want to learn from that code as quickly as possible so that they can have an iterative cycle,” DROdio said. “I don’t know that executives often understand that there’s anything more soul-sucking for a developer than having code sit on the shelf for a month or a quarter and it makes the best developers not want to work at companies that have that lack of sophistication.”

Listen to the full tech talk here.
