microservices Archives - SD Times
https://sdtimes.com/tag/microservices/

Jakarta EE 10 released with microservices capabilities
https://sdtimes.com/java/48985/ - Thu, 22 Sep 2022 20:47:15 +0000

The Jakarta EE 10 Platform, Web Profile, and new Core Profile Specifications were released today, introducing new features for building modernized, simplified, and lightweight cloud-native Java applications. 

“This release is the ‘big one’ that plants Jakarta EE firmly in the modern era of microservices and containers,” said Mike Milinkovich, executive director of the Eclipse Foundation. “The release of Jakarta EE 10 reflects the work of a global community of contributors, with leadership from vendors such as Fujitsu, IBM, Oracle, Payara, and Tomitribe. Jakarta EE has already helped breathe new life into enterprise Java, but with this release it has now delivered key innovations for the cloud-native era, which are critical to the future of our industry.” 

The new versions provide new functionality through version updates to more than 20 component specifications. These include Jakarta Contexts and Dependency Injection (CDI) 4.0 and Jakarta RESTful Web Services 3.1, which standardizes a Java SE Bootstrap API. Also new are Jakarta Security 3.0, with support for OpenID Connect, and new functions in Jakarta Persistence queries. Developers can also create Jakarta Faces views in pure Java.
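To get a feel for the new Java SE Bootstrap API, here is a minimal sketch, assuming the jakarta.ws.rs.SeBootstrap entry point defined by Jakarta RESTful Web Services 3.1 and a 3.1-compatible implementation (such as Jersey) on the classpath; the resource class and port are illustrative only:

```java
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.SeBootstrap;
import jakarta.ws.rs.core.Application;
import java.util.Set;

@Path("/hello")
class HelloResource {
    @GET
    public String hello() {
        return "Hello from Jakarta EE 10";
    }
}

public class RestBootstrapDemo {
    public static void main(String[] args) throws InterruptedException {
        // Illustrative port; any free local port works.
        SeBootstrap.Configuration config =
                SeBootstrap.Configuration.builder().port(8080).build();

        // Start the REST application on plain Java SE, without a full application server.
        SeBootstrap.start(new Application() {
            @Override
            public Set<Class<?>> getClasses() {
                return Set.of(HelloResource.class);
            }
        }, config).thenAccept(instance ->
                System.out.println("REST application started on port 8080"));

        // Keep the JVM alive so the embedded server can serve requests.
        Thread.currentThread().join();
    }
}
```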

According to a news release from open-source application server provider Payara, Jakarta EE 10 is the first major release of Jakarta EE since the major namespace update, brought by Jakarta EE 9. 

With Jakarta EE 9, the package namespace javax moved to jakarta across the Jakarta EE 9 Platform, Web Profile specifications, and related TCKs. “With Jakarta EE 10, we see the first release in the new namespace that also adds functionality for the Jakarta EE user,” the company wrote in its announcement. “The baseline Java JDK used is also changing, from Java 8 to Java 11 at API level, and Java 17 for runtimes. For Jakarta EE 8 users moving to Jakarta EE 10, all Jakarta EE imports in the code will need to be changed to the new namespace.”

For example, the release noted, the messaging package javax.jms must become jakarta.jms, and Jakarta Persistence, heavily used in Hibernate and Spring, must move from javax.persistence to jakarta.persistence.
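In practice, much of the migration is an import-level change. Below is a before-and-after sketch for a simple persistence entity; the entity itself is invented for illustration:

```java
// Jakarta EE 8 and earlier: the javax namespace.
// import javax.persistence.Entity;
// import javax.persistence.Id;

// Jakarta EE 9/10: the same types under the jakarta namespace.
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
public class PurchaseOrder {
    @Id
    private Long id;

    public Long getId() {
        return id;
    }
}
```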

Also, Payara said, new Java SE features can now be used with Jakarta EE 10; some of these are CompletableFuture, Fork/Join pools, and better integration with new technologies like OpenID. Payara Community users will be able to make use of these changes straightaway, thanks to the Jakarta EE 10-compatible Payara 6 Community Alpha 4.

Meanwhile, the new Core Profile offers Jakarta EE specifications that target smaller runtimes for microservices development, including a new CDI-Lite specification that enables compiling to native by providing build-compatible extensions.  

Developers can now develop and deploy Jakarta EE 10 applications on Java SE 11 and SE 17 and take advantage of new features from SE 9 and SE 11. They also have access to simplified application development through the broader use of additional annotations.

 

SD Times Open-Source Project of the Week: Luos
https://sdtimes.com/software-development/sd-times-open-source-project-of-the-week-luos/ - Fri, 16 Sep 2022 13:00:07 +0000

Luos is an open-source lightweight library that enables developers to develop and scale their edge and embedded distributed software. 

Developers can create portable and scalable packages that they can share with teams and communities, and the project’s engine encapsulates embedded features in services with APIs, providing direct access to hardware.

Remote control enables users to access the topology and routing table from anywhere, and they can monitor their devices with several SDKs, including Python, TypeScript, and a browser app, with others coming soon. Luos detects all services in a system and allows one to access and adapt to any feature anywhere.

“Most of the embedded developments are made from scratch. By using the Luos engine, you will be able to capitalize on the development you, your company, or the Luos community already did. The re-usability of features encapsulated in Luos engine services will fasten the time your products reach the market and reassure the robustness and the universality of your applications,” the developers behind the project wrote on its website. 

Additional features that Luos can power include event-based polling, service alias management, data auto-update, self-healing, and more.

Continuous test data management for microservices, Part 2: Key steps
https://sdtimes.com/test/continuous-test-data-management-for-microservices-part-2-key-steps/ - Tue, 14 Jun 2022 15:27:51 +0000

This is part 2 in a series on applying test data management (TDM) to microservices. Part 1 can be found here.


The continuous TDM process for microservices applications is similar to that for general continuous TDM, but tailored to the nuances of the architecture. The key differences are as follows: 

Step 1(b): Agile Design

Rigorous change impact analysis during this step is key to reducing the testing (and the TDM) burden for microservices applications—especially in the upper layers of the test pyramid and the CD stages of the lifecycle. There are various ways to do this; the following are a few highlights:

(a)   Code-change-based impact analysis (also known as a white-box, inside-out approach). Through this approach, we identify which services and transactions are affected by specific code changes in implementing backlog requirements. We then focus testing and TDM efforts on those services and transactions affected. This approach is supported by tools such as Broadcom TestAdvisor and Microsoft Test Impact Analysis. This approach is more useful for white and gray box testing, specifically unit and component testing.  

(b)  Model flow-based impact analysis (also known as a black-box, outside-in approach). Here we do change impact analysis using flows in model-based testing. This analysis helps to highlight key end-to-end or system integration scenarios that need to be tested, and can also be traced down to individual components and source code. This approach is supported by such tools as Broadcom Agile Requirements Designer, and is more beneficial for testing in the upper layers of the test pyramid. 

I recommend a combination of both approaches to ensure sufficient test coverage, while minimizing the number of tests in a microservices context. Based on the change impact set, we prepare test data for the tests discussed in the previous section. 

Step 2(a): Agile Parallel Development 

As discussed in the previous section, as part of development, a component developer must also define and implement these APIs:

  • APIs that allow us to set test data values in the component data store. These are sometimes referred to as mutator APIs.
  • APIs that allow us to extract test data values, for example, from instances of components in production. These are also known as accessor APIs. (A minimal sketch of both appears after this list.)
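What these APIs look like is service-specific, but the following minimal sketch shows one hypothetical shape for a mutator and an accessor endpoint using Jakarta REST; the resource path, payload type, and in-memory store are all invented for illustration and stand in for the component’s real, encapsulated persistence layer:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;

// Hypothetical record shape and in-memory store, standing in for the
// component's real data store.
class Customer {
    public long id;
    public String name;
}

class CustomerStore {
    private static final Map<Long, Customer> DATA = new ConcurrentHashMap<>();
    static void save(Customer c) { DATA.put(c.id, c); }
    static Customer find(long id) { return DATA.get(id); }
}

// Test-data endpoints exposed by the component itself, so callers never
// need direct access to its data store.
@Path("/test-data/customers")
public class CustomerTestDataResource {

    // Mutator API: seed a known record into the component's data store.
    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response seed(Customer customer) {
        CustomerStore.save(customer);
        return Response.status(Response.Status.CREATED).build();
    }

    // Accessor API: extract current state, e.g. to capture production-like
    // values or to verify outcomes after a test run.
    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Customer read(@PathParam("id") long id) {
        return CustomerStore.find(id);
    }
}
```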

Developers should use the white-box change impact testing technique discussed above to focus their unit and component testing efforts. 

Step 2(b): Agile Parallel Testing

This is an important stage in which testers and test data engineers design, or potentially generate or refresh, the test data for test scenarios that have been impacted by changes and that will be run in subsequent stages of the CI/CD lifecycle. This assessment is based on the backlog items under development. Testers use the TDM approaches described above for cross-service system testing and end-to-end testing.  

In addition, the test data will need to be packaged, for example, in containers or using virtual data copies. This approach can ease and speed provisioning into the appropriate test environment, along with test scripts and other artifacts.  

Step 3: Build

In this step, we typically run automated build verification tests and component regression tests using the test data generated in the previous step. 

Step 4: Testing in the CD Lifecycle Stages 

The focus in these stages is to run tests in the upper layers of the test pyramid using test data created during step 2(b).  The key in these stages is to minimize the elapsed time TDM activities require. This is an important consideration: The time required to create, provision, or deploy test data must not exceed the time it takes to deploy the application in each stage.  

How do you get started with continuous TDM for microservices?

Continuous TDM is meant to be practiced in conjunction with continuous testing. Various resources offer insights into evolving to continuous testing. If you are already practicing continuous testing with microservices, and want to move to continuous TDM, proceed as follows:   

  • For new functionality, follow the TDM approach I have described. 
  • For existing software, you may choose to focus continuous TDM efforts on the most problematic or change-prone application components, since those are the ones you need to test most often. It would help to model the tests related to those components, since you can derive the benefits of combining TDM with model-based testing. While focusing on TDM for these components, aggressively virtualize dependencies on other legacy components, which can lighten your overall TDM burden. In addition, developers must provide APIs to update and access the test data for their components. 
  • For other components that do not change as often, you need to test less often. As described above, virtualize these components while testing others that need testing. In this way, teams can address TDM needs as part of technical debt remediation for these components. 

Continuous test data management for microservices, Part 1: Key approaches
https://sdtimes.com/microservices/continuous-test-data-management-for-microservices/ - Mon, 06 Jun 2022 16:52:38 +0000

Applying TDM to microservices is quite challenging, because an application may have many services, each with its own diverse underlying data store. Also, there can be intricate dependencies between these services, resulting in a type of ‘spaghetti architecture.’

For these systems, TDM for end-to-end system tests can be quite complex. However, it lends itself very well to the continuous TDM approach. As part of this approach, it is key to align TDM with the test pyramid concept.

Let’s look at the TDM approaches for tests in the various layers of the pyramid. 

TDM Approach for Supporting Microservices Unit Tests

Unit tests exercise the code within the microservice at the lowest level of granularity, typically at a function or method level within a class or object. This is no different from how we do unit testing for other types of applications. Most test data for such tests should be synthetic. Such data is typically created by the developer or software development engineer in test (SDET), who uses “as-code” algorithmic techniques, such as combinatorial generation. Through this approach, teams can establish a high level of test data coverage. While running unit tests, we recommend that all dependencies outside the component (or even the function being tested) are stubbed out using mocks or virtual services.
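As a rough illustration of the “as-code” idea, the sketch below enumerates combinatorial (cartesian-product) test data for a hypothetical discount rule; the input domains and the record shape are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class CombinatorialTestData {

    // Invented value domains for three inputs of a hypothetical discount rule.
    static final String[] CUSTOMER_TIERS = {"BASIC", "SILVER", "GOLD"};
    static final int[] ORDER_TOTALS = {0, 99, 100, 5_000};
    static final boolean[] FIRST_ORDER = {true, false};

    record Case(String tier, int total, boolean firstOrder) {}

    // Enumerate the full cartesian product of input values, giving high
    // test-data coverage for a small, fast unit-level test.
    static List<Case> allCombinations() {
        List<Case> cases = new ArrayList<>();
        for (String tier : CUSTOMER_TIERS)
            for (int total : ORDER_TOTALS)
                for (boolean first : FIRST_ORDER)
                    cases.add(new Case(tier, total, first));
        return cases;
    }

    public static void main(String[] args) {
        for (Case c : allCombinations()) {
            // Each generated case would be fed to the unit under test here.
            System.out.println(c);
        }
    }
}
```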

TDM Approach for Supporting Microservices Component or API Tests

This step is key for TDM of microservices, since the other tests in the stack depend on it.  In these tests, we prepare the test data for testing the microservice or component as a whole via its API.

There are various ways of doing this depending on the context: 

  1. Generate simple synthetic test data based on the API specs. This is typically used for property-based testing or unit testing of the API.
  2. Generate more robust synthetic test data from API models, for example, by using a test modeling tool like Broadcom Agile Requirements Designer. This enables us to do more rigorous API testing, for example for regression tests.
  3. Generate test data by traffic sniffing a production instance of the service, for example, by using a tool like Wireshark. This helps us create more production-like data. This approach is very useful if for some reason it isn’t possible to take a subset of data from production instances. 
  4. Generate test data by sub-setting and masking test data from a production instance of the service, or by using data virtualization. Note that many microservice architectures do not allow direct access to the data store, so we may need special data access APIs to create such test data. (A minimal masking sketch appears after this list.)
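To make the masking step in approach 4 concrete, here is a small illustrative sketch that deterministically masks personally identifiable fields on a record pulled from a production subset; the record shape and masking rules are assumptions, not a prescribed implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public class TestDataMasker {

    // Illustrative record representing a row extracted from a production subset.
    record CustomerRow(String id, String fullName, String email) {}

    // Deterministic masking: the same input always yields the same masked value.
    static String pseudonymize(String value) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(value.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(hash, 0, 8); // short, stable token
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    static CustomerRow mask(CustomerRow row) {
        // Keep the key as-is, mask the PII fields.
        return new CustomerRow(
                row.id(),
                "user-" + pseudonymize(row.fullName()),
                pseudonymize(row.email()) + "@example.test");
    }

    public static void main(String[] args) {
        CustomerRow production = new CustomerRow("42", "Ada Lovelace", "ada@example.com");
        System.out.println(mask(production));
    }
}
```

Because the masking is deterministic, the same source value always maps to the same masked value, which helps preserve referential integrity when the same data appears in more than one place.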

Regardless of the approach, in most cases the test data for a microservice must be fabricated by the developer or producer of the microservice, and made available as part of the service definition. Specifically, additional APIs should be provided to set up the test data for that component. This is necessary to allow for data encapsulation within a microservice. It is also required because different microservices may have various types of data stores, often with no direct access to the data.

This also allows the TDM of microservices applications to re-use test data, which enables teams to scale tests at higher layers of the pyramid. For example, a system or end-to-end test may span hundreds of microservices, with each having its own unique encapsulated data storage. It would be very difficult to build test data for tests that span different microservices using traditional approaches.   

Again, for a single component API test, it is recommended that all dependencies from the component be virtualized to reduce the TDM burden placed on dependent systems. 

TDM Approach for Supporting Microservices Integration and Contract Tests

These tests validate the interaction between microservices based on behaviors defined in their API specifications.

The TDM principles used for such testing are generally the same as for the process for API testing described previously. The process goes as follows: 

For contract definition, we recommend using synthetic test data, for example, based on the API specs, to define the tests for the provider component. 

The validated contract should be a recorded virtual service based on the provider service. This virtual service can then be used for consumer tests. Note that in this case, a virtual service recording forms the basis of the test data for the consumer test. 

TDM Approach for Supporting a Cross-Service System Test or Transaction Test at the API Level

In this type of test, we have to support a chain of API calls across multiple services. For example, this type of test may involve invoking services A, B, and C in succession.

The TDM approach for supporting this type of test is essentially the same as that for a single API test described above—except that we need to set up the test data for each of the services involved in the transaction. 
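As an illustration, the sketch below seeds test data into three hypothetical services through the kind of test data setup APIs discussed above, before the transaction itself is exercised; the URLs, payloads, and endpoint path are invented for the example:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CrossServiceTestSetup {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Seed one service's data store through its (hypothetical) test-data setup API.
    static void seed(String serviceBaseUrl, String json) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(serviceBaseUrl + "/test-data"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<Void> response = CLIENT.send(request, HttpResponse.BodyHandlers.discarding());
        if (response.statusCode() >= 300) {
            throw new IllegalStateException("Seeding failed for " + serviceBaseUrl);
        }
    }

    public static void main(String[] args) throws Exception {
        // The same order id is used in each payload so the data in services A, B, and C lines up.
        String orderId = "order-1001";
        seed("http://service-a.test", "{\"orderId\":\"" + orderId + "\",\"status\":\"NEW\"}");
        seed("http://service-b.test", "{\"orderId\":\"" + orderId + "\",\"stock\":5}");
        seed("http://service-c.test", "{\"orderId\":\"" + orderId + "\",\"paymentState\":\"PENDING\"}");

        // The transaction test itself (A -> B -> C) would be invoked after seeding.
    }
}
```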

However, an additional complexity is that you also need to ensure that the test data setup for each of these services (and the underlying services they depend on) is aligned, so the test can be successfully executed. Data synchronization across microservices is largely a data management issue, not specific to TDM per se, so you need to ensure that your microservices architecture sufficiently addresses this requirement.

Assuming data synchronization between microservices is in place, the following approaches are recommended to make test management easier: 

  1. As mentioned before, use model-based testing to describe the cross-service system tests. This allows you to specify test data constraints for the test uniformly across affected services, so that the initial setup of test data is correct. This is done using the test data setup APIs we discussed above.
  2. Since setting up test data definition across services is more time consuming, I recommend minimizing the number of cross-service tests, based on change impact testing. Run transaction tests only if the transaction, or any of the underlying components of the transaction, have changed. Again, this is a key principle of continuous testing that’s aligned with the test pyramid. 
  3. If there have been no changes to a participating component or underlying sub-component, we recommend using a virtual service representation of that component. This will further help to reduce the TDM burden for that component. 

TDM Approach for Supporting End-to-End Business Process or User Acceptance Tests

The TDM approach for these tests is similar to that for system tests described above, since user actions map to underlying API calls. Such tests are likely to span more components. 

Many customers prefer to use real components, rather than virtual services, for user acceptance testing, which means that the TDM burden can be significant. As before, the key to reducing TDM complexity for such tests is to reduce the number of tests to the bare minimum, using techniques like change-impact testing, which was discussed above. I also recommend you use the change-impact approach to decide whether to use real components or their virtual services counterparts. If a set of components has changed as part of the release or deployment, it makes sense to use the actual components. However, if any dependent components are unchanged, and their test data has not been refreshed or is not readily available, then virtual services can be considered.

SD Times Open-Source Project of the Week: Mizu
https://sdtimes.com/softwaredev/sd-times-open-source-project-of-the-week-mizu/ - Fri, 18 Feb 2022 14:00:34 +0000

Mizu is an API traffic viewer for Kubernetes that enables users to view all API communication between microservices to help debug and troubleshoot regressions.

“Viewing API traffic between microservices is essential if you want to understand the root cause of problems found in complex distributed systems,” Alex Haiut, the co-founder and vice president of engineering at UP9, the company behind the project, wrote in a blog post. “Through our efforts to observe API traffic between microservices, we were able to isolate a chunk of our technology and package it as an open source project.”

Users can easily view API traffic, much as they would use the Google Chrome DevTools to view the traffic of their web apps.

The tool works by injecting a container that performs a tcpdump-like operation at the node level of a Kubernetes cluster. The operation is performed on demand via a CLI that injects the container when run and removes it when the CLI is stopped with ^C.

Mizu doesn’t require code instrumentation. It can be used in true on-demand fashion without prior preparation.

Mizu uses kubectl and can therefore run against any cluster that kubectl is configured to access.

The tool supports the HTTP/1.x, HTTP/2, AMQP, Apache Kafka, and Redis protocols. A Kubernetes server version of 1.16.0 or higher is required.

 

Troubleshooting microservices: Challenges and best practices
https://sdtimes.com/microservices/troubleshooting-microservices-challenges-and-best-practices/ - Mon, 03 Jan 2022 14:00:42 +0000

When people hear ‘microservices’ they often think about Kubernetes, which is a declarative container orchestrator. Because of its declarative nature, Kubernetes treats microservices as entities, which presents some challenges when it comes to troubleshooting. Let’s take a look at why troubleshooting microservices in a Kubernetes environment can be challenging, and some best practices for getting it right.

To understand why troubleshooting microservices can be challenging, let’s look at an example. If you have an application in Kubernetes, you can deploy it as a pod and leverage Kubernetes to scale it. The entity is a pod that you can monitor. With microservices, you shouldn’t monitor pods; instead, you should monitor services. So you can have a monolithic workload (a single container deployed as a pod) and monitor it, but if you have a service made up of several different pods, you need to understand the interactions between those pods to understand how the service is behaving. If you don’t do that, what you think is an event might not really be an event (i.e. might not be material to the functioning of the service). 

When it comes to monitoring microservices, you need to monitor at the service level, not the pod level. If you try to monitor at the pod level, you’ll be fighting with the orchestrator and might get it wrong. I recognize that “You should not be monitoring pods” is a bold statement, but I believe that if you’re doing that, you won’t get it right the majority of the time.

Common sources of issues when troubleshooting microservices

Network, infrastructure, and application issues are all commonly seen when troubleshooting microservices.

Network 

Issues at the network level are the hardest ones to debug. If the problem is in the network, you need to look at socket-layer stats. The underlying network has sockets that connect point A to B, so you need to look at round-trip time at the network level, see if packets are being transmitted, if there’s a routing issue, etc. 

Infrastructure 

One way infrastructure issues can manifest is as pod restarts (crash looping in Kubernetes). This can happen for many reasons. For example, if you have a pod in your service that can’t reach the Kubernetes data store, Kubernetes will restart it. You need to track the status of the pods that are backing the service. If you see several or frequent pod restarts, it becomes an issue.

Another common infrastructure issue is the Kubernetes API server being overloaded and taking a long time to respond. Every time something needs to happen, pods need to talk to the API server—so if it’s overloaded, it becomes an issue.

A third infrastructure issue is related to the Domain Name System (DNS). In Kubernetes, your services are identified by names, which get resolved with a DNS server. If those resolutions are slow, you start to see issues.

Application

There are several common application issues that can lead to restarts and errors. For example, if your service load balancing isn’t happening, say because there’s a change in your URL or the load balancer isn’t doing something right, you could be overloading a single pod and causing it to restart. 

If your URLs are not constructed properly, you’ll get a response code “404 page not found.” If the server is overloaded, you’ll get a 500 error. These are application issues that manifest as infrastructure issues.

Best practices for troubleshooting microservices

Here are two best practices for effectively identifying and troubleshooting microservice issues.

1. Aggregate data at the service level

You need to use a tool that provides data (i.e. a log) that is aggregated at the service level, so you can see how many pod restarts, error codes, etc. occurred. This is different from the approach most DevOps engineers use today, where every pod restart is a separate alert, leading engineers to be buried in alerts that might just be normal operations or Kubernetes correcting itself. 

Some DevOps engineers might wonder if service mesh can be used to aggregate data in this way. While service mesh has observability tools baked in, you need to be careful because many service meshes sample due to the large amount of data involved; they provide you raw data and give you labels to aggregate the data yourself. What you really need is a tool that gives you just the data you need for the service, as well as service-level reporting. 

2. Use machine learning 

When trying to identify and troubleshoot microservice issues, you need to monitor how each pod belonging to your service is behaving. This means monitoring metrics like latency, number of process restarts, and network connection errors. There are two ways to do this:

Set a threshold — For example, if there are more than 20 errors, create an alert. This is a bit of a naive approach in a dynamic system like Kubernetes, particularly with microservices.

Baselining — Use machine learning to study how a metric behaves over time, and build a machine learning model to predict how that metric will behave in the future. If the metric deviates from its baseline, you will receive an alert specifying which parameters led the machine learning algorithm to believe there was an issue.

I advise against trying to set a threshold—you’ll be flooded with alerts and this will cause alert fatigue. Instead, use machine learning. Over time, a machine learning algorithm can start to alert you before an issue arises. 
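As a toy illustration of the baselining idea, the sketch below learns a rolling mean and standard deviation for a metric and flags observations that deviate strongly from it; a real system would use a trained model rather than this simple three-sigma rule, and the metric values are simulated:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MetricBaseline {

    private final int windowSize;
    private final Deque<Double> window = new ArrayDeque<>();

    MetricBaseline(int windowSize) {
        this.windowSize = windowSize;
    }

    // Returns true when the new observation deviates strongly from the learned baseline.
    boolean isAnomaly(double observation) {
        boolean anomaly = false;
        if (window.size() == windowSize) {
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            double variance = window.stream()
                    .mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0);
            double stdDev = Math.sqrt(variance);
            // Three-sigma rule as a stand-in for a trained model's prediction interval.
            anomaly = stdDev > 0 && Math.abs(observation - mean) > 3 * stdDev;
            window.removeFirst();
        }
        window.addLast(observation);
        return anomaly;
    }

    public static void main(String[] args) {
        MetricBaseline latency = new MetricBaseline(20);
        // Simulated per-minute p99 latency for a service (ms); the spike should be flagged.
        for (int minute = 0; minute < 60; minute++) {
            double value = (minute == 45) ? 900 : 100 + Math.random() * 10;
            if (latency.isAnomaly(value)) {
                System.out.println("Alert: latency " + value + " ms at minute " + minute);
            }
        }
    }
}
```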

 

SD Times Open-Source Project of the Week: WireMock
https://sdtimes.com/softwaredev/sd-times-open-source-project-of-the-week-wiremock/ - Fri, 10 Dec 2021 14:00:57 +0000

WireMock is a simulator for HTTP-based APIs that enables users to stay productive when an API that one depends on doesn’t exist or is incomplete. It supports the testing of edge use cases and failure modes that the real API won’t reliably produce. 

The company behind the project, MockLab, was recently acquired by UP9. The rapid growth of microservice adoption and the booming API economy have driven WireMock’s popularity to 1.6 million monthly downloads.

“The number of APIs created every day is growing exponentially. Developers need tools to ensure the reliability and security of their APIs, while still staying productive,” said Alon Girmonsky, CEO and co-founder of UP9. “WireMock is a significant player in the API economy, and by combining it with UP9’s existing API monitoring and traffic analysis capabilities, modern cloud-native developers can now develop faster and find problems quicker.”

Users can run WireMock from within their Java application, JUnit test, Servlet container, or as a standalone process.

The project can also match request URLs, methods, headers, cookies, and bodies using a wide variety of strategies. 

WireMock is distributed via Maven Central and can be included in your project using common build tools’ dependency management.
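As a rough illustration of embedding WireMock in a Java test or application, the sketch below starts an in-process server and stubs a single endpoint; the port and the stubbed payments API are arbitrary choices for the example:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.client.WireMock;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.stubFor;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class PaymentsApiSimulation {

    public static void main(String[] args) {
        // Run WireMock in-process on an arbitrary local port.
        WireMockServer server = new WireMockServer(8089);
        server.start();
        WireMock.configureFor("localhost", 8089);

        // Simulate an API that does not exist yet, or that is unreliable in sandboxes.
        stubFor(get(urlEqualTo("/payments/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\":42,\"status\":\"SETTLED\"}")));

        // Code under test would now call http://localhost:8089/payments/42
        // and receive the canned response above.

        server.stop();
    }
}
```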

“With the rise in popularity of microservices along with supplier, partner and cloud APIs as essential building blocks of modern software, developers need tools that help manage the complexity and uncertainty this brings,” said Tom Akehurst, creator of WireMock and CTO of UP9. “WireMock allows developers to quickly create mocks (or simulations) of APIs they depend on, allowing them to keep building and testing when those APIs haven’t been built yet, don’t provide (reliable!) developer sandboxes, or cost money to call. It simulates faults and failure modes that are hard to create on demand and can be used in many environments, from unit test on a laptop all the way up to a high-load stress test.”

Additional details on WireMock are available here.

 

Microservices at scale: A complexity management issue
https://sdtimes.com/microservices/microservices-at-scale-a-complexity-management-issue/ - Fri, 02 Jul 2021 13:00:38 +0000

The benefits of microservices have been touted for years, and their popularity is clear when you consider the explosion in use of technologies, such as Kubernetes, over the last few years. It seems that based on the number of successful implementations, that popularity is deserved. 

For example, according to a 2020 survey by O’Reilly, 92% of respondents reported some success with microservices, with 54% describing their experience as “mostly successful” and under 10% describing a “complete success.” 

But building and managing all of these smaller units of code adds a lot of complexity to the equation, and it’s important to get it right to achieve those successes. Developers can create as many of these microservices as they need, but it’s important to manage them well, especially as the number of microservices increases.

According to Mike Tria, head of platform engineering at Atlassian, there are two schools of thought when it comes to managing the proliferation of microservices. One idea is just to keep the number of microservices to a minimum so that developers don’t have to think about things like scale and security.

“Every time they’re spinning up a new microservice they try to keep them small,” Tria said. “That works fine for a limited number of use cases and specific domains, because what will happen is those microservices will become large. You’ll end up with, as they say, a distributed monolith.”

The other option is to let developers spin up microservices whenever they want, which requires some additional considerations, according to Tria. Incorporating automation into the process is the key to ensuring this can be done successfully. 

“If every time you’re building some new microservice, you have to think about all of those concerns about security, where you’re going to host it, what’s the IAM user and role that you need access to, what other services can it talk to—If developers need to figure all that stuff out every time, then you’re going to have a real scaling challenge. So the key is through automating those capabilities away, make it such that you could spin up microservices without having to do all of those things,” said Tria.

According to Tria, the main benefits of automation are scalability, reliability, and speed. Automation provides the ability to scale because new microservices can be created without burdening developers. Second, reliability is encapsulated in each microservice, which means the whole system becomes more reliable. Finally, nimbleness and speed are gained because each team is able to build microservices at their own pace. 

At Atlassian, they built their own tool for managing their microservices, but Tria recommends starting small with some off-the-shelf tool. This will enable you to get to know your microservices and figure out your needs, rather than trying to predict your needs and buying some expensive solution that might have features you don’t need or is missing features you do. 

“It’s way too easy with microservices to overdo it right at the start,” Tria said. “Honestly, I think that’s the mistake more companies make getting started. They go too heavy on microservices, and right at the start they throw too much on the compute layer, too much service mesh, Kubernetes, proxy, etc. People go too, too far. And so what happens is they get bogged down in process, in bureaucracy, in too much configuration when people just want to build features really, really fast.”

In addition to incorporating automation, there are a number of other ways to ensure success with scaling microservices.

  1. Incorporate security 

Because of the nature of microservices, they tend to evoke additional security concerns, according to Tzury Bar Yochay, CTO and co-founder of application security company Reblaze. Traditional software architectures use a castle-and-moat approach with a limited number of ingress points, which makes it possible to just secure the perimeter with a security solution. 

Microservices, however, are each independent entities that are Internet-facing. “Every microservice that can accept incoming connections from the outside world is potentially exposed to threats within the incoming traffic stream, and it has other security requirements as well (such as integrating with authentication and authorization services). These requirements are much more challenging than the ones typically faced by traditional applications,” said Bar Yochay. 

According to Bar Yochay, new and better approaches are constantly being invented to secure cloud native architectures. For example, service meshes can build traffic filtering right into the mesh itself, and block hostile requests before the microservice receives them. Service meshes are an addition to microservices architectures that enable services to communicate with each other. In addition to added security, they offer benefits like load balancing, discovery, failure recovery, metrics, and more.

These advantages of service meshes will seem greater when they are deployed across a larger number of microservices, but smaller architectures can also benefit from them, according to Bar Yochay. 

Of course, the developers in charge of these microservices are also responsible for security, but there are a lot of challenges in their way. For example, there can often be friction between developers and security teams because developers want to add new features, while security wants to slow things down and be more cautious. “As more apps and services are being maintained, there are more opportunities for these cultural issues to arise,” Bar Yochay said. 

In order to alleviate the friction between developers and security, Bar Yochay recommends investing in developer-friendly security tools for microservices. According to him, there are many solutions on the market today that allow for security to be built directly into containers or into service meshes. In addition, security vendors are also advancing their use of technology, such as by applying machine learning to behavioral analysis and threat detection. 

  2. Make sure your microservices don’t get too big

“We’ve seen microservices turn into monolithic microservices and you get kind of a macroservice pretty quickly if you don’t keep and maintain it and keep on top of those things,” said Bob Quillin, chief ecosystem officer at vFunction, a company that helps migrate applications to microservices architectures. 

Dead code is one thing that can quickly lead to microservices that are bigger than they need to be.  “There is a lot of software where you’re not quite sure what it does,” said Quillin. “You and your team are maintaining it because it’s safer to keep it than to get rid of it. And that’s what I think that eventually creates these larger and larger microservices that become almost like monoliths themselves.” 

  3. Be clear about ownership

Tria recommends that rather than having individuals own a microservice, it’s best to have a team own it. 

“Like in the equivalent of it takes a village, it takes a team to keep a microservice healthy, to upgrade it to make sure it’s checking in on its dependencies, on its rituals, around things like reliability and SLO. So I think the good practices have a team on it,” said Tria. 

For example, Atlassian has about 3,000 developers and roughly 1,400 microservices. Assuming teams of five to 10 developers, this works out to every team owning two or three microservices, on average, Tria explained.

  4. Don’t get too excited about the polyglot nature of microservices

One of the benefits of microservices—being polyglot—is also one of the downsides. According to Tria, one of Atlassian’s initial attractions to microservices was that they could be written using any language. 

“We had services written in Go, Kotlin, Java, Python, Scala, you name it. There’s languages I’ve never even heard of that we had microservices written in, which from an autonomy perspective and letting those teams run was really great. Individual teams could all run off on their own and go and build their services,” said Tria.

However, this flexibility led to a language and service transferability problem across teams. In addition, microservices written in a particular language needed developers familiar with that language to maintain them. Eventually, Tria’s team realized they needed to standardize on two or three languages.

Another recommendation Tria has based on his team’s experience is to understand the extent of how much the network can do for you. He recommends investing in things like service discovery early on. “[At the start] all of our services found each other just through DNS. You would reach another service through a domain name. What that did is it put a lot of pressure on our own internal networking systems, specifically DNS,” said Tria. 

Figuring out a plan for microservices automation at Atlassian 

Atlassian’s Tria is a proponent of incorporating automation into microservices management, but his team had to learn that the hard way. 

According to Tria, when Atlassian first started using microservices back in early 2016, it had about 50 to 60 microservices total and all of the microservices were written on a Confluence page. They listed every microservice, who owned it, whether it had passed SOC2 compliance yet, and the on-call contact for that microservice.

“I remember at that time we had this long table, and we kept adding columns to the table and the columns were things like when was the last time a performance test was run against it, or another column was what are all the services that depend on it? What are all the services it depends on? What reliability tier is it for uptime? Is it tier one where it needs very high uptime, tier two where it needs less? And we just kept expanding those columns.”

Once the table hit 100 columns, the team realized that wouldn’t be maintainable for very long. Instead, they created a new project to take the capabilities they had in Confluence and turn them into a tool.

“The idea was we would have a system where when you build a microservice, the system essentially registers it into a central repository that we have,” said Tria. “That repository has a list of all of our services. It has the owners, it has the reliability tiers, and anyone within the company can just search and look up a service, and we made the tool pretty pluggable so that when we have new capabilities that we’re adding to our service.”

 

Speed, security and reliability are now one
https://sdtimes.com/canary/speed-security-and-reliability-are-now-one/ - Fri, 18 Jun 2021 18:12:19 +0000

Companies around the world and across many industries have felt the pressure to release faster, yet they struggle to do so in a safe and reliable way that doesn’t compromise user trust. 

A lot of these companies think there’s a dichotomy between moving fast and increasing value.

“I think the move fast and break things got a bad rap. It’s kind of horrifying to think, Hey, a developer that I’m not even talking to could suddenly blow up my entire customer base without all these gates,” said Edith Harbaugh, the CEO of LaunchDarkly, during a recent SD Times Live! tech talk.

However, releasing slower today could actually make the software more unsafe, according to Harbaugh.

“If you’re doing the old software releases of 20 years ago where you do a release every year, every release has so much heft, weight and gravity behind it,” said Harbaugh.

Not only are the releases heavy in technical complexity, requiring developers to check all of these different branches and features, but they are also risky from a business perspective because the value that was planned a year ago might not even be relevant anymore. This could cause a large release to flop when out in the field. 

With the proper distributed architectures and guardrails that limit the blast radius, both speed and value are mutually possible. 

One such method for safer deployment is the canary deployment, which can shrink the blast radius from 100% of the user base down to perhaps 1% of the most progressive users.

Canaries are typically an engineering activity and feature flags – which are a core part of this activity – help unlock value way up in the stack, according to DROdio, the CEO of Armory.

“You have to have the seatbelt on before you want to drive the Ferrari fast. The company has to have that psychological safety to be able to flip that cost-benefit analysis in their heads that it is worth deploying out to that 1% of the population so you can deploy 10 or 100x faster,” DROdio said.
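A simplified sketch of how such a percentage-based rollout check might work behind a feature flag follows; this is generic hashing logic for illustration, not LaunchDarkly’s or Armory’s actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class PercentageRollout {

    // Deterministically bucket a user into 0..99 so the same user always
    // gets the same decision while the flag's percentage is unchanged.
    static int bucket(String flagKey, String userId) {
        CRC32 crc = new CRC32();
        crc.update((flagKey + ":" + userId).getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % 100);
    }

    // Expose the new code path only to the configured slice of users.
    static boolean isEnabled(String flagKey, String userId, int rolloutPercent) {
        return bucket(flagKey, userId) < rolloutPercent;
    }

    public static void main(String[] args) {
        // Canary-style rollout: roughly 1% of users see the new checkout flow.
        int enabled = 0;
        for (int i = 0; i < 10_000; i++) {
            if (isEnabled("new-checkout", "user-" + i, 1)) {
                enabled++;
            }
        }
        System.out.println(enabled + " of 10000 users are in the 1% canary");
    }
}
```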

Also, distributed architectures such as microservices, serverless, Docker or Kubernetes limit the blast radius so that any one change is a lot less risky.

Once the mindset of an organization is changed to be able to validate changes, get more into production and get real usage in, releasing at cadences of up to even multiple times a day gets a lot less terrifying, according to Joe Duffy, the CEO of Pulumi.

Another benefit of a faster production cycle is that developers will also get quick feedback on all the features that they are working on and have more incentive to constantly interact with that feature’s code. 

“I think of developers as artists. They have code and they want to get their code out into the world and they want to learn from that code as quickly as possible so that they can have an iterative cycle,” DROdio said. “I don’t know that executives often understand that there’s anything more soul-sucking for a developer than having code sit on the shelf for a month or a quarter and it makes the best developers not want to work at companies that have that lack of sophistication.”

Listen to the full tech talk here.

Service connectivity platform Kong Konnect enters GA with multi-geo support
https://sdtimes.com/api/service-connectivity-platform-kong-konnect-enters-ga-with-multi-geo-support/ - Tue, 11 May 2021 21:02:40 +0000

Kong announced the general availability of its cloud-native, connectivity platform Kong Konnect with new features to enable reliable, secure and observable connectivity across microservices and APIs. The platform was first previewed last year as a private beta at Kong Summit 2020 with the promise to simplify complex cloud-native workflows. 

“Kong Konnect addresses a massive challenge companies face as they enter digital transformation 2.0, which is characterized by an exponential increase in the volume and variety of connections that need to be activated and secured with lightning-fast speed,” said Marco Palladino, the CTO and co-founder of Kong Inc.

With the Konnect platform, the company explained, app architects can separate connectivity concerns from microservices so that developers can just focus on building applications.

The GA release adds multi-geo support for users to physically locate services close to their businesses to ensure compliance. 

The release also adds a consumption-based model so that customers only pay for the services they use. The service is available in three tiers: Free, which allows developers to try out the product; Plus, with a freemium model and a pay-as-you-go, credit card-based option; and Enterprise, for organizations that want to use the platform as a whole.

“One day, we will look back in amazement to realize that developers were spending a lot of time building the plumbing, and writing the networking and security code for each service on each platform. Automated cloud connectivity is inevitable – from gateway to service mesh across every cloud, Kubernetes and VMs – and market-leading companies already know this and are offloading this function so that their top talent can focus on application design and feature development so they can continue to outpace the competition,” Palladino added.
