virtual machines Archives - SD Times
https://sdtimes.com/tag/virtual-machines/

Oracle announces polyglot virtual machine, GraalVM (Tue, 17 Apr 2018)
https://sdtimes.com/java/oracle-announces-polyglot-virtual-machine-graalvm/

Oracle has set out on a mission to create a universal virtual machine that can support multiple languages while providing consistent performance, tooling and configuration. The company announced GraalVM 1.0, a virtual machine designed to accomplish that mission with high performance and interoperability with no overhead when building polyglot apps.

According to the company, most virtual machines today support only one language or a very small set of languages. “Compilation, memory management, and tooling are maintained separately for different languages, violating the ‘don’t repeat yourself’ (DRY) principle,” the GraalVM team wrote in a post, adding that this leads not only to a larger burden for VM implementers but also to duplicated effort across language ecosystems.

GraalVM allows objects and arrays to be used by foreign languages without having to convert them into different languages first. For example, this tool would allow Node.js code to access the functionality of a Java library, or to call a Python routine from within Java. With this flexibility, programmers will be able to use whatever language they think is best suited to the task they are trying to complete, Oracle explained.
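
A minimal sketch of what this looks like from the Java side, using the org.graalvm.polyglot API that ships with GraalVM; the embedded JavaScript snippet and class name are illustrative, not taken from Oracle’s announcement:

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotSketch {
    public static void main(String[] args) {
        // Context is GraalVM's entry point for evaluating guest-language code.
        try (Context context = Context.create()) {
            // Evaluate JavaScript; the returned Value wraps the JS array
            // directly, without converting it into a Java array first.
            Value jsArray = context.eval("js", "[1, 2, 3, 4]");
            long sum = 0;
            for (long i = 0; i < jsArray.getArraySize(); i++) {
                sum += jsArray.getArrayElement(i).asLong();
            }
            System.out.println("Sum computed in Java over a JS array: " + sum);
        }
    }
}
```

Running this requires GraalVM’s own java launcher, since the JavaScript engine ships with the VM rather than with a stock JDK.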

This first release will allow developers to run JVM-based languages such as Java, Scala, Groovy, or Kotlin; JavaScript; LLVM bitcode; and experimental versions of Ruby, R, and Python. It can run on its own, be embedded as part of platforms or run inside databases.

The core installation provides developers with a language-agnostic debugger, profiler, and heap viewer. Oracle is encouraging third-party developers to use the instrumentation API or the language-implementation API to make tools that will further improve the GraalVM ecosystem. According to the company, it envisions “GraalVM as a language-level virtualization layer that allows leveraging tools and embeddings across all languages.”

“This first release is only the beginning. We are working on improving all aspects of GraalVM; in particular the support for Python, R and Ruby. GraalVM is an open ecosystem and we encourage building your own languages or tools on top of it. We want to make GraalVM a collaborative project enabling standardized language execution and a rich set of language-agnostic tooling,” Oracle said in a statement announcing the virtual machine.

SD Times GitHub Project of the Week: Go-Libvirt (Fri, 16 Feb 2018)
https://sdtimes.com/os/sd-times-github-project-week-go-libvirt/

Go-Libvirt is a pure Go interface for interacting with libvirt, the virtualization toolkit. DigitalOcean developed go-libvirt in 2016 to meet the company’s specific needs.

“At DigitalOcean, we use libvirt with QEMU to create and manage the virtual machines that compose our Droplet product. QEMU is the workhorse that enables hundreds of Droplets to run on a single server within our data centers. To perform management actions (like powering off a Droplet), we originally built automation which relied on shelling out to virsh, a command-line client used to interact with the libvirt daemon,” the company wrote in a blog post at the time. “As we began to deploy Go into production, we realized we would need simple and powerful building blocks for future Droplet management tooling.”

The open source project can be used in conjunction with the company’s other project, go-qemu, and manages VMs by proxying communication through the libvirt daemon. This eliminates the need to communicate using cgo and C bindings. “While using libvirt’s C bindings would be easier up front, we try to avoid cgo when possible,” the company wrote. “A pure Go library simplifies our build pipelines, reduces dependency headaches, and keeps cross-compilation simple.”

The project also uses code generation to build its Go bindings. According to the company, libvirt’s RPC interface is extensive and changes from one version to the next. Code generation makes the library more resilient to future changes in the libvirt API: if libvirt adds a new API, the code generator will pick it up and generate bindings for it. Automation is central to this approach.

“Tedious, repetitive tasks are ideal candidates for automation,” the company said in an interview with SD Times. “That’s part of the reason code generation tools like Swagger (swagger.io) and gRPC (grpc.io) have been growing in popularity for developers building APIs. The same thing holds true for lower level language bindings. The Go programming language provides some built-in tools for generating code, allowing you to write programs that write programs.”

Top 5 trending projects on GitHub this week

  • Tensorflow: The ever-resourceful machine learning framework brought to you by Google.
  • Nocode: What’s better than not coding? No code. This funny repository is getting a lot of hits this week, still trending from last week.
  • Automerge: A library of data structures for JavaScript apps.
  • Checkstyle: A development tool that helps programmers write Java code adhering to a coding standard.

Analyst Watch: Can Graal be the Holy Grail of polyglot runtime? (Fri, 28 Jul 2017)
https://sdtimes.com/graal/analyst-watch-can-graal-holy-grail-polyglot-runtime/

Virtualization has proven its value to IT and to developers through technologies such as server virtualization and the venerable JVM. Operating system virtualization is about providing protection and isolation/security from other operating systems while maximizing system utilization. In the case of the JVM, the value is arguably more about providing an insulation layer that abstracts the application code from the underlying architectural idiosyncrasies. Wouldn’t it be nice if more languages had a virtualization layer?

Well, that may come to pass. GraalVM, an open-source project built from the technologies of the Graal research effort, is a JVM that bundles Graal, Truffle and other select components. Graal is a new Just in Time (JIT) compiler, written in Java, that has been nurtured for the past several years primarily by Oracle Labs. GraalVM leverages Graal to accelerate compilation performance and, in collaboration with Truffle, a language-implementation framework, provides optimized compilation capabilities for any programming language that supports the Truffle API.

GraalVM provides polyglot runtime functionality that brings the “write once, run anywhere” attribute of Java to any language that can be compiled by Graal, thereby serving as a unified infrastructure for compiling a plurality of programming languages across a multitude of devices as well as any SaaS application or data processing application. Moreover, GraalVM enables languages to interoperate with one another, thereby empowering developers to begin writing code in one language and subsequently leverage code written in another language. As such, the GraalVM has the potential to serve as a unified framework for compilation that facilitates enhanced portability and interoperability amongst programming languages.

Currently, Java compilation takes place in two steps: a Java compiler compiles the source code into bytecode, which is saved in .class files; the JVM then loads that bytecode and, via its JIT compiler, translates it into machine code at runtime.
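
As a plain-Java illustration of that pipeline (nothing Graal-specific; the class is invented for the example):

```java
// Step 1: `javac Hello.java` compiles this source into bytecode (Hello.class).
// Step 2: `java Hello` starts a JVM that loads the bytecode, interprets it,
//         and hands frequently executed ("hot") methods to the JIT compiler,
//         which translates them into native machine code.
public class Hello {
    public static void main(String[] args) {
        System.out.println("source -> bytecode -> JIT-compiled machine code");
    }
}
```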

GraalVM promises to bring the execution speed of compiled languages such as C++ to interpreted languages by means of its support for the Truffle language-implementation framework. The Truffle API creates an abstract syntax tree (AST) representation of source code that it subsequently converts into the Graal Intermediate Representation (IR). Graal, a state-of-the-art optimizing compiler, then enters the picture by performing advanced optimization on the Graal IR and transforming the result into machine code.

Separate from its ability to accelerate compilation, GraalVM boasts the ability to allow programming languages to interoperate with one another by means of the Truffle Object Storage Model. Interoperability, here, means that GraalVM allows languages to access objects, classes and data structures from other languages. For example, developers can enable Java code to access code written in JavaScript, Ruby, R, or C/C++ and vice versa. GraalVM’s ability to facilitate interoperability between languages has the potential to give the programming world respite from the dizzying profusion of languages by providing a unified framework that empowers developers to integrate code from a plurality of languages into one unified code-base.
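
A hedged sketch of that object-level interoperability from the Java side, again assuming GraalVM’s org.graalvm.polyglot API; the JavaScript object and names are invented for illustration:

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class InteropSketch {
    public static void main(String[] args) {
        try (Context ctx = Context.create()) {
            // Build an object in JavaScript, then read its fields and call
            // its function from Java through the shared object model.
            Value person = ctx.eval("js",
                "({ name: 'Ada', greet: who => 'Hello, ' + who })");
            String name = person.getMember("name").asString();
            String greeting = person.getMember("greet").execute(name).asString();
            System.out.println(greeting); // prints: Hello, Ada
        }
    }
}
```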

One of the unanswered questions for the larger GraalVM project, however, concerns its ability to attract developers to its open-source community to drive adoption and nurture a healthy ecosystem of code contribution. The project was initiated by, and is still led by, Oracle Labs, and it will need robust and transparent governance to encourage contributions from the global community of developers as well as the support of the enterprise and startup communities alike.

Of particular concern is the lack of a diverse community supporting GraalVM. Vendors that should have an interest in this project – Microsoft, IBM, Red Hat, and Intel all come to mind – have yet to engage in a sustained way, which suggests either that the project has been flying too low to show up on their radar, or that there is some inherent resistance to supporting it for technology or competitive reasons. There have been substantial contributions from Red Hat (an ARM back end), Intel (optimizations for the Intel platform), and Twitter (bug fixes), but those were more along the lines of one-time contributions than an ongoing stream of commits. Deep and sustained support from the developer community will be critical to GraalVM’s success, particularly if it aspires to integrate a growing roster of languages into its infrastructure.

Overall, GraalVM promises to enhance the developer experience by way of its polyglot capability to accelerate, improve, streamline and simplify application runtime and performance. The key to its success, however, will hinge on Oracle’s ability to court developer mindshare and create collaborative processes that support its evolution.

Five best practices to keep containerized infrastructure safe and secure (Thu, 06 Jul 2017)
https://sdtimes.com/automation/five-best-practices-to-keep-containerized-infrastructure-safe-and-secure/

Software containers are indisputably on the rise. Developers looking to build more efficient applications and quickly bring them to market love the flexibility that containers provide when building cloud-native applications. Enterprises also benefit from productivity gains and cost reductions, thanks to the improved resource utilization containers provide. Some criticize containers as being less secure than deployments on virtual machines (VMs); but with the proper implementation, containers can deliver a more secure environment. Security on the Internet is a complex problem, but we’re developing the tools and processes needed to solve it.

Additionally, containers and VMs aren’t an either-or proposition. It’s possible to deploy containers onto VMs if that’s what you choose to do, or use technologies like Intel’s Clear Containers or the open-source Hyper to achieve the best of both worlds: The isolation of a VM with the flexibility of a container.

Containers and distributed systems provide a level of development flexibility and speed that outpaces traditional processes, making late adoption a handicap to competitiveness. Once you decide to migrate to container deployments, make sure you take the appropriate steps to protect your infrastructure.

Here are five best practices to secure your distributed systems:

  1. Use a lightweight Linux operating system
    A lightweight OS, along with other benefits, reduces the surface area vulnerable to attack. It also makes updates easier to apply, since OS updates are decoupled from the application dependencies, and the system takes less time to reboot after an update.
  2. Keep all images up to date
    Keeping all images up to date ensures they’re patched against the latest exploits. The best way to achieve this is to use a centralized repository to help with versioning. Tagging each image with a version number makes updates easier to manage. The containers themselves also hold their own dependencies, which need to be maintained.
  3. Automate security updates
    Automated updates ensure that patches are quickly applied to your infrastructure, minimizing the time between publishing the patch and applying it to production. Decoupled containers can be updated independently from each other, and can be migrated to another host if the host OS needs to be updated. This helps remove concern about infrastructure security updates affecting other parts of your stack.
  4. Scan container images for potential defects
    There are lots of tools available to help with this. They compare container manifests against lists of known vulnerabilities and alert you either when, at startup, they detect a known vulnerability that might affect your container, or when a newly discovered vulnerability would affect your running containers.
  5. Don’t run extraneous network-facing services in containers
    It’s considered best practice to not run Secure Shell (SSH) in containers – orchestration APIs typically have better access controls for container access. A good rule of thumb is if you don’t expect to perform routine maintenance tasks on individual containers, don’t allow any log-in access at all. It is also a good idea to design your containers for a shorter life than you plan for VMs, which ensures each new lifecycle can take advantage of updated security.

Container security will continue to evolve. By following the five best practices outlined in this article, I hope to help dispel the myth that containers are not secure and help enterprises take advantage of the productivity gains they provide while ensuring they are as secure as they can be today.

Rainforest QA Mobile App Testing, Dart 1.24, and Yahoo joins Verizon — SD Times news digest: June 14, 2017
https://sdtimes.com/dart/rainforestqa-dart-yahoo-verizon-sdtimes-news-digest/

Rainforest QA launched its new Rainforest QA Mobile App Testing solution, which offers crowdtesting that combines virtual machines and real devices. It’s built on Rainforest QA’s crowdsourcing and AI platform, and gives teams testing results without requiring additional engineering resources to manage.

“Traditional mobile testing solutions that rely on testers using their personal devices take days or weeks to return results, and those results are often hard to reproduce,” said Russell Smith, chief technology officer and co-founder for Rainforest QA. “Only Rainforest QA’s platform approach, which blends virtual machines and real devices — while running AI underneath to verify results — can provide fully on-demand, comprehensive, highly-reproducible testing completed within hours each and every time.”

The new solution runs tests faster, reduces the cost of virtual machines, and, thanks to its real-device testing abilities, lets teams deliver repeatable states across each required testing configuration. More information on this release is available here.

Dart 1.24 released
The application programming language created by Google is getting new performance updates this week. Dart 1.24 is designed to provide a faster edit-refresh cycle as well as a new generic function type syntax. In addition, it features the Dart Development Compiler.

The generic function type syntax enables developers to “specify generic function types everywhere a type is expected,” the team wrote.

Other features include pub serve support for the Dart Development Compiler, the ability to publish packages that depend on Flutter SDK, updates to Dartium, and a new warning for the MIPS architecture.

Yahoo joins Verizon
Verizon has officially completed its acquisition of Yahoo. Earlier this week, it was reported that Yahoo CEO Marissa Mayer would resign as part of the deal. Her resignation is now official.

“Given the inherent changes to Marissa Mayer’s role with Yahoo resulting from the closing of the transaction, Mayer has chosen to resign from Yahoo. Verizon wishes Mayer well in her future endeavors,” Verizon wrote in a statement.

Yahoo will become a part of Verizon’s Oath subsidiary, which consists of more than 50 media and technology brands such as HuffPost, Yahoo Sports, AOL.com, MAKERS, and Tumblr.

Tails 3.0 available
Tails 3.0 is now available, making it the first release based on Debian 9 (Stretch). Tails 3.0 comes with a new startup and shutdown experience, desktop improvements, security improvements, and major upgrades to the software.

According to the Tails team, it was important to release a new version of Tails around the same time as the new version of Debian. Debian 9 will be released on June 17, and staying in step with it allows Tails users to benefit from changes in Debian, and allows the team to fix issues in new versions of Debian while they are still in development, among other benefits.

Major changes beyond the startup/shutdown experience include making all options available from a single window, displaying language and region settings first, enabling accessibility features from the start, and security improvements to Tails.

More information is available here.

Controlling software through containers and microservices (Wed, 29 Mar 2017)
https://sdtimes.com/container-lifecycle-management/controlling-software-containers-microservices/

Businesses want to move faster, develop more software, and deploy software and updates more often, but to do this in a traditional software architecture is a lot to put on developers. In order to ease the pain, more businesses and developers are turning to containers.

A software container is a way to package software in order for it to run anywhere regardless of the environment. “Everything comes back to being faster and being cheaper than the competition from a core business standpoint. How can you deliver software faster, and how can you make sure you can deliver it in a way that is more cost effective than other competitors in your market,” said Mackenzie Burnett, product lead for Tectonic at CoreOS, a container orchestration platform provider. “What containers have enabled is both an organizational speed in terms of how you deliver software and how you develop software. On the other hand it allows for significant cost savings.”

Containers are not a new phenomenon, but it wasn’t until recently they were made easily accessible to developers. In 2013, the software container platform provider Docker announced a framework that made container technology portable, flexible and easy to deploy. “When Docker started, the focus for Solomon Hykes, founder and head of all technology and product for Docker, was on two areas: The democratization of [containers] and the democratization of the container technology for developers,” said David Messina, SVP of marketing and community at Docker. What Hykes was able to do was separate the application concerns from the infrastructure concerns and make container technology accessible to developers, he explained.

Before Docker, containers were not accessible to developers. “It was actually an obscure Linux stack technology used by operations folks for isolation,” said Messina. The first generation of containers, also known as system containers, was primarily focused on virtualizing the operating system, according to Arun Chandrasekaran, research vice president of storage, cloud and big data at Gartner. “What Docker really did was ride on the coattails of past innovations and past work and provide a very simple application interface to system containers,” he said.

Today, interest in containers is widespread. According to Gartner, client inquiries about containers increased 300% in 2016.

Another reason for this surge in containers is what Gartner calls the digital business. According to Chandrasekaran, more and more businesses are becoming software companies, and they are under more pressure to do continuous software delivery.

“People want to go faster. The whole idea of ‘software is eating the world’ is businesses outside of Silicon Valley need to realize they can be disrupted by teams that adopt new technologies and build applications that can be changed as quickly as customers require changes to be made,” said Alexis Richardson, CEO of Weaveworks, container and microservices networking solution provider.

Docker donates core components of its technology to the industry
In order to help the industry benefit from its technology and create innovative container solutions, Docker has donated components and ingredients of its platform to open-source foundations. In 2014, Docker introduced libcontainer, now known as runC, a built-in execution driver for accessing container APIs without any dependencies. The specification and runtime code was donated to the Open Container Initiative in 2015 to help create open industry standards for container formats and runtimes.

In 2016, the company’s containerd runtime was released as a standalone, open-source project. Just last month, the company announced its intent to donate the runtime to the Cloud Native Computing Foundation (CNCF). According to the company, the runtime’s and the organization’s goals align in terms of advancing cloud-native technology and providing a common set of container technology. Docker will continue to invest and contribute to the project. The company is currently working on implementing the containerd 1.0 roadmap, with a target date of June 2017.

“Containerd is at the heart of Docker. We need the project, and we need it to be successful,” said Patrick Chanezon, member of Docker’s technical staff. “Giving it to the CNCF will just expand the community that can collaborate on it.” 

How to take advantage of a container architecture
Containers are often associated with microservices, a software approach where developers break applications down into small, independent components instead of dealing with one large monolithic application.

“Container technology is an excellent way to implement microservices because what microservices does in a nutshell is allow you to break up your monolithic applications into a set of independent services. Each service can then be deployed, upgraded, maintained and bug-fixed on its own without having to impact the whole application,” said Sheng Liang, CEO of Rancher Labs, a container management platform provider. “Without containers, businesses have to worry about the different environments software has to be deployed in, and packaging the application then becomes a very labor-intensive and time-consuming process.”

According to Liang, because microservices need to be individually packaged, deployed, scaled and upgraded, containers are a nice fit because of the lightweight architecture. It enables continuous deployment, continuous integration, and can cut the build and development time down to minutes.
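
To make the unit of packaging concrete, here is a minimal sketch of a single-purpose service written against the JDK’s built-in com.sun.net.httpserver package; the service name, route and port are invented for illustration. Packaged with its runtime into a container image, something this small becomes an independently deployable, independently scalable unit:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class InventoryService {
    public static void main(String[] args) throws Exception {
        // One small, single-purpose service: the unit a container packages.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("inventory service listening on :8080");
    }
}
```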

“More than a revolutionizing approach to software development, containers and microservices enable greater app agility, reliability, portability, efficiency and innovation. Moving away from monolithic app architecture to a distributed microservices-based architecture often leveraging containers means that developers can quickly introduce new features without impacting application functionality and maintaining availability at scale,” said Corey Sanders, director of compute for Azure at Microsoft.

Containers enable agility because they enable developers to build an application one time, run it on any infrastructure, and package everything together in an easily shareable way. In deployment, containers provide a shorter testing cycle by packaging all the dependencies, and enabling consistent deployments, according to CoreOS’s Burnett.

If you are going to do microservices, there is really no reason not to use containers, according to Rancher Labs’ Liang. However, not all applications are going to be ready for a microservices architecture. “Even if you have a monolithic application, there are still a lot of benefits to using a container because the fundamental benefits are universal packaging and a deployment format that provides consistent runtime,” he said.

Burnett explained that the difference between the architectures is that if you have one giant monolithic application, with myriad dependencies, you typically will have a giant team working on it. With a microservices architecture, you have smaller teams working on separate services that don’t have the same tightly coupled dependencies on one another, she said.

“In either case, the container is the package for the thing you’re replicating. If your architecture is monolithic, you’re going to have a few big clunky boxes to replicate. If your architecture is made of microservices, you’ll have a lot of small boxes that you can replicate independently of each other. Most enterprises have architectures that are a mix of the two, monolithic and microservices,” she said.

However, Microsoft’s Sanders explained that scaling with monolithic applications can be problematic because developers need to deploy more application instances, create new virtual machines or provision new servers. “When combined with testing to ensure that the system works as expected after the changes, scaling monolithic applications can be time-consuming and expensive. This complexity can be exacerbated further when there are resiliency requirements, which is often the case with enterprise applications,” Sanders said.

A microservices architecture is designed to scale independently, providing agile scaling at any point in time, Sanders explained.  

And then there are the situations where applications may not be suitable for containers or microservices at all. “You can’t just take something that is built one way, and change it. Not everything needs to be containerized. It is just a part of what architecture decision is best for your business,” said Burnett.

How do containers differ from virtualization?
If the idea of taking things from an application and isolating them sounds an awful lot like virtualization, that is because it is, according to Betty Junod, director of products at Docker. Junod said that conceptually, virtual machines (VM) and containers are similar, but architecturally they are different.

“If you think about VMs, those are effectively machine instances that were set up by operations to effectively allocate memory resources whereas the packaging that we are talking about here with Docker and containers is in the hands of the developers, and it can run on any infrastructure,” Docker’s Messina added.

In a sense, containers are a lighter-weight VM. They are an application packaging format that doesn’t require developers to package in an operating system the way a VM does, according to CoreOS’s Burnett. “What this means is, coupled with container orchestration platforms such as Kubernetes, you can pack servers in a much better way,” she explained. “A way to think about it is in terms of Tetris. If you aren’t paying attention to what you are doing once you get to the top, you run out of space. If you pay attention, you have to pack Tetris or the pieces much more efficiently [to] effectively use the space,” she said.

However, the real key differences between containers and virtualization are that virtualization typically has been bound to an infrastructure provider, and until recently virtualization has been expensive and too difficult to build real applications out of components, according to Weaveworks’ Richardson. “Containers are very quick to start, and very lightweight in terms of their capacity consumption requirements,” he said. “There is a possibility that you could build much more realistic applications using containers and get some of the benefits of VM at the same time.”

The three key benefits that make containers more appealing than VMs are their ability to run on bare-metal infrastructure, their smaller resource footprint, and their ability to bundle application dependencies, according to a recent study from Gartner’s Chandrasekaran and Raj Bala, research director at Gartner.

Approaching containerization
There are three entry points to adopting containers, according to Docker’s Junod. They include:

  1. Taking an existing application, containerizing the whole thing, and slowly starting to carve pieces off for modernization
  2. Taking commercial off-the-shelf applications that are already in-house and containerizing them to be more portable
  3. Starting with a brand-new application

However an organization decides to approach containers, there are some best practices that can help them along the way.

Traditionally a lot of technology adoption requires big top-down initiatives, but containers have a very different process in terms of how organizations typically adopt them, according to Rancher Labs’ Liang. Container adoption tends to start with developers very organically, because the benefits are tangible and the technology is simple to use. In addition, businesses don’t have to turn every single application into a container on day one. You can start with one, and eventually migrate everything over. Some applications may be working just fine and not updated very often, so a company can stay with a legacy infrastructure and not implement a container deployment model, Liang explained. “In general, there is a lot of flexibility and freedom in how an organization can adopt container technology,” he said.

According to Microsoft’s Sanders, container-based and microservice architectures take a lot of planning. The first thing business leaders need to do is prioritize their applications and services, and figure out which ones are most important to their daily operations. “Applications requiring high availability with fast agile development can benefit most from these new models. Depending on the business goals and time horizons, enterprises can choose from many ways in which to transition to these modern architectures,” Sanders says.

CoreOS’ Burnett recommends having a small team within the organization lead the transition. The team starts playing around with the technology, evaluates the technology platform, and acts as a prototype for the rest of the company. “The prototyping does not just include the technology. The team is also prototyping how to build a team, the best practices for training people on the new technology, and how to communicate between teams,” she said.

In order to start using containers right away, Sanders believes a lift-and-shift approach to existing apps may be the best solution.

A lift-and-shift approach allows developers to port applications without having to refactor or deal with extensive code modifications, according to Weaveworks’ Richardson. For example, a lift-and-shift of a small legacy app allows developers to move it to the cloud, make it redundant, and create a sleeping copy so that it has a backup in case the primary app is overloaded, he explained.

For teams trying to take full advantage of a microservices architecture, the best way to go about it is to fully re-architect their applications, according to Sanders. “This development mechanism lends itself well to the distributed, resilient, and agile nature of a microservices-based application,” said Sanders. To successfully re-architect an application, Sanders suggests developers take a gradual approach, identify the components that benefit most from cloud scale and agility in deployment, and rebuild those components first.

“Whether you choose to adopt containers and microservices through a legacy migration, lift-and-shift, a re-architecture, or greenfield, it is always going to come down to the question of how do you make this easy for application developers,” said Richardson.

Gartner’s Chandrasekaran explains that sometimes the amount of effort required to retool or refactor a legacy application may not be justified by the benefits a company could potentially get from containers and microservices. “Organizations have to have a very clear idea of their portfolio and figure out which applications can benefit from the transition. Secondly, they have to identify what the metrics are and how they are going to measure the status of these projects to figure out if it has been a successful initiative.”

One of the biggest challenges organizations will run across is the cultural transition. Containers are relatively new to developers, and the skill sets aren’t all there, according to Chandrasekaran.

“If you really want this to be successful, you have to have a more fluid organization where people are collaborating increasingly with each other, trying to do new things, trying to in some sense break things and willing to learn from those things,” he said. “A lot of this movement is going to really come from the willingness of organizations to relook at their skills, relook at their processes and, more importantly, relook at the culture and leadership, and how they reward and hire people.”

The container toolbelt
Containers are a way to easily package your software, but there is still the matter of existing server, storage, networking and security infrastructure a business needs to consider.

As leaders look to create a container strategy, they need to address operations management, application software installation, infrastructure software installation and management, and physical and virtual server installation. Each one of those different pieces require different tools and approaches to have a successful container transition, according to Gartner’s Chandrasekaran.

Operations management includes scheduling, resource management and monitoring. “Scheduling and resource management are key, as containers allow denser packing of hardware resources than virtualization,” said Chandrasekaran. He recommends looking at tools such as Google’s Kubernetes, Docker Swarm, Apache Mesos and Mesosphere’s Datacenter Operating System (DC/OS).

Since containerization is such a new technology and skill for everyone, CoreOS’s Burnett says it is best to look toward a solution that has an established operationalized knowledge base on how to run containers in production like Kubernetes.

According to Weaveworks’ Richardson, orchestration provides an easy way to discover and maintain containers that you wouldn’t be able to do manually. “You don’t want to be looking at hundreds of machines, or even tens of machines and have to worry about what software is deployed on which one,” he said.

In addition, Chandrasekaran says granular monitoring tools that handle container-level monitoring will help developers identify bottlenecks and failures, as well as pinpoint problems. Scheduling and orchestration will allow users to scale containers and have them interoperate with other parts of the infrastructure.

Application software installation includes activities associated with installing the app software within the containers. According to Chandrasekaran, it is important to maintain the registries that store the software and ensure developers are using the right software. “Without this governance, developers are free to use any application or application infrastructure. Among the enterprise-hosted offerings in this area are solutions from Docker and CoreOS,” he said.

Service management includes activities involving the development and operations of the service, such as container runtime and container discovery. Here, traditional operating systems and container formats are used with an operations management process, Chandrasekaran explained. According to Richardson, an operational platform complements other solutions because it provides the ability to troubleshoot and diagnose issues and correlate them with results.

Infrastructure software installation and management includes infrastructure provisioning, configuration and patching functions. “This includes the installation of the underlying operating system that is virtualized to make containers. After installation, the configuring and ongoing patching of the operating system must be performed,” Chandrasekaran said. Chandrasekaran believes users need a continuous configuration automation process to work with containers.

Physical and virtual server installation is provisioning the infrastructure where containers reside. According to Chandrasekaran, enterprises are deploying containers within VMs because of their ability to separate individual containers, and the mature tooling found in the VM world. Over time, however, Chandrasekaran sees more companies taking an interest in developing new container-related tools that are in line with VM management. Serverless technology is an area Microsoft’s Sanders believes is growing. According to him, it allows developers to focus on developing applications, not managing machines or worrying about virtual machines, and in turn boosts productivity.

“The world of microservices and containers is evolving rapidly. There are multiple popular offerings for container orchestration and management. We see this diversity continuing as customer needs continue to diversify,” Sanders said.

Other necessities for containerization include having proper governance and security policies in place to prevent things like malicious code from coming in. Chandrasekaran recommends trusted registries to help monitor container traffic. In addition, Rancher Labs’ Liang says businesses need to implement internal processes to prevent the operations team from looking at things they aren’t supposed to, such as customer data. “Breaches don’t just come from the outside; they come from within the organization too. You want to make sure your security and privacy concerns are solved,” he said.

For networking and storage, you need to have a back-end infrastructure that is agile-oriented, and allows for a more automated process. Gartner’s Chandrasekaran is seeing more people interested in cloud infrastructure because it lessens the pain that comes with hardware management, and allows users to quickly provision and scale infrastructure.

Liang believes cloud infrastructure is important because if you have a system running on a couple of servers in your own data center and you have a bad network connection, it is not going to scale. The cloud can help ensure teams store data reliably, move data from one host to another, handle load-balancing problems, and solve networking and storage problems.

Additionally, Docker’s Messina believes teams need to have an overall management platform that covers the container lifecycle from developers to operations, and allows Dev and Ops to collaborate.

“Container technology is no longer playing around. It is for real,” said Weaveworks’ Richardson. “It is becoming easier and easier for application developers to use this with their favorite tool. 2017 is the year they should start doing it, if they haven’t already.”

Notes from Node.js Interactive: Node.js VM-neutrality, the Node.js security project, and NodeSource NSolid 2.0 (Tue, 29 Nov 2016)
https://sdtimes.com/chakracore/notes-nodejs-interactive-nodejs-vm-neutrality-nodejs-security-project-nodesource-nsolid/

The Node.js Foundation is continuing its mission to make Node.js VM-neutral. The foundation announced major milestones toward allowing the solution to work in a wide variety of VMs at the Linux Foundation’s Node.js Interactive conference this week.

According to the foundation, VM-neutrality will allow Node.js to expand its ecosystem to more devices and workloads, such as the Internet of Things and mobile devices. Other benefits include developer productivity and standardized efforts.

As part of VM-neutrality, the foundation has announced that the Node.js API is now independent from any changes in V8, the open-source JavaScript engine. “A large part of the Foundation’s work is focused on improving versatility and confidence in Node.js,” said Mikeal Rogers, community manager of the Node.js Foundation. “Node.js API efforts support our mission of spreading Node.js to as many different environments as possible. This is the beginning of a big community web project that will give VMs the same type of competition and innovation that you see within the browser space.”

(Related: What’s in Node.js 6.0)

In addition, the foundation revealed the Node.js build system will start to produce nightly builds of node-chakracore, allowing Node.js to be used with Microsoft’s JavaScript engine, ChakraCore.

“Today, there is a proliferation in the variety of device types, each with differing resource constraints,” wrote Arunesh Chandra, senior program manager for Chakra, in a blog post. “In this device context, we believe that enabling VM-neutrality in Node.js and providing choice to developers across various device types and constraints are key steps to help the Node.js ecosystem continue to grow.”

The Node.js Foundation also announced plans to oversee a Node.js security project at the conference, which is designed to detect and disclose security vulnerabilities in Node.js. According to Rogers, the foundation will allow security vendors to contribute to its common vulnerability repository.

“Given the maturity of Node.js and how widely used it is in enterprise environments, it makes sense to tackle this endeavor under open governance facilitated by the Node.js Foundation,” said Rogers. “This allows for more collaboration and communication within the broad community of developers and end users, ensuring the stability and longevity of the large, continually growing Node.js ecosystem.” A Node.js security project working group will be established as part of the Node.js Foundation.

In other Node.js news, enterprise Node company NodeSource announced it is expanding its production toolset with NodeSource Certified Modules and the release of NSolid v2.0. NodeSource Certified Modules is designed to provide security and trust to third-party JavaScript solutions. The solution verifies trustworthiness through the NodeSource Certification Process, and it ensures a stable, reliable and secure source.

NSolid v2.0 is the latest release of the company’s enterprise-grade Node.js platform, and it features automated error reporting, real-time metrics, built-in security features, CPU profiling, and performance monitoring.


Yext’s location data developer platform, Scala 2.12.0, and ClusterHQ’s FlockerHub and Fli—SD Times news digest: Nov. 3, 2016
https://sdtimes.com/application-testing/yexts-location-data-developer-platform-scala-2-12-0-clusterhqs-flockerhub-fli-sd-times-news-digest-nov-3-2016/

Yext, a database for location data, has announced a new developer platform designed to take location data out of spreadsheets and into a more centralized solution. The Yext Location Cloud Platform is designed to give businesses the control and ability to manage their location data across the organization. It features open APIs, a developer console, a developer portal, developer accounts, geocoding, custom fields, smart address formats, and a customizable local user portal.

“Location data touches all parts of an organization, but typically, each department has their own system of management, leading to duplicate and often contradictory information within the company,” said Marc Ferrentino, Yext’s Chief Strategy Officer. “From changes in office hours to warehouse openings and closings—and from localized email sends to support line call-routing—businesses need a centralized way to manage their location data.”

Scala 2.12.0 available
Scala 2.12.0 is now available, with a completely overhauled Scala compiler that makes use of the new VM features available in Java 8. A new optimizer is included in this release, along with improvements to Scala and Java 8 interoperability.

Scala 2.12 is all about making optimal use of Java 8’s new features, which is why it generates code that requires a Java 8 runtime, according to a company release. A trait compiles directly to an interface with default methods in this release, which improves binary compatibility and Java interoperability, according to the company. The new optimizer eliminates closure allocations, dead code, and box/unbox pairs more effectively. From this point on, 2.12.x releases will be fully binary-compatible.
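
To see what that means in practice, here is a sketch of the Java 8 shape a Scala 2.12 trait compiles to; the Greeter trait is invented for illustration, and the Java below approximates, rather than reproduces, the compiler’s exact output:

```java
// Roughly what `trait Greeter { def name: String; def greet = "Hello, " + name }`
// compiles to under Scala 2.12: an interface with a default method.
interface Greeter {
    String name();
    default String greet() { return "Hello, " + name(); }
}

public class JavaGreeter implements Greeter {
    @Override public String name() { return "SD Times"; }

    public static void main(String[] args) {
        // Java code consuming the trait-shaped interface directly.
        System.out.println(new JavaGreeter().greet()); // Hello, SD Times
    }
}
```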

By the end of November, known issues in this release will be resolved in 2.12.1. The list of open-source libraries released for Scala 2.12 can be found here.

New products from ClusterHQ give DevOps teams new data-management capabilities
ClusterHQ has announced new container data management products called FlockerHub and Fli. Both products will be available through ClusterHQ’s beta program.

FlockerHub is like GitHub for data, said the company in a statement. Fli is a Git-like command-line interface that lets developers push and pull data volumes to FlockerHub. With both products, developers and DevOps teams can seamlessly move data between their devices, test environments, data centers and clouds.

Both products aim to solve some of the challenges DevOps teams face for effective data management across complete application life cycles, said the company. Both FlockerHub and Fli improve processes for managing and distributing data across containerized environments.

Some features of the products include the ability to use realistic test data, on-demand staging for each commit, and, with Fli, regular snapshotting of database states that can be pushed to FlockerHub.

FlockerHub launches in beta on Nov. 8, and developers can sign up here. Fli will be available as an Apache 2.0 download the same day.

Red Hat updates its Linux operating system
Red Hat has announced the availability of Red Hat Enterprise Linux 7.3. The latest release features improvements to performance, security and reliability, as well as new capabilities around Linux containers and the Internet of Things.

“As modern enterprise applications become increasingly resource intensive at both the network and storage levels, IT infrastructure must not just keep pace, but anticipate and adapt to these changing needs,” said Jim Totton, vice president and general manager of the platforms business unit at Red Hat. “Red Hat Enterprise Linux 7.3 delivers increased application performance and a more secure, reliable and innovative enterprise platform, well suited for existing mission-critical workloads and emerging technology deployments like Linux containers and IoT.”

CoreOS and Intel to collaborate on OpenStack with Kubernetes (Fri, 01 Apr 2016)
https://sdtimes.com/containers/coreos-and-intel-to-collaborate-on-openstack-with-kubernetes/

CoreOS and Intel aim to bring virtual machines and containers together with their newly announced technical collaboration. The companies have announced plans to deploy and manage OpenStack, the open-source software for building clouds, with Kubernetes, the open-source system for automating deployment, scaling and operations of applications.

“A collaboration between Intel and CoreOS is a huge step forward for enterprises looking to achieve hyperscale,” said Jason Waxman, vice president and general manager of the Cloud Platforms Group at Intel. “Both the Kubernetes and OpenStack communities can benefit greatly by having an orchestration layer to manage workloads across VMs and containers.”

(Related: CoreOS’ Docker alternative reaches version 1.0)

Together, CoreOS and Intel want to integrate Kubernetes and OpenStack into a single open-source software-defined infrastructure (SDI) stack. CoreOS also has plans to offer the stack as an option in Tectonic as a way to achieve “Google’s infrastructure for everyone else” strategy; simplify OpenStack deployment and management; provide the ability to rapidly release OpenStack clusters for development, test, QA or production; and provide a consistent platform for VMs running on top of Kubernetes.

“Together with Intel, we are accelerating the industry forward in reaching GIFEE (Google’s infrastructure for everyone else),” said Alex Polvi, CEO of CoreOS. “By running OpenStack on Kubernetes, you get the benefits of consistent deployments of OpenStack with containers together with the robust application life-cycle management of Kubernetes.”

This collaboration marks another step in CoreOS and Intel’s commitment to deliver Tectonic on consumer appliances.

IBM partners up for cloud-based virtual machines (Mon, 22 Feb 2016)
https://sdtimes.com/bluemix/ibm-partners-up-for-cloud-based-virtual-machines/

IBM’s InterConnect 2016 conference kicked off today, with the company making cloud-based announcements for its product lines. Chief among them was a new partnership with VMware to bring virtual-machine-hosted applications into IBM’s cloud-based offerings.

The IBM announcement was riddled with partnerships, many of which were focused on bringing the benefits of IBM’s cloud offerings to existing customers of these third-party services. GitHub will be offering its enterprise edition within IBM’s cloud as a hosted GitHub solution. The collaboration will also yield Bluemix integrations for IoT users based on GitHub.

Chris Wanstrath, cofounder and CEO of GitHub, said that “Great software is no longer a nice-to-have in the enterprise, and developers expect to be able to build software quickly and collaboratively. By making GitHub Enterprise available on the IBM Cloud, even more companies will be able to tap into the power of social coding, and build the best software, faster.”

(Related: VMware wants its own hybrid cloud)

IBM’s cloud will also host VMware virtual machines. As part of the deal, the IBM Cloud will join the VMware vCloud Air Network, enabling hybrid cloud deployments inside enterprises.

Pat Gelsinger, CEO of VMware, said, “This partnership, an extension of our 14-year plus relationship with IBM, demonstrates a shared vision that will help enterprise customers more quickly and easily embrace the hybrid cloud. Our customers will be able to efficiently and securely deploy their proven software-defined solutions with sophisticated workload automation to take advantage of the flexibility and cost effectiveness of IBM Cloud.”

Robert LeBlanc, senior vice president of IBM Cloud, said, “We are reaching a tipping point for cloud as the platform on which the vast majority of business will happen. The strategic partnership between IBM and VMware will enable clients to easily embrace the cloud while preserving their existing investments and creating new business opportunities.”

IBM expanded its cloud offerings in other ways as well. It introduced WebSphere Cloud Connect, which takes existing applications and turns them into easily discoverable APIs for cloud-based hosting.

Marie Wieck, general manager of IBM WebSphere Cloud Connect, said, “The power of cloud-based applications is that you can easily represent both real-time information and the collective knowledge on any topic. That’s always going to be a combination of newly created services and existing apps, many of which exist on premises. Our objective is to make those distinctions go away for a developer. A developer shouldn’t care where a piece of data, a microservice, or even an IBM Watson cognitive system resides; the platform should do that for them.”

IBM also introduced Bluemix OpenWhisk today, a simpler platform for constructing IoT applications. Bluemix OpenWhisk includes container support, built-in AI capabilities, and the ability to chain together small pieces of code to create microservices.

Finally, IBM introduced a number of tools aimed at winning over Swift developers. The company introduced a Swift Sandbox for developers to try the language in the IBM cloud. Swift is also supported in Bluemix and with Kitura, a new open-source Web server released by IBM for Linux and OS X. Bluemix also now contains a Swift Package catalog for developers to share their applications across the IBM developer community.
