VMs Archives - SD Times
https://sdtimes.com/tag/vms/

Twistlock goes beyond container security
Tue, 05 Mar 2019 19:20:33 +0000
https://sdtimes.com/containers/twistlock-goes-beyond-container-security/
Twistlock has announced the latest release of its cloud-native security platform. Twistlock 19.03 is designed to expand the company’s security capabilities to hosts, containers and serverless solutions.

“Our approach is different because we’re not repacking legacy technologies or focusing on only a single aspect of host defense. Instead, Twistlock provides vulnerability management, compliance, runtime defense, and firewalling across all your VMs in all your clouds. We’re able to do this because we started on the harder problem first – containers, where you have many more entities, they’re all ephemeral, and they’re changing all the time,” John Morello, CTO for Twistlock, wrote in a blog post.

According to Morello, while VMs have been around for quite some time and are used in a number of different scenarios, the company is focusing on modern, cloud-focused deployments. “[W]e continue to heavily invest in container and serverless features but adding VMs provides truly comprehensive and consistent protection across all your workloads regardless of where on the continuum they’re run,” he explained.

Key features of the release include:

    • Cloud native network firewall and radar for hosts: One of the key challenges with cloud-based VMs is maintaining a least-privilege networking model for the apps they run, Morello explained. Cloud native network firewall for hosts is designed to automate learning and workload awareness. Radar for hosts aims to display vulnerability, compliance and runtime status.
    • Host file integrity monitoring: A central place for security and compliance policy. According to Morello, it “enables monitoring of host file systems for specific changes to directories and files by specific users.”
    • Host forensics: First introduced in Twistlock 2.5 as a continuous forensic capability, or ‘flight data recorder,’ for containers, host forensics aims to behave similarly for hosts and is designed to keep a self-managed local log of forensic activity.
    • Custom runtime rule language: Designed to provide control over discrete runtime behaviors in containers and hosts. “These custom rules enable you to specify exact conditions to watch for and exact actions to take when they’re encountered,” Morello wrote.
    • Cloud compliance v2: Provides deeper compliance capabilities for AWS and includes the CIS AWS Foundations Benchmark checks.
    • Assigned collections: Aims to make it easier to provide least privilege access to data within a Twistlock environment, such as allowing a given dev team to only see vulnerability data about their own images.
    • RASP defender: “RASP (Runtime Application Self Protection) is an industry term for embedding security within an app, rather than relying on an external tool. RASP Defender is a simple binary that runs as part of an app (even a non-containerized app) and provides automatic process and network based runtime defense, such as preventing anomalous processes from starting and blocking access to undesired DNS namespaces,” Morello explained.

Additionally, Morello says the release has a number of smaller improvements that include native Helm support, the ability to upload debug data, real-time log ingestion, simplified vulnerability management policy and separate host and container policies.
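The custom runtime rules described above boil down to pairs of a condition to watch for and an action to take when it matches. The sketch below illustrates that idea in Python; the rule structure, process names and rule names are hypothetical, and this is not Twistlock's actual rule syntax.

```python
# Hypothetical condition -> action runtime rules, NOT Twistlock's real
# rule language: each rule holds a predicate over a runtime event and
# an action to take ("alert" or "block") when the predicate matches.

from dataclasses import dataclass
from typing import Callable

@dataclass
class RuntimeRule:
    name: str
    condition: Callable[[dict], bool]   # predicate over a runtime event
    action: str                         # e.g. "alert" or "block"

def evaluate(rules, event):
    """Return (rule name, action) for every rule whose condition matches."""
    return [(r.name, r.action) for r in rules if r.condition(event)]

# Two made-up example rules.
rules = [
    RuntimeRule("no-netcat", lambda e: e.get("process") == "nc", "block"),
    RuntimeRule("flag-root-shell",
                lambda e: e.get("process") == "sh" and e.get("user") == "root",
                "alert"),
]

print(evaluate(rules, {"process": "nc", "user": "app"}))
# -> [('no-netcat', 'block')]
```

The appeal of this model is that defenders express intent declaratively, while the engine handles matching every rule against every runtime event.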

Five best practices to keep containerized infrastructure safe and secure
Thu, 06 Jul 2017 17:02:07 +0000
https://sdtimes.com/automation/five-best-practices-to-keep-containerized-infrastructure-safe-and-secure/
Software containers are indisputably on the rise. Developers looking to build more efficient applications and quickly bring them to market love the flexibility that containers provide when building cloud-native applications. Enterprises also benefit from productivity gains and cost reductions, thanks to the improved resource utilization containers provide. Some criticize containers as being less secure than deployments on virtual machines (VMs), but with proper implementation, containers can deliver a more secure environment. Security on the Internet is a complex problem, but we’re developing the tools and processes needed to solve it.

Additionally, containers and VMs aren’t an either-or proposition. It’s possible to deploy containers onto VMs if that’s what you choose to do, or use technologies like Intel’s Clear Containers or the open-source Hyper to achieve the best of both worlds: The isolation of a VM with the flexibility of a container.

Containers and distributed systems provide a level of development flexibility and speed that outpaces traditional processes, making late adoption a handicap to competitiveness. Once you decide to migrate to container deployments, make sure you take the appropriate steps to protect your infrastructure.

Here are five best practices to secure your distributed systems:

  1. Use a lightweight Linux operating system
    A lightweight OS, along with other benefits, reduces the surface area vulnerable to attack. It also makes applying updates a lot easier, as the OS updates are decoupled from the application dependencies, and take less time to reboot after an update.
  2. Keep all images up to date
    Keeping all images up to date ensures they’re patched against the latest exploits. The best way to achieve this is to use a centralized repository to help with versioning. By tagging each container with a version number, updates are easier to manage. The containers themselves also hold their own dependencies which need to be maintained.
  3. Automate security updates
    Automated updates ensure that patches are quickly applied to your infrastructure, minimizing the time between publishing the patch and applying it to production. Decoupled containers can be updated independently from each other, and can be migrated to another host if the host OS needs to be updated. This helps remove concern about infrastructure security updates affecting other parts of your stack.
  4. Scan container images for potential defects
    There are lots of tools available to help with this. These tools compare container manifests against lists of known vulnerabilities and alert you when they detect a known vulnerability that might affect your container at startup, or when a newly discovered vulnerability would affect your running containers.
  5. Don’t run extraneous network-facing services in containers
    It’s considered best practice to not run Secure Shell (SSH) in containers – orchestration APIs typically have better access controls for container access. A good rule of thumb is if you don’t expect to perform routine maintenance tasks on individual containers, don’t allow any log-in access at all. It is also a good idea to design your containers for a shorter life than you plan for VMs, which ensures each new lifecycle can take advantage of updated security.
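Practice #4 above, scanning images for defects, can be sketched as a toy scanner that checks a container's package manifest against a vulnerability database. This is a minimal illustration, not a real scanner; the package names, versions and CVE identifiers below are invented for the example.

```python
# Minimal sketch of image scanning: compare a container image's package
# manifest against a list of known-vulnerable versions. All package
# names, versions and CVE IDs here are hypothetical.

def scan(manifest, vuln_db):
    """Return (package, version, cve) for every vulnerable package."""
    findings = []
    for pkg, version in manifest.items():
        for cve, bad_versions in vuln_db.get(pkg, []):
            if version in bad_versions:
                findings.append((pkg, version, cve))
    return findings

manifest = {"openssl": "1.0.1e", "curl": "7.50.0"}
vuln_db = {
    "openssl": [("CVE-XXXX-0001", {"1.0.1e", "1.0.1f"})],  # made-up IDs
    "curl":    [("CVE-XXXX-0002", {"7.49.0"})],
}

print(scan(manifest, vuln_db))
# -> [('openssl', '1.0.1e', 'CVE-XXXX-0001')]
```

Real scanners additionally match version ranges and re-check running containers whenever the vulnerability feed updates, which is what makes the continuous alerting described above possible.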

Container security will continue to evolve. By following the five best practices outlined in this article, I hope to help dispel the myth that containers are not secure and help enterprises take advantage of the productivity gains they provide while ensuring they are as secure as they can be today.

Kotlin/Native, Infragistics Ultimate UI for Xamarin, ActivePython 2.7.13 and 3.5.3, and Progress OpenEdge 11.7 — SD Times news digest: April 4, 2017
Tue, 04 Apr 2017 15:13:52 +0000
https://sdtimes.com/activepython/kotlinnative-infragistics-ultimate-ui-xamarin-activepython-2-7-13-3-5-3-progress-openedge-11-7-sd-times-news-digest-april-4-2017/
JetBrains is giving developers a first look at its Kotlin/Native compiler. The compiler is designed to compile Kotlin into machine code, and deliver executables without virtual machines.

“Kotlin/Native is another step toward making Kotlin usable throughout a modern application. Eventually, it will be possible to use Kotlin to write every component, from the server back-end to the web or mobile clients. Sharing the skill set is one big motivation for this scenario. Another is sharing actual code,” Andrey Breslav, lead language designer of Kotlin at JetBrains, wrote in a post.

The Kotlin team plans to create common modules, which are written in Kotlin and can be compiled to supported platforms such as Kotlin/JVM, Kotlin/JS and Kotlin/Native. Going forward, the team hopes to bring Kotlin/Native to iOS apps, embedded systems and game development. Known issues include no performance optimization and incomplete standard library and reflection support.

LogiGear’s software testing industry survey
According to a recently released report, test automation continues to be a problem for the software development industry. LogiGear released its first in a four-part survey series report on the software testing landscape. The first survey, Testing Essentials, reveals management still does not understand the processes and tools associated with test automation, making it a huge hurdle for teams to get over.

Other results revealed that projects are often behind schedule due to changes in user stories and requirements; 67% of respondents write tests based on release or user stories; 54% have testing strategies that include only API integration and UI testing; and issues around automation contribute to poor-quality software.

Infragistics releases Ultimate UI for Xamarin
Infragistics is announcing new UI controls and productivity tools for Microsoft Visual Studio developers in its latest release. The company announced the Infragistics Ultimate UI for Xamarin with new UI controls and developer productivity tools.

The UI solution features Productivity Pack to visually map an app’s entire flow and generate all views. Other features include: App Map, Xamarin.Forms toolbox, DataGrid, DataCharts, Mobile Schedule, control configurations, documentation, and a reference app.

“We want to provide dev teams with the tools they need to allow them to focus on solving business problems, doing the kinds of work they got into coding to do, not doing grunt work or wrangling code onto different devices and platforms,” said Jason Beres, senior vice president of developer tools at Infragistics.

ActivePython 2.7.13 and ActivePython 3.5.3
ActiveState has announced updates to its Python solutions. ActivePython 2.7.13 and ActivePython 3.5.3 feature popular packages for data science and web app development as well as a comprehensive commercial Python distribution.

“While ActivePython is used by millions of developers around the world because it’s an easy way to install Python for their projects, large organizations also love using our Python distribution because it meets their security and open source compliance policies,” said Bart Copeland, ActiveState CEO. “Developers download packages from various repositories in order to get the job done, but they don’t always think about licensing or security issues. By using ActivePython organizations don’t have to worry about these potential vulnerabilities — our distribution and its packages have been vetted for security and have been accompanied by a complete license review.”

Hortonworks Data Platform 2.6
Hortonworks announced version 2.6 of its data platform with improved data science, enterprise-grade security and streamlined operations. According to the company, it helps deliver real-time operational analytics directly from the data lake, and harvests value from data faster.

“HDP 2.6 showcases the advantages of the open source community. Significant innovation is coming out of the Apache community, and because of our commitment to delivering an open platform, we are uniquely able to bring these value-creating capabilities to customers,” said Scott Gnau, chief technology officer at Hortonworks. “HDP 2.6 introduces key new enterprise features and performance improvements that will benefit our customers immediately—no application re-write required.”

Features include improved user experience for data scientists, enhancements to Ranger and Atlas, and streamlined and proactive operations.

Progress OpenEdge 11.7
Progress’ new release of its app development platform OpenEdge is designed to provide always-on performance as well as security and data management capabilities. The 11.7 release features operations at scale, high availability, application security, SQL enhancements, streamlined installation, and highly requested improvements such as OOABL performance improvements and an optional “strict compile” mode.

“Our customers rely on the OpenEdge platform to develop and deploy the applications that drive their businesses. They depend on us to push the boundaries of the OpenEdge platform with every release, to ensure we’re providing what they need to stay competitive,” said Colleen Smith, vice president and general manager of Progress OpenEdge. “With the release of the OpenEdge 11.7 platform, we are building on this tradition of excellence with capabilities that provide additional assurances of uptime, security and accurate analytics.”

 

Notes from Node.js Interactive: Node.js VM-neutrality, the Node.js security project, and NodeSource NSolid 2.0
Tue, 29 Nov 2016 20:00:43 +0000
https://sdtimes.com/chakracore/notes-nodejs-interactive-nodejs-vm-neutrality-nodejs-security-project-nodesource-nsolid/
The Node.js Foundation is continuing its mission to make Node.js VM-neutral. The foundation announced major milestones toward allowing the solution to work in a wide variety of VMs at the Linux Foundation’s Node.js Interactive conference this week.

According to the foundation, VM-neutrality will allow Node.js to expand its ecosystem to more devices and workloads, such as the Internet of Things and mobile devices. Other benefits include developer productivity and standardized efforts.

As part of VM-neutrality, the foundation has announced that the Node.js API is now independent from any changes in V8, the open-source JavaScript engine. “A large part of the Foundation’s work is focused on improving versatility and confidence in Node.js,” said Mikeal Rogers, community manager of the Node.js Foundation. “Node.js API efforts support our mission of spreading Node.js to as many different environments as possible. This is the beginning of a big community web project that will give VMs the same type of competition and innovation that you see within the browser space.”

(Related: What’s in Node.js 6.0)

In addition, the foundation revealed the Node.js build system will start to produce nightly builds of node-chakracore, allowing Node.js to be used with Microsoft’s JavaScript engine, ChakraCore.

“Today, there is a proliferation in the variety of device types, each with differing resource constraints,” wrote Arunesh Chandra, senior program manager for Chakra, in a blog post. “In this device context, we believe that enabling VM-neutrality in Node.js and providing choice to developers across various device types and constraints are key steps to help the Node.js ecosystem continue to grow.”

The Node.js Foundation also announced plans to oversee a Node.js security project at the conference, which is designed to detect and disclose security vulnerabilities in Node.js. According to Rogers, the foundation will allow security vendors to contribute to its common vulnerability repository.

“Given the maturity of Node.js and how widely used it is in enterprise environments, it makes sense to tackle this endeavor under open governance facilitated by the Node.js Foundation,” said Rogers. “This allows for more collaboration and communication within the broad community of developers and end users, ensuring the stability and longevity of the large, continually growing Node.js ecosystem.” A Node.js security project working group will be established as part of the Node.js Foundation.

In other Node.js news, enterprise Node company NodeSource announced it is expanding its production toolset with NodeSource Certified Modules and the release of NSolid v2.0. NodeSource Certified Modules is designed to provide security and trust to third-party JavaScript solutions. The solution verifies trustworthiness through the NodeSource Certification Process, and it ensures a stable, reliable and secure source.

NSolid v2.0 is the latest release of the company’s enterprise-grade Node.js platform, and it features automated error reporting, real-time metrics, built-in security features, CPU profiling, and performance monitoring.

CoreOS and Intel to collaborate on OpenStack with Kubernetes
Fri, 01 Apr 2016 18:00:12 +0000
https://sdtimes.com/containers/coreos-and-intel-to-collaborate-on-openstack-with-kubernetes/
CoreOS and Intel aim to bring virtual machines and containers together with their newly announced technical collaboration. The companies have announced plans to deploy and manage OpenStack, the open-source software for building clouds, with Kubernetes, the open-source system for automating deployment, scaling and operations of applications.

“A collaboration between Intel and CoreOS is a huge step forward for enterprises looking to achieve hyperscale,” said Jason Waxman, vice president and general manager of the Cloud Platforms Group at Intel. “Both the Kubernetes and OpenStack communities can benefit greatly by having an orchestration layer to manage workloads across VMs and containers.”

(Related: CoreOS’ Docker alternative reaches version 1.0)

Together, CoreOS and Intel want to integrate Kubernetes and OpenStack into a single open-source software-defined infrastructure (SDI) stack. CoreOS also has plans to offer the stack as an option in Tectonic as a way to achieve “Google’s infrastructure for everyone else” strategy; simplify OpenStack deployment and management; provide the ability to rapidly release OpenStack clusters for development, test, QA or production; and provide a consistent platform for VMs running on top of Kubernetes.

“Together with Intel, we are accelerating the industry forward in reaching GIFEE (Google’s infrastructure for everyone else),” said Alex Polvi, CEO of CoreOS. “By running OpenStack on Kubernetes, you get the benefits of consistent deployments of OpenStack with containers together with the robust application life-cycle management of Kubernetes.”

This collaboration marks another step in CoreOS and Intel’s commitment to deliver Tectonic on consumer appliances.

IBM partners up for cloud-based virtual machines
Mon, 22 Feb 2016 19:00:20 +0000
https://sdtimes.com/bluemix/ibm-partners-up-for-cloud-based-virtual-machines/
IBM’s InterConnect 2016 conference kicked off today, with the company making cloud-based announcements for its product lines. Chief among them was a new partnership with VMware to bring virtual-machine-hosted applications into IBM’s cloud-based offerings.

The IBM announcement was riddled with partnerships, many of which were focused on bringing the benefits of IBM’s cloud offerings to existing customers of these third-party services. GitHub will be offering its enterprise edition within IBM’s cloud as a hosted GitHub solution. The collaboration will also yield Bluemix integrations for IoT users based on GitHub.

Chris Wanstrath, cofounder and CEO of GitHub, said that “Great software is no longer a nice-to-have in the enterprise, and developers expect to be able to build software quickly and collaboratively. By making GitHub Enterprise available on the IBM Cloud, even more companies will be able to tap into the power of social coding, and build the best software, faster.”

(Related: VMware wants its own hybrid cloud)

IBM’s cloud will also host VMware virtual machines. As part of the VMware family, the IBM Cloud will be part of the VMware vCloud Air Network, and will enable hybrid cloud deployments inside enterprises.

Pat Gelsinger, CEO of VMware, said, “This partnership, an extension of our 14-year plus relationship with IBM, demonstrates a shared vision that will help enterprise customers more quickly and easily embrace the hybrid cloud. Our customers will be able to efficiently and securely deploy their proven software-defined solutions with sophisticated workload automation to take advantage of the flexibility and cost effectiveness of IBM Cloud.”

Robert LeBlanc, senior vice president of IBM Cloud, said, “We are reaching a tipping point for cloud as the platform on which the vast majority of business will happen. The strategic partnership between IBM and VMware will enable clients to easily embrace the cloud while preserving their existing investments and creating new business opportunities.”

IBM expanded its cloud offerings in other ways as well. It introduced WebSphere Cloud Connect, which takes existing applications and turns them into easily discoverable APIs for cloud-based hosting.

Marie Wieck, general manager of IBM WebSphere Cloud Connect, said, “The power of cloud-based applications is that you can easily represent both real-time information and the collective knowledge on any topic. That’s always going to be a combination of newly created services and existing apps, many of which exist on premises. Our objective is to make those distinctions go away for a developer. A developer shouldn’t care where a piece of data, a microservice, or even an IBM Watson cognitive system resides; the platform should do that for them.”

IBM also introduced Bluemix OpenWhisk today, a simpler platform for constructing IoT applications. Bluemix OpenWhisk includes container support, built-in AI capabilities, and the ability to chain together small pieces of code to create microservices.
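The "chaining small pieces of code" model can be sketched as follows: an OpenWhisk-style action is a function that takes a dictionary of parameters and returns a dictionary, and chaining feeds one action's output into the next. The two actions and the `compose()` helper below are hypothetical illustrations (OpenWhisk proper wires actions together with action sequences rather than a local helper).

```python
# Two small OpenWhisk-style Python actions chained together. Each action
# takes a dict of parameters and returns a dict; compose() is a local
# stand-in for an OpenWhisk action sequence.

def parse_reading(params):
    """Action 1: extract a numeric temperature from a raw IoT payload."""
    return {"celsius": float(params["raw"].strip().rstrip("C"))}

def check_threshold(params):
    """Action 2: flag readings above a fixed threshold."""
    return {"celsius": params["celsius"], "alert": params["celsius"] > 30.0}

def compose(*actions):
    """Chain actions so each one's output becomes the next one's input."""
    def sequence(params):
        for action in actions:
            params = action(params)
        return params
    return sequence

pipeline = compose(parse_reading, check_threshold)
print(pipeline({"raw": "31.5C"}))
# -> {'celsius': 31.5, 'alert': True}
```

Keeping each step a tiny, stateless function is what lets a platform like OpenWhisk scale and recombine the pieces independently.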

Finally, IBM introduced a number of tools aimed at winning over Swift developers. The company introduced a Swift Sandbox for developers to try the language in the IBM cloud. Swift is also supported in Bluemix and with Kitura, a new open-source Web server released by IBM for Linux and OS X. Bluemix also now contains a Swift Package catalog for developers to share their applications across the IBM developer community.

20 ways to build up your Azure deployment
Mon, 28 Dec 2015 14:00:38 +0000
https://sdtimes.com/azure/20-ways-to-build-up-your-azure-deployment/
Despite playing catch-up to Amazon Web Services, Microsoft Azure has quickly become a contender with its powerful Platform-as-a-Service and Infrastructure-as-a-Service offerings. With constant innovations around usability, open source and cross-platform compatibility, infrastructure management and evolving software development paradigms for new devices and applications, it can be hard to get your bearings within the vast platform.

First things first: What comprises the Azure platform? It’s not turtles all the way down; in Azure principal program manager Scott Hanselman’s words, the underlying layer, the “infinite hard disk in the sky,” is Azure storage, where you can drill down to every virtual hard disk (VHD) image in your deployment. The next level up comprises virtual machines, which you choose, configure and manage.

On top of those VMs is a middle ground between IaaS and PaaS: Worker Roles, which are stateless cloud apps that can scale their VMs up or down. Above Worker Roles we are clearly in PaaS territory, with Web Apps, Azure Batch and HDInsight (Hadoop) for Big Data analysis. And at the top there are Web Jobs, Mobile Apps, and Media Services. These pieces are among those also available as the Azure Stack for on-premise datacenters or hybrid cloud applications.

(Related: PaaS gets a new lease on life)

“The first distinction to make as an Azure customer is, do I want to consume VMs, or do I want to consume the platform that Azure provides me?” said Esteban Garcia, Visual Studio ALM MVP and chief technologist for Nebbia Technology, an Azure company in Orlando. “Within PaaS, we typically do a Web app and SQL services. Those are pretty straightforward and easy to discover.”

It’s likely that most cloud problems you’re facing have already been solved somewhere in the Azure community, advises Corey Sanders, director of program management for Azure. “Make sure you look at the full breadth of services we offer, because they solve a lot of different problems. That’s the value of having a broad platform such as Azure,” he said.

Read on for tips from these Azure experts.

1. Saving pennies, saving dollars
The promise of the cloud is elasticity: sizing your deployment according to demand. Too often, Azure users are surprised by the multiple dimensions of pricing, from storage to transactions, support, bandwidth and more. The first step toward transparency in billing is not to “set it and forget it” but to monitor, measure and adjust frequently.

“One aspect of VMs that is not well known but that I think is super cool is the wide variety of VM sizes that we now offer,” said Sanders. “And the fact that when we stop a VM through the portal, we actually stop billing for it. The combo of those two has resulted in very few points of contention around billing.”

The Azure Billing Alert Service can create customized billing alerts for your Azure accounts, and the pricing calculator is your friend.

2. Elastic Scale for Azure SQL Database
Performance and price limits on Azure SQL Database, which offers a subset of SQL Server features, can slow you down while also burning cash.

“People use SQL Service instead of the full server,” said Garcia. “Whenever you do that, you pay for Database Throughput Units. You could be paying for 10 databases with five DTUs each, which is a small number of DTUs to use. What you can do is start using this elastic scale, say ‘I’m going to assign 100 DTUs to 10 databases,’ and they are going to share that processing power among all the databases.” In other words, a P3 tier database costs US$4.65 per DTU per month, while the same number of DTUs scaled out rather than up on an S2 instance costs $1.50 per DTU per month.
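The arithmetic behind those figures is worth a quick sanity check. The sketch below uses the per-DTU prices quoted in the article (current Azure pricing differs); the 100-DTU pool size is the example from Garcia's quote.

```python
# Back-of-the-envelope cost comparison using the article's quoted prices:
# 100 DTUs on a single P3-tier database versus the same 100 DTUs shared
# across 10 databases in an elastic pool at the S2 per-DTU rate.

P3_PER_DTU = 4.65   # $/DTU/month, scale-up price quoted in the article
S2_PER_DTU = 1.50   # $/DTU/month, scale-out price quoted in the article
DTUS = 100

scale_up_cost = round(DTUS * P3_PER_DTU, 2)    # one big database
scale_out_cost = round(DTUS * S2_PER_DTU, 2)   # shared elastic pool

print(scale_up_cost, scale_out_cost)
# -> 465.0 150.0
print(f"savings: ${scale_up_cost - scale_out_cost:.2f}/month")
# -> savings: $315.00/month
```

At the article's rates, pooling the same DTU capacity cuts the monthly bill to roughly a third, which is the economic case for Elastic Scale.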

“They are all able to use that shared pool of resources rather than being constrained,” said Garcia. “It gets along with the idea that cloud allows you to draw resources from anywhere as needed.”

Currently in preview, Elastic Scale simplifies the scaling of data tiers from just a few to thousands of database shards via .NET client libraries and Azure service templates. High-volume OLTP, multi-tenant SaaS, and continuous data collection from telemetry and Internet of Things applications are likely use cases.

3. Preview pricing
The aforementioned Elastic Scale is just one example of many new features available at preview pricing, which may be free or 50% less than the general availability pricing. Taking advantage of preview pricing lets you play with new features, stay ahead of the technical curve, save money, and possibly beat the competition by having production-ready deployments when the features go live for all customers. A list of preview services for Azure is available here.

4. The Azure Portal
Also in preview is the Azure Portal, a new dashboard for accessing IaaS and PaaS deployments.

“I was not happy with the portal at first, but now it’s growing on me,” said Hanselman in his June 2015 TechDays UK keynote. “If you double-click on the background, you can change the theme to dark. This made me so happy.”

Right-clicking on a given window pins it to the start board. Charts can be edited to show, say, CPU percentage, pricing, disk usage and more. “Don’t discount the portal quite yet; it’s fantastic,” said Hanselman.

5. Keyboard shortcuts
Every computer user knows the mouse can be deadly, in terms of ergonomics and efficiency. The best economy of movement is achieved with keyboard shortcuts. Launched with version 5.0 of the Azure Portal, the shortcut menu can be accessed by hitting shift + ?. Luckily, there aren’t too many to memorize. You’ll want to use these and more:

Hubs (left menu) shortcuts:
H – show startboard
N – open Notifications hub
A – open Active Journeys hub (a Journey is the current opened group of blades; a blade is card/tab/subpage that contains some group of tiles, e.g. website properties or analytics)
/ – open Browse/Search hub
B – open Billing hub
C – open Create/New hub

Changing focus between blades shortcuts:
J – move focus to the previous blade
K – move focus to the next blade
F – move focus to the first blade
L – move focus to the last blade

6. Azure resource manager
The complexity of managing websites, virtual machines and databases just got a little simpler with the addition of the resource manager in the new Azure Portal. Group and view resources (such as an instance of Application Insights along with a Web application and SQL database) as a single resource group. Deployment templates in Visual Studio are also aided by IntelliSense that surfaces new resource providers and template language functions to you as you write deployment templates, avoiding pesky naming errors.

“Azure resource manager has an exciting, growing community,” said Sanders. “We’re seeing in templates for the resource manager—starting about six months ago—a pretty exciting pickup in the community. We seeded GitHub with a set of these templates and put them all out fully open source. Now we have over 140 contributors and more than 200 templates available. It’s a delightful outcome to see this service that we didn’t do a huge amount of coverage on get this kind of response.”

7. Scale Sets
Do you have Big Data or container-based workloads? You may want to orchestrate these complex, large-scale deployments with Scale Sets. Also in public preview, Azure Virtual Machine Scale Sets let you manage and configure virtual machines as a set of identical Windows or Linux images.

“A customer can come in and say, ‘I want VM Scale Sets in groups of 10, and I want to configure them all with a tool like Chef or Puppet,’ ” said Sanders. “The other aspect with Scale Sets that’s exciting is the deep integration with Azure Insights autoscale, which restricts cost and spending by only using the compute resources you need, responding to traffic changes.”

8. Security
A Denial of Service attack can hurt your wallet, hamstring your business and harm your customers. Like other cloud providers, Microsoft aims to share its security knowledge as well as build in basic protections. Azure Security Center’s view lets you set policies across all your subscriptions and monitor security configurations. The good news is, the days of accidentally raising DDOS flags by testing or polling your own app are over, thanks to cutting-edge threat intelligence around malformed requests and traffic sources.

9. Site extensions
Another powerful way to add custom administration features to your Web apps is with site extensions. Write them yourself or choose from the new Site Extensions Gallery. These live on the SCM (site control manager) site for administration and debugging that runs over SSL and is created with every Azure website. The URL for your SCM site is the hostname plus “scm”. Thus, “sdtimes.azurewebsites.net” would have a corresponding SCM site at “sdtimes.scm.azurewebsites.net”.
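The naming rule is mechanical enough to express in a few lines. A small sketch (the helper name is ours, not part of any Azure SDK):

```python
def scm_url(hostname):
    """Derive the SCM site URL for an Azure website hostname.

    The SCM site inserts "scm" after the first label of the hostname,
    e.g. "sdtimes.azurewebsites.net" -> "sdtimes.scm.azurewebsites.net".
    """
    site, _, domain = hostname.partition(".")
    return f"https://{site}.scm.{domain}"

print(scm_url("sdtimes.azurewebsites.net"))
# https://sdtimes.scm.azurewebsites.net
```

Note the `https://` scheme: as mentioned above, the SCM site runs over SSL.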

10. PowerShell cmdlets
Whether you want to clean up a deployment where you have some extra VHDs and VMs lying around, provision VMs, set up cross-premises networks, or other production tasks, you’ll enjoy the Unix-like scripting power of PowerShell and the new PowerShell cmdlets. As of the November 2015 update of the Azure SDK 2.8 for Visual Studio 2013 and Visual Studio 2015, the PowerShell script for deploying Azure Resource Manager templates now works with PowerShell cmdlets. Find scripting solutions already crafted for you in the PowerShell Gallery.

11. Service fabric
Riding the microservices revolution, Service Fabric is Azure’s platform for assembling cloud applications from a large collection of services. “Service fabric offers a platform that runs on Azure but also on-premises,” said Sanders. “This is a platform for deploying, managing and maintaining microservices. Discovery is handled for you, and it supports stateful and stateless microservices.”

12. Docker
The explosion of ways you can tinker with cloud resources, from remote desktops and SSH to portal shortcuts, has only just begun. According to Hanselman, the microservices revolution means there will soon be even more options to choose from.

“Simply stated, if I’ve got a tiny little 10MB PHP app sitting inside of a 5GB VHD, that’s a lot of VHD, a lot of virtual machine for a small Web application,” he said. “Does it really need that weight? That much security and isolation? It just needs to be in a container, and it needs to be deployable in a reliable way. Docker will provide that.”

“The excitement around Docker is very real. It makes it incredibly easy to deploy in ways that have never been possible,” said Sanders, who notes new integration of Docker support into Visual Studio and the Azure marketplace.

“My biggest tip and trick with Docker containers is just to deploy one. If you’ve never done anything with Docker, there’s a way to quickly deploy with a fully packaged VM and Docker in the Azure marketplace. No bringing down of the Docker engine, no pulling down the hub.”

13. Azure DevTest Labs
How do you avoid using up all your MSDN credit while testing on Azure? The preview of Azure DevTest Labs lets you spin up Windows and Linux environments to deploy and test applications while avoiding cost overruns.

“With Azure Dev/Test labs, the idea is that a lot of times developers have to wait for someone to spin up labs for them,” said Nebbia’s Garcia. “This allows you to spin up environments much quicker. You can choose for it to run a maximum of eight hours, and after that it gets shut down. It’s a quick way to provision environments but avoid the problem of leaving it up and running. You can push a button and have a whole sandbox.”

14. Application Insights
“Application Insights allows you to dig down and find the root cause of any application issues and understand how people are using the application,” said Garcia. “I’ve been using it for a year and a half…as a Microsoft MVP.” For example, he uses it for availability testing from different geographic locations, either as a static test that checks a single page, or as a test of dynamic application flow.

15. Kudu, CloudBerry and Sendy
The Kudu open-source project is a useful troubleshooting tool and client-side process explorer for capturing memory dumps or looking at deployment. It’s also a site extension and welcomes community participation.

Another useful freeware tool is CloudBerry Explorer for Azure Blob Storage, which offers a file manager-style user interface to Azure Blob Storage.

If you’re already mucking around in the cloud, you may have e-mail update needs that can be met by Sendy or similar tools. Sendy was designed to work with Amazon Simple Email Service, but can be adapted for Azure as well. The cost savings versus a hosted e-mail solution such as MailChimp can be enormous.

16. Remote debugging
In its September 2015 white paper, “Practical Guide to Platform-as-a-Service Version 1.0,” the Cloud Standards Customer Council notes that no PaaS worth its salt should be without remote debug capabilities. “Application developers should have access to tools that enable them to control activities in the PaaS—for example, uploading (‘pushing’) application code, binding services to applications, controlling application configuration, starting and stopping application instances,” it said.

“Such capabilities should be provided in a way that fits well with the other tools used by the developer—command-line tools, graphical tools, embedded components for development environments. Ideally these tools should work via an API that is exposed by the PaaS system—cloud service customers should look for these APIs and assure themselves that the API can be used by a variety of custom tooling code.”

Remote debugging with Visual Studio fits the bill: Developers interact with cloud applications as if they were on-premises. Best used with Visual Studio 2013, remote debugging lets you manipulate memory, set breakpoints, and step through code—with the caveat that breaking a running process could break your live website. Save this one for pre-production sites.

17. Performance testing
Performance testing, another public preview that is currently free to use, lets you generate thousands of virtual users from around the world and test your application against the load.

“We started using performance testing in the past six months,” said Garcia. “If you spin up a Web application, you’re able to do a performance test right from Azure, right in the cloud. Before, it was more on the Visual Studio side. So I can see what it looks like if 1,000 people hit my app at once. It’s very useful in knowing how to scale the application: We can have fewer servers, but make them stronger by adding this performance testing feature right within the Azure portal when you first launch an application.”
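The core idea behind the feature—many virtual users hitting the app at once—can be illustrated with a toy sketch. This is not the Azure service itself; the request handler below is a local stand-in for a real HTTP call:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    # Stand-in for an HTTP request to the app under test;
    # a real test would issue a request and return its status code.
    return 200

def run_load_test(virtual_users):
    """Fire `virtual_users` concurrent requests and tally successes."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(handle_request, range(virtual_users)))
    ok = sum(1 for status in results if status == 200)
    return ok, virtual_users - ok

ok, failed = run_load_test(1000)
print(f"{ok} succeeded, {failed} failed")
```

The Azure feature does the equivalent at scale, from real geographic locations, against your deployed endpoint—which is what makes it useful for the sizing decisions Garcia describes.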

18. Easy ROI: Lift and shift
Want an instant return on your cloud investment? Eliminate idle servers that only handle periodic loads. “This is something people forget about when they’re thinking about the cloud,” said Hanselman. “Azure storage, that’s an infinite disk that’s out there. You probably have a machine sitting under your desk and it’s got a VM, running maybe an expense reporting system. It’s something that you need to lift and shift into the cloud. There are migration tools that can help you. Literally, it’s Hyper-V in the cloud, but that’s only the most basic way of using Azure. So, Step 0: Lift and shift. Then start thinking about other ways to exploit stuff.”

19. Developer services marketplace
Before you reinvent speech recognition, check the Developer Services Marketplace for free and paid ways to extend functionality, turbo-charge development, and manage cloud deployments with certified Azure tools such as Iron.io for event-driven computing, or face APIs from Project Oxford.

20. Ride the IoT wave
No set of tips would be complete without instructions on how to program the proverbial light bulb. These days, Internet of Things projects are everywhere. Hanselman, a type 1 diabetic, movingly demonstrated in a November 2015 keynote video how he tracks his blood sugar and insulin pump in the cloud with Azure technologies.

Microsoft Azure IoT Hub offers SDKs, management and security solutions to harness a plethora of IoT devices, and once the data is collected, there are new machine-learning tools available to process data stored in HDInsight, Microsoft’s version of the Hadoop Big Data store. Have fun!

Start your engines
Use these 20 tips as a checklist for leveraging the vast Azure platform. The more you understand Azure, the more you see where it’s headed: “We’re seeing a blurring of IaaS vs. PaaS and starting to just see a compute platform,” said Sanders.

Redmond has clearly learned to embrace today’s polyglot cloud, and you can use that flexibility to your advantage. “Azure is not only a Windows server; they have Ubuntu, they have Linux servers, they have a new agreement with Red Hat, you can spin up an Oracle database… There’s so many different non-Microsoft technologies to choose from,” said Garcia.

The post 20 ways to build up your Azure deployment appeared first on SD Times.

]]>
https://sdtimes.com/azure/20-ways-to-build-up-your-azure-deployment/feed/ 7
ContainerX wants to bring containers to the data center https://sdtimes.com/containers/containerx-wants-to-bring-containers-to-the-data-center/ Mon, 23 Nov 2015 20:41:57 +0000 https://sdtimes.com/?p=16054 Docker has simplified the way developers work with containers, but one company thinks containers have the potential to go beyond developers to enterprise IT. ContainerX, founded by veterans from Citrix, Microsoft and VMware, is a startup aimed at transforming containers from a development tool to the building blocks of the next generation of data centers. … continue reading

The post ContainerX wants to bring containers to the data center appeared first on SD Times.

]]>
Docker has simplified the way developers work with containers, but one company thinks containers have the potential to go beyond developers to enterprise IT. ContainerX, founded by veterans from Citrix, Microsoft and VMware, is a startup aimed at transforming containers from a development tool to the building blocks of the next generation of data centers.

“ContainerX is like vSphere for containers,” said Kiran Kamity, CEO of ContainerX. “It is a ready-to-go container infrastructure platform that is designed for enterprise IT, where developers can come in and self-service using [the] Docker command line.”

Bringing containers to the data center would eliminate the need for virtual machines (VMs) there, according to Kamity. With containers, users will get a sense of application agility and be able to move applications from development to IT smoothly, he explained.

(Related: Docker releases hardware signing of container images)

“If containers were to become a first-class citizen, then the agility problem is solved because the moment a developer is done with the application development, a container is created, and that container can be easily launched on any infrastructure platform by the IT admin, and the deployment time can be cut down from weeks to hours if that,” said Kamity.

Another problem with virtual machines is the number of VMs an organization has to maintain. For example, if a large organization has 10,000 VMs, then those are 10,000 copies of operating systems it has to maintain, update, patch and secure, according to Kamity.

“The data center as we know it today is going through a massive change,” said Kamity. “With traditional virtual machine infrastructures, enterprise IT faces two fundamental issues: operating system management, and the lack of application-level agility. Containers have the potential to address both of these critical issues and therefore become a fundamental building block of the data center of the future. ContainerX is a plug-and-play container platform that is designed specifically for IT admins who are not looking for a DIY project. It is transforming the enterprise journey, shaping data centers of the future and furthering the promise of containers.”

While the goal is to replace VMs, Kamity noted that they will still have a place in data centers, and a percentage of them will remain even if containers break through in the enterprise.

Other features of the ContainerX platform include support for Linux, Windows, bare-metal and virtual machines, and private and public cloud environments; enterprise-grade management; a ready-to-go package; protection from rogue containers crashing; container pools with limits and access controls for CPU, memory, and network; and elastic container clusters.

“Containers are becoming a critically important tenet of the modern data center,” said Steve Herrod, former VMware CTO and managing director of General Catalyst, a venture capital firm. “The ContainerX team, product and vision are impressive, and they are poised to make a large impact in this industry transition.”

ContainerX is currently in private beta, and the company expected it to be generally available by the first half of next year.

The post ContainerX wants to bring containers to the data center appeared first on SD Times.

]]>
Containers steal the show at VMworld https://sdtimes.com/big-data/containers-steal-the-show-at-vmworld/ https://sdtimes.com/big-data/containers-steal-the-show-at-vmworld/#comments Thu, 03 Sep 2015 18:40:14 +0000 https://sdtimes.com/?p=14748 Containers were the only thing anyone could talk about at VMworld this week, and yet the discussions were not about how great they are. Rather, the discussions were about, “How do we use this stuff in an enterprise?” VMware has a very distinct answer: Run the container inside of a virtual machine. And it is … continue reading

The post Containers steal the show at VMworld appeared first on SD Times.

]]>
Containers were the only thing anyone could talk about at VMworld this week, and yet the discussions were not about how great they are. Rather, the discussions were about, “How do we use this stuff in an enterprise?”

VMware has a very distinct answer: Run the container inside of a virtual machine. And it is a great stopgap answer while the container systems of the world mature, add security controls, and gain governance capabilities.

(Related: Other container news at VMworld)

Kit Colbert, CTO of cloud-native apps at VMware, said that security is a major concern already for containers. “We’re seeing a lot of exploits come out of the woodwork around [the basic container]. That might calm down over time. The challenge with Linux containers is that it is a very wide interface and it changes. Then there are these issues around container identity,” he said.

Colbert said the VMware team is working with Docker to solve some of these problems. “Docker has support in Notary to solve that. We’re working with the Notary guys, and working on Project Lightwave, which does container authentication and certificate management.”

That being said, Colbert added that the container capabilities introduced at VMworld will help in the shorter term. “What we do offer with vSphere Integrated Containers is you can run that wrapped inside a VM. It also enables IT to validate and audit. A lot of tooling they’ve built out around VMs can be leveraged in the vSphere containers model.”

In the longer run, however, there is at least one detractor saying that running containers inside a virtual machine misses the point entirely. Late last year, Joyent began releasing the source code for its Smart DataCenter Project, the software that runs its hosting platform.

Bryan Cantrill, CTO of Joyent, said that this platform, known as Joyent Triton, uses Docker directly and effectively eliminates the need to install a Linux distro or run a virtual machine.

“We run Docker, and we virtualize the Docker CLI endpoint,” he said. “The entire datacenter looks like a single Docker host. You’re no longer paying the VM tax. Our belief is that containers should be secure. When you do that and solve the security problem, and when you solve the network problem, you can truly [join] the container revolution. Containers are stuck in the birth canal because they are on the VM substrate.”

In the past, Joyent had been tied to the Solaris model of hosting by using DTrace, ZFS and Zones. While these are all still included in the Joyent stack, Cantrill said that the company realized about a year and a half ago that it had to find a way to allow users to run unmodified Linux binaries on this decidedly Solaris-like infrastructure.

As a result, he said, Joyent has been able to bring full Docker application-hosting support to its platform, as well as eliminate the need for a virtual machine entirely. “We’ve seen what running containers on the metal does to your infrastructure,” said Cantrill. “Now, with Triton, you don’t have to pick between Docker running on the metal, or Linux on the metal. You can do that in the cloud or on premises.”

The post Containers steal the show at VMworld appeared first on SD Times.

]]>
https://sdtimes.com/big-data/containers-steal-the-show-at-vmworld/feed/ 2
Microsoft introduces Azure Data Catalog; releases Azure Batch https://sdtimes.com/azure/microsoft-introduces-azure-data-catalog-releases-azure-batch/ https://sdtimes.com/azure/microsoft-introduces-azure-data-catalog-releases-azure-batch/#comments Fri, 10 Jul 2015 14:59:55 +0000 https://sdtimes.com/?p=13743 Microsoft has announced a public preview of Azure Data Catalog, an enterprise metadata catalog and portal, along with the general availability of the Azure Batch compute pool management service. Azure Data Catalog is a fully managed cloud service for storing, describing, indexing and providing information on accessing any registered data source. The catalog enables data … continue reading

The post Microsoft introduces Azure Data Catalog; releases Azure Batch appeared first on SD Times.

]]>
Microsoft has announced a public preview of Azure Data Catalog, an enterprise metadata catalog and portal, along with the general availability of the Azure Batch compute pool management service.

Azure Data Catalog is a fully managed cloud service for storing, describing, indexing and providing information on accessing any registered data source. The catalog enables data source self-discovery, using a crowdsourced approach that allows a data analyst, developer or other professional using the service to register the data sources they use and log their structural metadata, while other users can annotate that data.

“Azure Data Catalog bridges the gap between IT and the business—it encourages the community of data producers, data consumers and data experts to share their business knowledge while still allowing IT to maintain control and oversight over all the data sources in their constantly evolving systems,” wrote Joseph Sirosh, Microsoft corporate vice president of Information Management and Machine Learning, in a blog post.

The metadata portal also uses search and filtering parameters for data discovery, and allows developers to connect the data to the Big Data tool of their choice. More details about Azure Data Catalog are available in Microsoft’s announcement video.

Microsoft also announced the general availability of Azure Batch, the company’s job scheduling and compute pool management service for scaling compute-intensive workloads to numerous virtual machines without manual infrastructure management.

Alex Sutton, Microsoft’s group program manager of HPC and Big Compute, announced the release in a blog post.

“In a world of rapidly evolving products and fierce competition, our goal is to deliver a service that lets you focus more on your applications and less on plumbing,” wrote Sutton. “As a managed service, Azure Batch handles the heavy lifting of provisioning, monitoring and scaling virtual machines. You create an Azure Batch account, within minutes have the resources you require, and can scale up and down as the volume of jobs and tasks change. Batch helps you handle spikes; you pay for what you use.”

The Azure Batch GA release comes with a price change for free resource management and job scheduling capabilities within the service, and a new API unifying the Batch and Batch Apps namespaces released at preview. Azure Batch is available here.

The post Microsoft introduces Azure Data Catalog; releases Azure Batch appeared first on SD Times.

]]>
https://sdtimes.com/azure/microsoft-introduces-azure-data-catalog-releases-azure-batch/feed/ 9