2019 Predictions: What’s Next for Software Defined Storage?

As we head into the heart of predictions season, the tech prophets are working overtime. There are so many streams of emerging technology — some of them converging into rapids — that we all need to arm ourselves with some foresight and guidance for navigating our way through the rush of data and possibilities. 

The first stop on the journey is cloud strategy, namely standardization of orchestration and commoditization of cloud resources. As your digital business grows in scale and complexity, automated capabilities will be critical to maintaining control and visibility. In 2019, you should be figuring out how to optimize savings and efficiency by leveraging the commoditization of hardware, managed services, security solutions, and cloud platforms — but this will only work if you have a robust, overarching orchestration solution in place. 

Original Link

Kubernetes Demystified: Using LXCFS to Improve Container Resource Visibility

This series of articles explores some of the common problems enterprise customers encounter when using Kubernetes. This second article in the series addresses the problem of legacy applications that cannot identify container resource restrictions in Docker and Kubernetes environments.

Linux uses cgroups to implement container resource restrictions, but the host’s procfs /proc directory is still mounted by default in the container. This directory includes meminfo, cpuinfo, stat, uptime, and other resource information. Some monitoring tools, such as "free" and "top," and legacy applications still acquire resource configuration and usage information from these files. When they run in a container, they read the host’s resource status, which leads to errors and inconveniences.
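
A quick way to see the problem (a minimal illustration, not taken from the article) is to run a resource-limited container and check what the standard tools report:

# The container is limited to 256 MB, but "free" reads the host's /proc/meminfo:
docker run --rm -m 256m busybox free -m

# Likewise, cgroup CPU limits are invisible to anything that parses /proc/cpuinfo:
docker run --rm --cpus=1 busybox grep -c ^processor /proc/cpuinfo

LXCFS addresses this by providing container-aware versions of these files that can be mounted over /proc inside the container.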

Original Link

Running SQL Server on a Linux Container Using Docker for Windows

Recently, I have been investigating what all the fuss is about with Docker, and it has been well worth my time, as Docker is pretty awesome for automating stuff.

My development environment has typically required installing SQL Server. SQL Server is a bit of a beast with lots of options, and it takes time to set up the way you want.
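
For reference, spinning up SQL Server in a Linux container boils down to a single command once Docker for Windows is switched to Linux containers. The image tag and password below are illustrative, not taken from the article:

docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" \
  -e "SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2017-latest

You can then connect to localhost,1433 from SQL Server Management Studio or sqlcmd as usual.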

Original Link

Kubernetes Demystified: Solving Service Dependencies

This series of articles explores some of the common problems enterprise customers encounter when using Kubernetes. One question frequently asked by Container Service customers is, "How do I handle dependencies between services?"

In applications, component dependencies refer to middleware services and business services. In traditional software deployment methods, application startup and stop tasks must be completed in a specific order.
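
One common Kubernetes pattern for this (a sketch of the general technique, not necessarily the approach the article settles on) is an init container that blocks the application from starting until its dependency is reachable:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  initContainers:
  - name: wait-for-mysql
    image: busybox
    # Keep probing until the "mysql" Service resolves in cluster DNS.
    command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql; sleep 2; done']
  containers:
  - name: web
    image: nginx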

Original Link

The Journey to Kubernetes

Have you wondered why Kubernetes is so popular? Discover why Kubernetes is the clear market leader in the container orchestration space in our latest addition to the Weaveworks Kubernetes library.

On this page, we provide an overview of what Kubernetes is and how it enables fast-growing applications to scale quickly. The different installation options available, as well as a summary of how companies are running it today, with many hand-curated links to more in-depth information, are all discussed in these pages.

Original Link

Spring IoC Container With XML

What Is the IoC Container?

The org.springframework.context.ApplicationContext interface represents the Spring IoC container. This container instantiates, configures, and assembles beans by using externally provided configuration metadata.

ApplicationContext and BeanFactory serve the same basic purpose, but the ApplicationContext adds more enterprise-related functionality. In short, the ApplicationContext is a superset of the BeanFactory.

Original Link

Use Docker Instead of Kubernetes

Today we are all talking about containers and container-based infrastructure. But what is this container technology? And how does it solve today's problems?

I am using containers myself and, of course, I am fascinated by this server technology. Containers can really simplify things. After more than 20 years of building server applications, I have experienced many problems very closely.

Original Link

Multiple MySQL Databases With One MySQL Container

Problem Statement

I want to create 2 databases inside one MySQL container and give the user of the first database full access to the 2nd database. With the official MySQL image, one can easily create a database and allow a user access to that database. However, provisioning a 2nd database is not as straightforward.

Solution

Docker images work on the concept of layers. Each new command, so to speak, creates a new layer, and herein lies our solution.
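
As a rough sketch of that idea (file names and credentials here are illustrative): the official MySQL image executes any scripts it finds in /docker-entrypoint-initdb.d on first startup, so an extra image layer that copies a script there can create the second database and grant access to the existing user.

# Dockerfile
FROM mysql:5.7
COPY ./init-second-db.sql /docker-entrypoint-initdb.d/

-- init-second-db.sql
CREATE DATABASE IF NOT EXISTS seconddb;
GRANT ALL PRIVILEGES ON seconddb.* TO 'appuser'@'%';

Here 'appuser' is assumed to be the user created through the image's MYSQL_USER environment variable.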

Original Link

Best Practices of ECS Container Network Multi-NIC Solution

Container-based virtualization is a type of virtualization technology. Compared with a virtual machine (VM), a container is lighter and more convenient to deploy. Docker is currently a mainstream container engine, which supports platforms such as Linux and Windows, as well as mainstream Docker orchestration systems such as Kubernetes (K8S), Swarm, and Rocket (RKT). Common container networks support multiple models such as Bridge, Overlay, Host, and user-defined networks. Systems such as K8S rely on the Container Network Interface (CNI) plug-ins for network management. Commonly used CNI plug-ins include Calico and Flannel.

This article will introduce the basics of container networks. Based on Alibaba Cloud’s Elastic Network Interface (ENI) technology, the ECS container network features high performance, easy deployment and maintenance, strong isolation, and high security.

Original Link

PouchContainer RingBuffer Log Practices

PouchContainer is an open-source container technology of Alibaba, which helps enterprises containerize the existing services and enables reliable isolation. PouchContainer is committed to offering new and reliable container technology. Apart from managing service life cycles, PouchContainer is also used to collect logs. This article describes the log data streams of PouchContainer, analyzes the reasons for introducing the non-blocking log buffer, and illustrates the practices of the non-blocking log buffer in Golang.

PouchContainer Log Data Streams

Currently, PouchContainer creates and starts a container using Containerd. The modules involved are shown in the following figure. Without the communication feature of a daemon, a runtime is like a process. To better manage a runtime, the Shim service is introduced between Containerd and Runtime. The Shim service not only manages the life cycle of a runtime but also forwards the standard input/output data of a runtime, namely log data generated by a container.
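
To make the idea of a non-blocking log buffer concrete, here is a minimal Go sketch of a ring buffer that drops the oldest entry rather than blocking the producer. This is an illustration of the concept only, not PouchContainer's actual implementation:

package main

import "fmt"

// ringBuffer never blocks the writer: when full, it evicts the oldest entry.
type ringBuffer struct {
	ch chan []byte
}

func newRingBuffer(size int) *ringBuffer {
	return &ringBuffer{ch: make(chan []byte, size)}
}

// Push tries to enqueue; if the buffer is full, it drops the oldest message and retries.
func (r *ringBuffer) Push(msg []byte) {
	for {
		select {
		case r.ch <- msg:
			return
		default:
			select {
			case <-r.ch: // evict the oldest entry
			default:
			}
		}
	}
}

// Pop blocks until a message is available.
func (r *ringBuffer) Pop() []byte { return <-r.ch }

func main() {
	buf := newRingBuffer(2)
	buf.Push([]byte("line 1"))
	buf.Push([]byte("line 2"))
	buf.Push([]byte("line 3")) // "line 1" is dropped instead of blocking the stdout stream
	fmt.Println(string(buf.Pop()), "/", string(buf.Pop()))
}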

Original Link

Spring Sweets: Dockerize Spring Boot Application With Jib

Jib is an open-source Java library from Google designed for creating Docker images for Java applications. Jib can be used as a Maven or Gradle plugin in our Spring Boot projects. One of the nice features of Jib is that it puts our classes, resources, and dependency libraries into separate layers of the Docker image. This means that when only class files have changed, the classes layer is rebuilt, but the others remain the same. Therefore, the creation of a Docker image with our Spring Boot application is also very fast (after the first creation). Also, the Maven and Gradle plugins have sensible defaults, like using the project name and version as the image name, so we don't have to configure anything in our build tool. Jib does, however, provide options to override the defaults, for example, to change the JVM options passed on to the application.
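
As a rough illustration (not the article's own listing): once the com.google.cloud.tools.jib plugin is applied in build.gradle, an image can be built and pushed without writing a Dockerfile or running a Docker daemon. The registry and image name below are placeholders:

./gradlew jib --image=registry.example.com/demo/spring-boot-app:0.1.0

# Or build into the local Docker daemon instead of pushing to a registry:
./gradlew jibDockerBuild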

Let’s see Jib in action in this sample Spring Boot application. We will use Gradle as a build tool with the following Spring Boot application:

Original Link

Telemetry Data Collection, Query, and Visualization with Istio on Alibaba Cloud Container Service for Kubernetes

In our previous articles, we have demonstrated how to deploy an application in the Istio environment with an official example, as well as explored how to configure intelligent routing and distributed tracing with Istio.

This article continues to use this example to explain how to use the Istio functions of collecting, querying, and visualizing the telemetry data.

Original Link

Waves of ‘Cloud-Native’ Transformations

This was originally published at my personal publication.

Enterprise CIOs have been working on digitally transforming their IT infrastructure for a while now. Such digital transformations have traditionally been based on virtualization and Software-as-a-Service (or SaaS) offerings. With the development of cloud computing/container technologies, transformational CIOs are looking into becoming cloud-native as well. But what is Cloud-Native?

Original Link

Understanding the Kubelet Core Execution Frame

Kubelet is the node agent in a Kubernetes cluster, and is responsible for the Pod lifecycle management on the local node. Kubelet first obtains the Pod configurations assigned to the local node, and then invokes the bottom-layer container runtime, such as Docker or PouchContainer, based on the obtained configurations to create Pods. Then Kubelet monitors the Pods, ensuring that all Pods on the node run in the expected state. This article analyzes the previous process using the Kubelet source code.

Obtaining Pod Configurations

Kubelet can obtain Pod configurations required by the local node in multiple ways. The most important way is Apiserver. Kubelet can also obtain the Pod configurations by specifying the file directory or accessing the specified HTTP port. Kubelet periodically accesses the directory or HTTP port to obtain Pod configuration updates and adjust the Pod running status on the local node.
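
A brief sketch of what those configuration sources look like in practice (paths and the URL are illustrative; the flag names follow upstream Kubernetes, where some have since been superseded by the kubelet config file):

# Watch the Apiserver for Pods scheduled to this node:
kubelet --kubeconfig=/etc/kubernetes/kubelet.conf

# Additionally read static Pod manifests from a directory the kubelet polls:
kubelet --pod-manifest-path=/etc/kubernetes/manifests

# Or poll an HTTP endpoint for Pod manifests:
kubelet --manifest-url=http://config.example.com/pods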

Original Link

Alibaba Cloud Toolbox — Running CLI in Docker

The Alibaba Cloud Command Line Interface (CLI) is a unified tool to manage your Alibaba Cloud services. With just one tool to download and configure, you can control multiple Alibaba Cloud services from the command line and automate them through scripts.

The CLI uses the SDK of various products internally to achieve the intended results. This installation can be hard to maintain considering the frequent releases of new SDK versions. This can also be cumbersome if you don’t have access to a machine with the prerequisites installed.
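
That is where a containerized CLI helps: the SDK dependencies live inside the image, and only the credentials directory needs to be mounted from the host. The image name below is a placeholder, and it is assumed the CLI's configuration lives in ~/.aliyun; the aliyun subcommand itself is the standard CLI syntax:

docker run --rm -it \
  -v $HOME/.aliyun:/root/.aliyun \
  your-registry/aliyun-cli:latest \
  aliyun ecs DescribeRegions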

Original Link

4 Use Cases of Serverless Architecture

Serverless architectures have rapidly emerged as a new technology concept in recent years. Using this architecture, developers can create a variety of applications for various industries. Many enterprises have already started adopting serverless products into their solutions. Serverless solutions help you build light, highly-flexible, and stateless applications easily. In this article, we will explore four common applications of serverless with Alibaba Cloud Function Compute.

Before we begin, let’s familiarize ourselves with the history and features of serverless architecture.

Original Link

How to Natively Compile Java Code for Better Startup Time

Microservices and serverless architectures are being implemented as part of the roadmap in most modern solution stacks. Given that Java is still the dominant language for business applications, the need for reducing the startup time for Java is becoming increasingly important. Serverless architectures are one such area that needs faster startup times, and applications hosted on container platforms such as Red Hat Openshift can benefit from both fast Java startup time and a smaller Docker image size.

Let’s see how GraalVM can be beneficial for Java-based programs in terms of speed and size improvements. Surely, these gains are not bound to containers or serverless architectures and can be applied to a variety of use cases.
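
As a minimal sketch (exact flags vary by GraalVM version, and the jar name is illustrative), ahead-of-time compilation with GraalVM's native-image tool looks like this:

# Build the application jar as usual, then compile it to a native executable:
mvn clean package
native-image -jar target/myapp.jar myapp

# The resulting binary starts in milliseconds and needs no JVM on the host:
./myapp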

Original Link

Difference Between BeanFactory and ApplicationContext in Spring

I see a lot of questions asking about the difference between BeanFactory and ApplicationContext.
Along with that, I get the question: should I use the former or the latter to get beans from the Spring container?

We previously talked about the Spring container here. Basically, these two interfaces supply the way to reach Spring beans from the container, but there are some significant differences.

Let’s take a look!

What Is a Spring Bean?

This is a very simple question that is often overcomplicated. Usually, Spring beans are Java objects that are managed by the Spring container.

Here is a simple Spring bean example:

package com.zoltanraffai;

public class HelloWorld {
   private String message;

   public void setMessage(String message) {
      this.message = message;
   }

   public void getMessage() {
      System.out.println("My Message : " + message);
   }
}

In the XML-based configuration, beans.xml supplies the metadata for the Spring container to manage the bean.

What Is the Spring Container?

The Spring container is responsible for instantiating, configuring, and assembling the Spring beans. Here is an example of how we configure our HelloWorld POJO for the IoC container:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

   <bean id="helloWorld" class="com.zoltanraffai.HelloWorld">
      <property name="message" value="Hello World!"/>
   </bean>

</beans>

Now it is managed by the Spring container. The only question is: how can we access it?

The Difference Between BeanFactory and ApplicationContext

The BeanFactory Interface

This is the root interface for accessing the Spring container. We access the Spring container and use Spring’s dependency injection functionality through this BeanFactory interface and its sub-interfaces.

Features:

  • Bean instantiation/wiring

It is important to mention that the BeanFactory interface only supports XML-based bean configuration. Usually, the implementations use lazy loading, which means that beans are only instantiated when we directly request them through the getBean() method.

The most commonly used implementation of the BeanFactory is the XmlBeanFactory class.

Here is an example of how to get a bean through the BeanFactory:

package com.zoltanraffai;

import org.springframework.core.io.ClassPathResource;
import org.springframework.beans.factory.xml.XmlBeanFactory;

public class HelloWorldApp {
   public static void main(String[] args) {
      XmlBeanFactory factory = new XmlBeanFactory(new ClassPathResource("beans.xml"));
      HelloWorld obj = (HelloWorld) factory.getBean("helloWorld");
      obj.getMessage();
   }
}

The ApplicationContext Interface

The ApplicationContext is the central interface within a Spring application that is used for providing configuration information to the application.

It extends the BeanFactory interface. Hence, the ApplicationContext includes all the functionality of the BeanFactory and much more! Its main function is to support the creation of big business applications.

Features:

  • Bean instantiation/wiring
  • Automatic BeanPostProcessor registration
  • Automatic BeanFactoryPostProcessor registration
  • Convenient MessageSource access (for i18n)
  • ApplicationEvent publication

The ApplicationContext supports both XML-based and annotation-based bean configuration. It uses eager loading, so every bean is instantiated after the ApplicationContext is started up.

Here is an example of the ApplicationContext usage:

package com.zoltanraffai;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class HelloWorldApp {
   public static void main(String[] args) {
      ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
      HelloWorld obj = (HelloWorld) context.getBean("helloWorld");
      obj.getMessage();
   }
}

Conclusion

The ApplicationContext includes all the functionality of the BeanFactory. It is generally recommended to use the former. There are some limited situations, such as in mobile applications, where memory consumption might be critical. In those scenarios, it would be justifiable to use the more lightweight BeanFactory. However, in most enterprise applications, the ApplicationContext is what you will want to use.

Original Link

Neo4j Launches Commercial Kubernetes Application on Google Cloud Platform Marketplace

On behalf of the Neo4j team, I am happy to announce that today we are introducing the availability of the Neo4j Graph Platform within a commercial Kubernetes application to all users of the Google Cloud Platform Marketplace.

This new offering provides customers with the ability to easily deploy Neo4j’s native graph database capabilities for Kubernetes directly into their GKE-hosted Kubernetes cluster.

The Neo4j Kubernetes application will be “Bring Your Own License” (BYOL). If you have a valid Neo4j Enterprise Edition license (including startup program licenses), the Neo4j application will be available to you.

Commercial Kubernetes applications can be deployed on-premise or even on other public clouds through the Google Cloud Platform Marketplace.

What This Means for Kubernetes Users

We’ve seen the Kubernetes user base growing substantially, and this application makes it easy for that community to launch Neo4j and take advantage of graph technology alongside any other workload they may use with Kubernetes.

Kubernetes customers are already building some of these same applications, and using Neo4j on Kubernetes, a user combines the graph capabilities of Neo4j alongside an existing application, such as an application that is generating recommendations by looking at the behavior of similar buyers, or a 360-degree customer view that uses a knowledge graph to help spot trends and opportunities.

GCP Marketplace + Neo4j

GCP Marketplace is based on a multi-cloud and hybrid-first philosophy, focused on giving Google Cloud partners and enterprise customers flexibility without lock-in. It also helps customers innovate by easily adopting new technologies from ISV partners, such as commercial Kubernetes applications, and allows companies to oversee the full lifecycle of a solution, from discovery through management.

As the ecosystem leader in graph databases, Neo4j has supported containerization technology, including Docker, for years. With this announcement, Kubernetes customers can now easily pair Neo4j with existing applications already running on their Kubernetes cluster or install other Kubernetes marketplace applications alongside Neo4j.

Original Link

AWS SAM Local: Test Serverless Application Locally

AWS Lambda is a serverless framework where we can just create an application, make an artifact out of it and upload it. Developers are free from configuring the infrastructure as it is handled by AWS.

Now the concern here is that whenever developers want to test the application after changing the code, they have to deploy the jar to the AWS Lambda console again and again.

It’s really a time-consuming task.

But don’t worry, AWS also provides AWS SAM Local, where we can run the jar locally, as it creates a local environment the same as the one AWS creates on the console.

Now developers can focus on their development and deploy the jar to the actual AWS environment whenever needed.

Now let’s see how it’s done.

Install SAM CLI

First, you need to set up the SAM CLI locally.

SAM CLI is a tool that allows faster, iterative development of your Lambda function code.

To use the SAM CLI, we need to install Docker first, as the SAM CLI uses a docker-lambda Docker image to run the jar. Using docker-lambda, you can invoke your Lambda function locally. You can find how to install Docker here.

Now you can download the latest version of the SAM CLI Debian package from here. Then:

sudo dpkg -i sam_0.2.11_linux_amd64.deb

Now let’s verify that the installation succeeded:

sam --version

Now we are done with the installation. Let’s start with an example.

You can clone the project from here.

First, we need to create a template.yaml file in the project root directory:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: AWS Lambda Sample Project
Resources:
  Products:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.handler.LambdaHandler
      CodeUri: ./target/lambda-project-1.0-SNAPSHOT.jar
      Runtime: java8
      Timeout: 300
      Environment:
        Variables:
          ENVIRONMENT: "test"
      Events:
        ListProducts:
          Type: Api
          Properties:
            Path: /lambda
            Method: post

This file contains all the information that is needed for the application to run. We specify the handler class that will be executed in the Lambda, as well as the name of the jar that we created earlier.

Then, we will create the jar to run the application:

mvn clean package

Now run the application by running the below command:

sam local start-api

You will get a URL from the above command, and you can use that URL to hit our Lambda with the following JSON body:

  { "message" : "Hello Coders" }  

You will get the required response:

{
  "status": "Success",
  "message": "Got Hello Coders!!"
}

Yeah, we got the response.

Thanks for your patience!!

Original Link

Executive Insights on the Current and Future State of Containers

This article is featured in the new DZone Guide to Containers: Development and Management. Get your free copy for more insightful articles, industry statistics, and more! 

To gather insights on the current and future state of containers, we talked to executives from 26 companies. Here’s who we spoke to:

Matt Chotin, Sr. Director of Technical Evangelism, AppDynamics

Jeff Jensen, CTO, Arundo Analytics

Jaime Ryan, Senior Director, Project Management and Strategy, CA Technologies

B.G. Goyal, V.P. of Engineering, Cavirin Systems

Tasha Drew, Product Manager,  Chef

James Strachan, Senior Architect, CloudBees

Jenks Gibbons, Enterprise Sales Engineer,  CloudPassage

Oj Ngo, CTO and Co-founder, DH2i

Anders Wallgren, CTO, Electric Cloud

Navin Ganeshan, Chief Product Officer, Gemini Data

Carsten Jacobsen, Developer Evangelist, Hyperwallet

Daniel Berg, Distinguished Engineer, Cloud Foundation Services, IBM

Jack Norris, S.V.P. Data and Applications,  MapR

Fei Huang, CEO, NeuVector

Ariff Kassam, V.P. Product, NuoDB

Bob Quillan, V.P. Container Group, Oracle

Sirish Raghuram, CEO and Co-founder, Platform9

Neil Cresswell, CEO/CTO, Portainer.io

Sheng Liang, Co-founder and CEO, and Shannon Williams, Co-founder and VP of Sales, Rancher Labs

Bill Mulligan, Container Success Orchestrator, RiseML

Martin Loewinger, Director of SaaS Operations, and Jonathan Parrilla, DevOps Engineer, SmartBear

Antony Edwards, CTO, Eggplant

Ady Degany, CTO, Velostrata 

Paul Dul, V.P. Product Marketing Cloud Native Applications, VMware

Mattius McLaughlin, Engineering Manager & Containers SME, xMatters

Roman Shoposhnik, Co-founder, Product & Strategy, Zededa

1. The two most important elements of orchestrating and deploying containers are security and the ability to maintain hybrid environments. On a container platform, there are four major elements that orchestration must address: networking, storage, security, and management. While you need Kubernetes (K8) to take advantage of Docker, you still need a compliance and security platform. Follow the CNCF pathway to containerization. 

You must be able to support containers and non-containers since it will take longer than you think to migrate from VMs to containers, and you will need visibility into and monitoring of both. You will also need to have a holistic view of the hybrid cloud landscape.

2. The languages, frameworks, and tools mentioned most frequently to orchestrate and deploy containers are Java, Docker, and Kubernetes with Go and Jenkins also mentioned with great frequency.

3. By far the most dramatic change in the orchestration and deployment of containers in the past year has been the growth and adoption of Kubernetes and companies’ desire to move out of dev/test directly into production. K8 came out and achieved broad adoption as the standard, with an ecosystem that everyone is supporting. Now that Docker has begun introducing K8 support into Docker Enterprise Edition, there is a lot more clarity on when and where to use each orchestrator for maximum efficiency. K8 is easier and has more documentation, more maturity, and better tooling. We’re now able to set up K8 in five or 10 minutes versus days or weeks.

4. Having a well-defined security policy and following best practices is most effective for securing containers. There are multiple layers to secure: 1) repository of container images; 2) cluster of nodes; 3) the container layer; 4) the deployment layer; and 5) container hosts.

A security solution must be able to integrate seamlessly, be lightweight, run distributed, be accurate, respond in real-time, and  operate at cloud scale. It must be automated since the orchestration model for application containers is highly automated.

Knowing what’s happening in your environment is paramount. Knowing what containers are running versus what containers you expect to be running is key to ensure you are not exposed to any crypto-jacking exploits where hackers gain access to an insecure Docker daemon and start bitcoin miners on your Docker hosts.

5. There are multiple verticals and two dozen applications of real-world problems being solved using containers. Ad media, financial services, gaming, healthcare, insurance, oil and gas, retail, and transportation are all using containers to manage AI/ML workloads, reduce infrastructure costs, improve scalability and responsiveness, increase security, and handle big data.

Philips is able to spin up containers to read MRIs for anywhere from 30 seconds to five minutes based on what’s needed versus “always on.” The GE Predix Platform uses containers to extend services offered to customers and call back to the platform. American Airlines used our application design process to go from design to production in a month to access on-prem services via a containerized cloud-native app. European financial services companies are moving to

6. The most common issue affecting the orchestration and deployment of containers is a lack of knowledge and experience. People don’t know what containers are or the advantages of using them. There’s a learning curve, with many instructions on how to use Docker and K8. People will find examples of K8 implementations but fail when they move to production because they have not used all of the features and lack the proper configuration for deployment.

There’s a shortage of engineers experienced with K8, Docker, and other tools, as well as DevOps professionals. You still need a developer team with container experience. K8 is architected as a modular set of components. It takes solution architects to design a system to manage containers. You need to define personnel needs, timelines, and technology for container management in the cloud.

Containers are not a panacea for bad code and a lack of an agile or DevOps methodology. You need to understand how the culture, processes, and tools will change and realize this will take years, not months.

7. The three most frequently expressed concerns over the current state of containers are: 1) security; 2) lack of education; and 3) expectations. People think containers are inherently secure when they are not. You must be proactive in following best practices to secure them. People need to be concerned that the frequency of attacks and exploits will continue to increase.

There is a lack of knowledge about container technology and a lack of education for end users, developers, and security professionals. Everyone needs to become more educated on the different aspects of security.

Understand what containers can and cannot do and how long it will take to see the benefits. Realize you will have mixed workloads between containers and VMs for a long time. Do not silo the two or it will make integration even more difficult. People see containers as being easy and straightforward. Things are moving much faster when you are deploying 10,000 containers versus five or 10 VMs.

8. Serverless and function-as-a-service (FaaS) are the future of containers. There will be the flexibility to run anywhere to go serverless with great tools that manage abstraction and the portability layer. There will be more innovation around serverless consumption models on top of K8 making K8 easier to use and hiding details from developers when they are not interested. Ultimately, organizations will skip containers and go directly to FaaS like Lambda.

9. Security and continuous and automated delivery are most frequently mentioned as things developers need to keep in mind regarding containers. Think about how your own apps are going to be secured and how the data is going to be secured in motion and at rest. Ensure your application is running in a secure, stable, and repeatable manner. Use the same logic in safely using resources as you did when security and efficiency were primary goals. Figure out how to use continuous delivery and automate the process to improve the quality and security of your applications and containers.

Original Link

Docker on Windows: Powering the 5 Biggest Changes in IT

This article is featured in the new DZone Guide to Containers: Development and Management. Get your free copy for more insightful articles, industry statistics, and more! 

Docker runs on Windows Server 2016 and Windows 10, powering Windows apps in lightweight, portable containers. You can take existing applications and run them in Windows Docker containers with no code changes, and you can take new applications and simplify the whole CI/CD process using Docker. Windows containers come with production support from Microsoft and Docker, so you can confidently make the move away from VMs to containers.

This article explains how Docker works on Windows, and it looks at how containers are helping Windows organizations meet the five biggest challenges in the IT industry — from migrating to the cloud to modernizing traditional applications and driving new innovation.

Docker on Windows

Docker started in Linux, using core features in the kernel for isolating workloads, and making them easy to use. Microsoft added container support to Windows in 2016, partnering with Docker engineers to bring the same user experience to Windows.

All the Docker concepts work the same in Windows and Linux: you package your app into a Docker image using a Dockerfile, distribute the app by pushing the image to a registry, and run your app by pulling the image and starting a container. The container uses the underlying operating system to run processes, so containers are fast and efficient, as well as being portable and easy to secure.
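
That workflow is the same handful of commands on both platforms (registry and image names below are placeholders):

# Package the app from a Dockerfile in the current directory:
docker build -t registry.example.com/demo/my-windows-app:1.0 .

# Distribute it by pushing the image to a registry:
docker push registry.example.com/demo/my-windows-app:1.0

# Run it anywhere by pulling the image and starting a container:
docker run -d -p 8080:80 registry.example.com/demo/my-windows-app:1.0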

Windows containers are based on images that Microsoft provides and maintains, releasing new versions each month with all the latest Windows updates. Nano Server is a lightweight server option for new applications using technologies like .NET Core, NodeJS, and Go. Windows Server Core is a full server runtime with support for existing workloads, including .NET Framework and Java apps.

The Docker experience on Windows is the same as on Linux: you install Docker on your Windows server, and you don’t need to install any other software. Any apps you run are run by Docker, and the application containers have all the dependencies they need. Containers use resources from the host server, but they’re isolated from each other. You can run different versions of the .NET Framework in different containers on the same server with no interference.

You can learn how to Dockerize Windows applications in two video series on YouTube. Modernizing .NET Apps With Docker  for IT Pros is aimed at operations teams. It shows you how to deploy existing apps to containers using artifacts like MSIs, and how to modernize the delivery and management of apps by integrating with the Docker platform without changing code.

Modernizing .NET Apps With Docker for Developers shows how to modernize the architecture of an existing application, break down a monolithic design into smaller services, run them in containers, and plug them together using Docker.

The support for new and old Windows applications in containers is an enabler for meeting the major challenges facing enterprise IT.

1. Cloud Migration

Running in the cloud should bring agility, flexibility, and cost savings. To get those benefits for existing Windows apps, you typically had to choose between two approaches: Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS).

IaaS means renting Windows VMs and deploying your apps in the same way as the datacenter. It means you can re-use your existing scripts and processes, but you take the inefficiencies of running virtual machines into the cloud, which means you have a whole suite of VMs to monitor, manage, and update. You can’t scale up quickly because VMs take minutes to start, and you can’t run apps with higher density so you’re unlikely to see significant cost benefits.

PaaS means using the full product suite of your cloud provider and matching the products to the features your app needs. In Azure that could mean using App Services, API Management, SQL Azure, and Service Bus queues. It’s likely to give you high agility, and using shared services means you should save on cost – but it’s going to take a project for every app you want to migrate. For each app, you’ll need to design a new architecture, and in many cases, you’ll need to change code.

Docker on Windows gives you a new option that combines the best of IaaS and PaaS: move your apps to containers first, and then run your containers in the cloud. It’s a much simpler option that uses your existing deployment artifacts without changing code, and it gives you high agility, low cost, and the flexibility to run the same apps in a hybrid cloud or multi-cloud scenario.

2.  Cloud-native Apps

Cloud-native applications are container-packaged, are dynamically managed, and use microservice architectures. Docker brings cloud-native approaches to building new Windows apps. You can deliver a project using a modern technology stack like .NET Core and NodeJS.

You can run those apps on Nano Server containers — which are smaller and faster than full Windows Server Core containers — but they run on Windows Server, so you don’t need your team to become Linux experts to start building cloud-native apps.

You can also integrate your Windows microservices with fantastic open-source projects which already run on Docker to add features and extend capabilities — software like NATS, which is the message queue project from the Cloud Native Computing Foundation (CNCF). It’s enterprise-grade software which comes packaged in a Windows Docker image, so you can drop it right into your solution with no complex configuration.

3.  Modernizing Traditional Apps

New cloud-native apps are an important part of innovating for the future, but most enterprises already have a much larger landscape of traditional applications. These are apps which typically have a monolithic architecture and manual deployment steps, which are complex and time-consuming to develop and test, and fragile to deploy.

Many Windows organizations are also managing apps across a range of operating systems – including Windows Server 2003 and 2008. It’s hard to maintain an application landscape that is running on diverse operating systems, which each have different toolsets and different capacities for automation.

The Docker container platform brings consistency to all containerized applications, old and new. The Windows Server Core Docker image — which Microsoft maintains  — has support for older application platforms, including .NET 2.0 and 32-bit apps. You can take a ten-year-old application and run it in a Windows container, without even having the original source code.

This is letting organizations migrate off older operating systems and move to a model where every app runs as a container. You can even run a hybrid Docker swarm cluster, using a mixture of Linux and Windows servers. Then you can run brand-new microservices apps in Linux containers alongside traditional .NET Framework apps on Windows containers on the same cluster, and use one set of tools to package, deploy, and manage all those apps.

4. Innovation

Technical innovation doesn’t end with cloud-native apps. Trends like IoT, machine learning, and serverless functions are all coming closer to mainstream, and they’re all made easier and more manageable by Docker.

One of the biggest concerns in IoT projects is how to manage the software running on the devices, and how to distribute and safely deploy updates. When you run your device software in containers, you can use the distribution mechanism built into Docker to deploy new updates without writing custom code.

Machine learning frameworks like TensorFlow tend to have a large list of dependencies, but running them in Docker makes it trivial to get started. You can even package your own Docker container image with your trained models and make them available publicly on Docker Hub or privately inside your organization. Then, anyone can start taking advantage of your trained models just by running containers.

Serverless is all about containers. Developers write code and the serverless framework takes care of packaging the code into a Docker image, and then it runs containers to execute the code when a trigger comes in, like an HTTP request or a message on a queue. Serverless isn’t just for cloud deployments, and there are great open-source projects like OpenWhisk, Nuclio, Fn, and OpenFaas that are powered by containers. You can run serverless functions on the same Docker cluster as your microservices and your traditional apps.

Gloo is a recently released project that lets you tie together portable serverless functions with proprietary serverless functions, microservices, and traditional apps. Gloo is a function gateway that can be extended with plugins and runs on Kubernetes. It is designed for microservice, monolithic, and serverless applications. For example, an enterprise developer could modernize a traditional application by containerizing it with Docker Enterprise Edition and then, using Gloo, start to add functionalities to it using microservices, portable functions using Fn, and proprietary functions using AWS Lambda.

5.  DevOps

The last big challenge facing enterprise IT is about cultural change and the move to DevOps, which should bring faster deployments and higher quality software. DevOps is usually positioned as people and process change, but it’s hard to make big changes unless you underpin them with technology change.

Moving to Docker helps drive the change to DevOps, even for teams currently building and deploying Windows applications using older technologies. Everything in the Docker container platform is automated, which gets you fast delivery and reliable deployments and rollbacks. And the key artifacts — Dockerfiles and Docker Compose files — become the joint responsibility of developers and IT Pros.

Having teams working on the same technology and speaking the same language is a great way to break down barriers. People are excited by Docker, too. It’s an interesting, powerful new technology which is easy to get started with and can improve practices from development to production. Teams adopting the Docker container platform are enthusiastic and that helps drive big changes like the move to DevOps.

Summing Up

Docker on Windows is today’s technology, and it’s an enabler for meeting the real challenges that enterprises face. You can get started with Docker very easily by packaging up your existing applications or adding Docker support to new projects. Then you can run your apps in containers which are the same in every environment, right up to production, where you have support from Microsoft and Docker.

Original Link

Best Practices for Multi-Cloud Kubernetes

This article is featured in the new DZone Guide to Containers: Development and Management. Get your free copy for more insightful articles, industry statistics, and more! 

The 2018 State of the Cloud Survey shows that 81% of enterprises use multiple clouds. Public cloud computing services and modern infrastructure platforms enable agility at scale. As businesses seek to deliver value faster to their customers, it’s no surprise that both public and private cloud adoption continue to grow at a healthy pace. In fact, according to the latest figures from IDC, worldwide server shipments increased 20.7% year-over-year to 2.7 million units in Q1 of 2018, and revenue rose 38.6%, the third consecutive quarter of double-digit growth!

Another exciting mega-trend is the emergence of containers as the best way to package and manage application components. Kubernetes has been widely accepted as the best way to deploy and operate containerized applications. And, one of the key value propositions of Kubernetes is that it can help normalize capabilities across cloud providers.

But with these advances also come new complexities. Containers address several DevOps challenges, but also introduce a new layer of abstraction that needs to be managed. Kubernetes addresses some of the operational challenges,  but not all. And, Kubernetes is a distributed application that itself needs to be managed.

In this article, we will discuss the best practices and guidelines to address the key operations challenges for the successful deployment and operations of Kubernetes clusters across different cloud providers. The perspective we will take is that of an IT operations team building an enterprise Kubernetes strategy for multiple internal teams.

1. Leverage Best-Of-Breed Infrastructure

All cloud providers offer storage and networking services, as do on-premises infrastructure vendors. A question that arises when considering multi-cloud strategies is whether to use each providers’ capabilities, or an abstraction layer. While both approaches can work, it’s always prudent to try and minimize abstraction layers and utilize the vendor-native approach. For example, rather than run an overlay network in AWS, it may be best to use the Kubernetes CNI (Container Network Interface) plugin from AWS that offers native networking capabilities to Kubernetes. This approach also enables  use of other services like security groups and IAM.

2. Manage Your Own (Upstream) Kubernetes Versions

Kubernetes is a fast-moving project, with new releases available every three months. A key decision to make is whether you want a vendor to test and curate Kubernetes releases for you or whether you want to allow your teams to directly use upstream releases.

As always, there are pros and cons to consider. Using a vendor-managed Kubernetes provides benefits of additional testing and validation. However, the Cloud Native Computing Foundation (CNCF) Kubernetes community itself has a mature development, test, and release process. The Kubernetes project is organized as a set of Special Interest Groups (SIGs), and the Release SIG is responsible for processes to ensure the quality and stability of each new release. The CNCF also provides a Kubernetes Software Conformance program for vendors to prove that their software is 100% compatible with the Kubernetes APIs.

Within an enterprise, it’s best to use stable releases for production. However, some teams may want clusters with pre-GA features. The best bet is to provide teams with the flexibility of choosing multiple validated upstream releases, or trying newer versions as needed at their own risk.

3. Standardize Cluster Deployments via Policies

There are several important decisions to make when installing a Kubernetes cluster. These include:

  1. Version: the version of Kubernetes components to use.

  2. Networking: the networking technology to use, configured via a CNI (Container Networking Interface) plugin.

  3. Storage: the storage technology to use, configured via a CSI (Container Storage Interface) plugin.

  4. Ingress: the Ingress Controller to use for load-balancer and reverse proxy of external requests to your application services.

  5. Monitoring: an add-on for monitoring Kubernetes components and workloads in the cluster.

  6. Logging: a centralized logging solution to collect, aggregate, and forward logs from Kubernetes components as well as application workloads in the cluster to a centralized logging system.

  7. Other Add-Ons: other services that need to run as part  of a cluster, like DNS and security components.

While it’s possible to go through these decisions for each cluster install, it is more efficient to capture the cluster installation as a template or policy, which can be easily reused. Some examples of this could be a Terraform script or a Nirmata Cluster Policy. Once the cluster installation is automated, it can also be invoked as part of higher-level workflows, like fulfilling self-service provisioning request from a service catalog.

4. Provide End-To-End Security

There are several items to consider for container and Kubernetes security, such as:

Image Scanning: container images need to be scanned for vulnerabilities before they are run. This step can be implemented as part of the Continuous Delivery pipeline before images are allowed into an enterprise’s private registry.

Image Provenance: while image scanning checks vulnerabilities, image provenance ensures that only “trusted”  images are allowed into a running cluster or environment.

Host & Cluster Scanning: in addition to securing images, cluster nodes also need to be scanned. Additionally, routinely running the Center for Internet Security (CIS) Benchmarks for Securing Kubernetes is a best practice.

Segmentation & Isolation: even when multi-tenancy may not be a hard requirement, it’s best to plan to share clusters across several heterogeneous workloads for increased efficiencies and greater cost savings. Kubernetes provides constructs for isolation (e.g. Namespaces and Network Policies) and for managing resource consumption (Resource Quotas).

Identity Management: in a typical enterprise deployment, user identity is provided by a central directory. Regardless of where clusters are deployed, user identity must be federated so that access can be easily controlled and applied in a consistent manner.

Access Controls: while Kubernetes does not have the concept of a user, it provides rich controls for specifying roles and permissions. Clusters can leverage default roles or use custom role definitions that specify sets of permissions. It’s important that all clusters within an enterprise have common definitions for these roles and a way to manage them across clusters.

While each of these security practices can be applied separately, it makes sense to view these holistically and plan for a security strategy that works across multiple cloud providers. This can be achieved using security solutions like AquaSec, Twistlock, and others in conjunction with platforms like Nirmata, OpenShift, etc.

5. Centralize Application Management

As with security, managing applications on Kubernetes clusters requires a centralized and consistent approach. While Kubernetes offers a comprehensive set of constructs that can be used to define and operate applications, it does not have a built-in concept of an application. This is actually a good thing, as it enables flexibility in supporting different application types and allows different ways of building more opinionated application platforms on Kubernetes.

However, there are several common attributes and features that any Kubernetes application management platform must provide. The top concerns for centralized application management for Kubernetes workloads are discussed below.

Application Modeling & Definition

Users need to define their application components and also compose applications from existing components. A core design philosophy in Kubernetes is its declarative nature, where users can define the desired state of the system. The Kubernetes workloads API offers several constructs to define the desired state of resources. For example, deployments can be used to model stateless workload components. These definitions are typically written as a set of YAML or JSON manifests. However, developers need to organize and manage these manifests,  typically in a Version Control System (VCS) like Git.

While developers will want to define and manage portions of the application manifests, other portions of these manifests specify operational policies and may be specific to runtime environments. These portions are best managed by operations teams. Hence, the right way to think of an application manifest is as a pipeline that is composed dynamically before deployments and updates.

A Kubernetes project that helps with some of these challenges is Helm, a package manager for Kubernetes. It makes it easy to group, version, deploy, and update applications as Helm Charts.
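
In practice (a minimal sketch using Helm 2-era syntax, matching the time of writing; chart and release names here are placeholders):

# Bundle the chart into a versioned archive:
helm package ./my-app

# Deploy it to the cluster as a named release:
helm install ./my-app --name my-release

# Roll out a new chart or application version later:
helm upgrade my-release ./my-app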

Kubernetes application platforms must provide easy ways to model, organize, and construct application manifests and Helm Charts, with proper separation of concerns between development and operational resources. The platform must also provide validation of the definitions to catch common errors as early as possible, along with easy ways to reuse application definitions.

Environments — Application Runtime Management

Once applications are modeled and validated,  they need to be deployed to clusters. However, the end goal is to reuse clusters across different workloads for greater efficiencies and increased cost savings. Hence, it’s best to decouple application runtime environments from clusters and to apply common policies and controls to these environments.

Kubernetes allows creating virtual clusters using Namespaces and Network Policies. Kubernetes application platforms should make it easy to leverage these constructs and create environments with logical segmentation, isolation, and resource controls.
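
A minimal sketch of such an environment (names and limits are illustrative): a dedicated Namespace with a ResourceQuota attached to cap what workloads in that environment can consume.

apiVersion: v1
kind: Namespace
metadata:
  name: team-a-staging
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-staging-quota
  namespace: team-a-staging
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"

A NetworkPolicy in the same Namespace can then restrict traffic to complete the isolation.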

Change Management

In many cases, runtime environments will live long lives and changes will need to be applied to these environments in a controlled manner. The changes may originate from a build system or from an upstream environment in the delivery pipeline.

Kubernetes application platforms need to offer integrations into CI/CD tools and monitor external repositories for changes. Once changes are detected, they should be validated and then handled based on each environment’s change management policies. Users should be able to review and accept changes, or fully automate the update process.

Application Monitoring

Applications may be running in several environments and in different clusters. With regards to monitoring,  it’s important to have the means to separate the signal from the noise and focus on application instances. Hence, metrics, states, and events need to be correlated with application and runtime constructs. Kubernetes application platforms must offer integrated monitoring with automated granular tagging so that it’s easy for users to drill-down and focus on application instances in any environment.

Application Logging

Similar to monitoring, logging data needs to be correlated with application definitions and runtime information and should be accessible for any application component. Kubernetes application platforms must be able to stream and aggregate logs from different running components. If a centralized logging system is used, it’s important to apply the necessary tags to be able to separate logs from different applications and environments and also manage access across teams and users.

Alerting & Notifications

To manage service levels, it’s essential to be able to define custom alerts based on any metric, state change, or condition. Once again, proper correlation is required to separate alerts that require immediate action from the rest. For example, if the same application deployment is running in several environments like dev-test, staging, and production, it is important to be able to define alerting rules that only trigger for production workloads. Kubernetes application platforms must be able to provide the ability to define and manage granular alerting rules that are environment and application-aware.

Remote Access

Cloud environments tend to be dynamic, and containers elevate the dynamic nature to a new level. Once problems are detected and reported, it’s essential to have a quick way to access the impacted components in the system. Kubernetes application platforms must provide a way to launch a shell into running containers, and to access container runtime details, without having to access cloud instances via VPN and SSH.

Incident Management

In a Kubernetes application, it’s possible that a container exits and is quickly restarted. The exit may be part of a normal workflow, like an upgrade, or may be due to an error like an out-of-memory condition. Kubernetes application platforms must be able to recognize failures and capture all details of the failure for offline troubleshooting and analysis.

Summary

Containers and Kubernetes allow enterprises to leverage a common set of industry best practices for application operations and management across cloud providers. All major cloud providers, and all major application platforms, have committed support of Kubernetes. This includes Platform-as-a-Service (PaaS) solutions where developers provide code artifacts and the platforms do the rest, Container-as-a-Service (CaaS) solutions where developers provide container images and the platform does the rest, and Functions-as-a-Service (FaaS) solutions where developers simply provide functions and the platform does the rest. Kubernetes has become the new cloud-native operating system.

When developing a multi-cloud Kubernetes strategy, enterprises must consider how they wish to consume infrastructure services, manage Kubernetes component versions, design and manage Kubernetes clusters, define common layers of security, and handle application management.

This article is featured in the new DZone Guide to Containers: Development and Management. Get your free copy for more insightful articles, industry statistics, and more! 

Original Link

Up and Running with Alibaba Cloud Container Registry

Let’s say you are a container microservices developer. You have a lot of container images, each with multiple versions, and all you are looking for is a fast, reliable, secure, and private container registry. You also want to instantly upload and retrieve images, and deploy them as part of your continuous integration and continuous delivery of services. Well, look no more! This article is for you.

This article introduces you to the Alibaba Cloud Container Registry service and its abundance of features. You can use it to build images in the cloud and deploy them in your Alibaba Cloud Docker cluster or premises. After reading this article, you should be able to deploy your own Alibaba Cloud Container Registry.

What is Alibaba Cloud Container Registry?

Alibaba Cloud Container Registry (ACR) is a scalable server application that builds and stores container images, and enables you to distribute Docker images. With ACR, you have full control over your stored images. ACR has a number of features, including integration with GitHub, Bitbucket, and self-built GitLab. It can also automatically compile and test source code and build new images from it.

In this tutorial, we will build and deploy containerized images using Alibaba Cloud Container Registry.

Step 1: Activating Alibaba Cloud Container Registry

You should have an Alibaba Cloud account set up. If you don’t have one, you can sign up for an account and try over 40 products for free. Read this tutorial to learn more.

The first thing you need to do is to activate the Alibaba Cloud Container Registry. Go to the product page and click on Get it Free.

It will take you to the Container Registry Console where you can configure and deploy the service.

Step 2: Configuring Alibaba Cloud Container Registry

Create a Namespace

A namespace is a collection of repositories, and a repository is a collection of images. I recommend creating one namespace for each application and one repository for each service image.

After creating a namespace, you can set it up as public read or private in the settings.

Create and Upload a Local Repository

A repository (repo) is a collection of images. I suggest you collect all versions of the image of one service in one repository. Click Create Repo and fill out the information on the page. Select Local Repository. After a short while, a new repository will be created, which has its own repository URL. You can see it on the image list page.

You can now upload your locally built image to this repository.

Step 3: Connecting to Container Registry with Docker Client

In order to connect to any container registry from the Docker client, you first need to set a Docker login password in the ACR console. You will use this password on your Docker client to log in to the registry.

Next, on the Image List page, click on Admin in front of the repository you want to connect to. Here you can find all the necessary information and commands to allow the Docker client to access the repository. You can see the image name, image type, and the internet and intranet addresses of the repository. You can use the internet address to access the repository from anywhere in the world. If you want to use the repository with your Alibaba Cloud container cluster, you should use the intranet address because it will be much faster.

Copy the login, push, and pull commands. You will need them later.

Start up the Docker client on your local machine. You can refer to docker.io to install a Docker client on your computer. On macOS, run the Docker.app application to start the Docker client.

Log in as a user on the Docker client:

docker login --username=random_name@163.com registry-intl.ap-southeast-1.aliyuncs.com

Note: Replace random_name with the actual username.

You will see a login successful message after you enter the password and hit enter. At this point, you are authenticated and connected to the Alibaba Cloud Container Registry.

Step 4: Building an Image Locally and Pushing to ACR

Let’s write a Dockerfile to build an image. The following is a sample Dockerfile; you can choose to write your own Dockerfile:

######################
# This is the first image for the static site.
######################
FROM nginx
# A name can be given to a new build stage by adding AS name to the FROM instruction.
# ARG VERSION=0.0.0
LABEL NAME="static-nginx-image" START_TIME="2018.03.10" FOR="Alibaba Community" AUTHOR="Fouad"
LABEL DESCRIPTION="This image is built for a static site on Docker"
LABEL VERSION="0.0.0"
# RUN mkdir -p /var/www/
ADD /public /usr/share/nginx/html/
EXPOSE 80
RUN service nginx restart

Run the docker build command to build the image. In order to later push the image to the repository, you need to tag the new image with the registry address:

docker build -t registry-intl-internal.ap-southeast-1.aliyuncs.com/fouad-space/ati-image .

Once the build is complete, the image will already be tagged with the repository name. You can see the new image by using the command:

docker image ls

Push the image to ACR repository with the command:

docker push registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image:latest

To verify that the image is pushed successfully, see it in the Container Registry console. Click on Admin in front of the repository name and then click Image version.

Pull the image and create a container. Run the docker pull command:

docker pull registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image:latest

Since I have already pulled the image to my local computer, the message says the image is up to date.

Create a new container using this image:

docker run -ti -p 80:80 registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image bash

Step 5: Building an Image Repo with GitHub

With Alibaba Cloud Container Registry, you can build images in the cloud as well as push them directly to the registry. Besides this, Container Repository supports automatically triggering the build when the code changes.

If Automatically create an image when the code changes is selected in Build Settings, the image can be automatically built after you submit the code, without requiring you to manually trigger the build. This saves manual work and keeps the images up-to-date.

Create a GitHub repo and upload your Dockerfile to the repo.

Then, return to the Container Registry console to create a repo. Select GitHub repo path and complete the repository creation steps.

Once the repository is created, go to Image List, click on Admin next to the repo name, click Build, and finally click Build Now.

You can see the build progress in the menu and the complete logs of the build process.

You can also see all the build logs. Isn’t it neat?

Once the build is complete, your image is ready to be deployed. You can pull it to the local Docker engine or deploy this image on Alibaba Cloud Container Service.

Step 6: Creating a Webhook Trigger

A webhook is a type of trigger. If you configure one, it will push a notification when an image is built, which lets you set up a continuous integration pipeline.

How does it work? Well, suppose you have set a Container Service trigger for the webhook. When an image is built or rebuilt, the applications in Container Service are automatically triggered to pull the latest image and redeploy.

To create a webhook, you first need to go to container service and get the application web URL.

Now use this URL to configure a hook. Every time the image in the container registry is updated, this application will be redeployed with the new image. Be very careful, though: an incorrect setup can bring down the whole application. Rollback is possible in Container Service, however, so there are no big worries.

Summary

In this article, you learned the following:

  • What the Alibaba Cloud Container Registry service is and how you can implement it.
  • How to create a namespace and repository to host Docker images.
  • How to build a Docker image locally and push it to ACR.
  • How to pull a Docker image from ACR and instantiate a new container with it.
  • How to build an image in Container Registry from GitHub source code.
  • How to automatically trigger a pull of the latest image and redeploy the service.

Original Link

What Is Kubernetes? Container Orchestration Tool

We all know how important containers have become in today’s fast-moving IT world. Pretty much every big organization has moved away from its traditional approach of using virtual machines and started using containers for deployment. So, it’s high time you understood what Kubernetes is.

If you want to read more about the advantages of containers and how companies are reshaping their deployment architecture with Docker, then click here.

Kubernetes is an open-source container management (orchestration) tool. Its container management responsibilities include container deployment, scaling and descaling of containers, and container load balancing.

Note: Kubernetes is not a containerization platform. It is a multi-container management solution.

Going by the definition, you might feel Kubernetes is very ordinary and unimportant. But trust me, this world needs Kubernetes for managing containers as much as it needs Docker for creating them. Let me tell you why! If you would prefer a video explanation, you can go through the video below.

What Is Kubernetes | Kubernetes Introduction | Kubernetes Tutorial For Beginners

Why Use Kubernetes?

Companies out there may be using Docker, Rocket, or simply Linux containers for containerizing their applications. But, whatever they use, they use it on a massive scale. They don’t stop at one or two containers in production; rather, they run tens or hundreds of containers for load balancing the traffic and ensuring high availability.

Keep in mind that, as the traffic increases, they have to scale up the number of containers to service the number of requests that come in every second. And, they have to also scale down the containers when the demand is less. Can all this be done natively?

Well, to be honest, I’m not sure it can be done. Even if it can be done, it is only after loads of manual effort for managing those containers. So, the real question is, is it really worth it? Won’t automated intervention make life easier? Absolutely it will!

That is why the need for container management tools is so critical. Both Docker Swarm and Kubernetes are popular tools for container management and orchestration, but Kubernetes is the undisputed market leader. Partly because it is Google’s brainchild and partly because of its better functionality.

Logically speaking, Docker Swarm is a better option because it runs right on top of Docker, right? If I were you, I would have had the same doubt and it would have been my first question. So, if you’re thinking the same, read this blog on the comparison between Kubernetes and Docker Swarm here.

If I had to pick between the two, it would have to be Kubernetes. The reason is simple: auto-scaling of containers based on traffic needs. Docker Swarm is not intelligent enough to do auto-scaling. Be that as it may, let’s move on to the next topic of this What is Kubernetes blog.

This is the right time to talk about Kubernetes’ features: 

1. Automatic Binpacking

2. Service Discovery & Load Balancing

3. Self-Healing

4. Secret & Configuration Management

5. Horizontal Scaling

These were some of the notable features of Kubernetes. Let me delve into the attractive aspects of Kubernetes with a real-life implementation of it and how it solved a major industry worry.

Case Study: Kubernetes at the Center of Pokemon Go’s Evolution

I’m pretty sure everyone reading this blog would have played this famous smartphone game, or at least you would have heard of this game. I’m so sure because this game smashed every record set by gaming applications in both the Android and iOS markets.

Pokemon Go was developed by Niantic Labs and was initially launched only in North America, Australia & New Zealand. In just a few weeks upon its worldwide release, the game reached 500+ million downloads with an average of 20+ million daily active users. These stats bettered those set by games like Candy Crush and Clash of Clans.

Pokemon Go: Game Backend with Kubernetes

The app backend was written in Java combined with libGDX. The program was hosted on a Java cloud with Google Cloud Bigtable NoSQL database. And this architecture was built on top of Kubernetes, making it their scaling strategy.

Rapid iteration of pushing updates worldwide was done thanks to MapReduce and in particular Cloud Dataflow for combining data, doing efficient MapReduce shuffles, and for scaling their infrastructure.

The actual challenge

For most big applications like this, the challenge is horizontal scaling. Horizontal scaling means scaling up your servers to service the increasing number of requests from multiple players and playing environments. But for this game in particular, vertical scaling was also a major challenge because of the changing environment of players in real time. This change also has to be reflected to all the other players nearby, because reflecting the same gaming world to everyone is how the game works. Each individual server’s performance and specs also had to be scaled simultaneously, and this was the ultimate challenge that needed to be taken care of by Kubernetes.

Conclusion

Not only did Kubernetes help in the horizontal and vertical scaling of containers, but it also exceeded engineering expectations. They planned their deployment for a basic estimate, and the servers were ready for a maximum of 5x traffic. However, the game’s popularity rose so much that they had to scale up to 50x. Ask engineers from other companies, and 95% of them will respond with their server meltdown stories and how their business came crashing down. But not at Niantic Labs.

Edward Wu, Director of Software Engineering at Niantic, said:

“We knew we had something special on hand when these were exceeded in hours.”

“We believe that people are healthier when they go outside and have a reason to be connected to others.”

Pokemon Go surpassed all engineering expectations by 50x and has managed to keep running despite its early launch problems. It became an inspiration and a benchmark for modern-day augmented reality games, as it inspired users to walk over 5.4 billion miles in a year. The implementation at Niantic Labs thus made this the largest Kubernetes deployment ever.

Kubernetes Architecture

So, now let me explain the working architecture of Kubernetes.

Since Kubernetes implements a cluster computing model, everything works from inside a cluster. The cluster is hosted by one node acting as the “master” of the cluster, while the other nodes are worker nodes, which do the actual work of running the containers. Below is a diagram showing the same.

The master controls the cluster and the nodes in it. It ensures that execution happens only on the nodes and coordinates the work. Nodes host the containers; in fact, these containers are grouped logically to form pods. Each node can run multiple such pods, each of which is a group of containers that interact with each other for a deployment.

The Replication Controller is the master’s resource for ensuring that the requested number of pods is always running on the nodes. A Service is an object on the master that provides load balancing across a replicated group of pods.
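
As a minimal sketch of these two objects (the names, labels, image, and replica count below are illustrative, not from the original article), a ReplicationController and a Service could be declared like this:

apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 3                  # the master keeps three copies of the pod running
  selector:
    app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.15    # illustrative container image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello                 # load balances across all pods with this label
  ports:
    - port: 80
      targetPort: 80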

So, that’s the Kubernetes architecture in simple terms. You can expect more details on the architecture in my next blog. Even better, the next blog will also include a hands-on demonstration of installing a Kubernetes cluster and deploying an application.

On that note, let me conclude this What is Kubernetes blog. To learn more about Kubernetes, you can check out Edureka’s Kubernetes Training Certification here. You can also reach our DevOps Training Certification here.

Original Link

Using Amazon EFS for Container Workloads

When using containers for different application workloads, it is a common use case to need to store data persistently. Although it looks simple from the outside, where we could save the data directly on the underlying container host, this is not practical when using container orchestration tools like Kubernetes, Amazon ECS, or Docker Swarm, where the containers could be placed on different nodes. The containers therefore need to run without knowing the underlying host machine.

The built-in solution that comes with containers is to use a mechanism called volumes that provides the following advantages: 

  • Volumes allow sharing things across containers.

  • Volume drivers make it possible to store data not only within the container cluster but also on remote hosts or with cloud providers.

  • It is also possible to encrypt these volumes.

  • Containers can pre-populate the data in new volumes.

  • Since a volume is external, its contents exist outside of the lifecycle of the containers, which is most important when making modifications to containers.

Container Volume Use Cases

Let’s look at a few common use cases where container volumes can simplify the architecture.

Database Containers

If you are building an application with containers and require a database container, volumes are useful to mount as the storage path for database files. This creates the possibility of upgrading the database container for different versions without impacting the underlying storage of the data. It also allows mounting a different container to the same volume which could take care of backing up the filesystem at the block level.
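
A rough sketch of this idea with plain Docker (the volume and container names are made up for the example):

# Create a named volume and use it as MongoDB's data directory
docker volume create mongo-data
docker run -d --name catalog-db -v mongo-data:/data/db mongo:3.6

# Upgrading the database container later reuses the same data
docker stop catalog-db && docker rm catalog-db
docker run -d --name catalog-db -v mongo-data:/data/db mongo:4.0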

Shared File Storage

This is one of the direct use cases for volumes, where it’s possible to directly upload and save files from individual containers. When building scalable systems, the uploading could be handled by a fleet of containers instead of one, which still requires you to upload the files to a central place. In these situations, volumes become quite useful in implementing the shared storage.

Application Deployment

Although this is more of an advanced use case, it is possible to keep application deployment in a volume or keep common artifacts such as binaries in a volume where it could be mounted for different containers for faster initialization and recovery.

Container Workflows

When file-related operations are handled by multiple containers for scalability, it is possible to use a volume to modify files and make them available to other containers (in the same place, or moved to another directory) to coordinate content across containers. This keeps things simple (and improves performance when dealing with large files), since each container in the workflow does a well-defined job on each file and the files don’t need to move between containers.

Volumes and Amazon EFS

We’ve been talking about volumes so far, so let’s look at what Amazon EFS has to do with container volumes.

When we are looking at volumes, having a persistent, scalable, and shareable underlying storage infrastructure is important. Amazon EFS is just the right technology to use as that underlying storage infrastructure. Some of the useful properties of Amazon EFS are listed below.

  • Regional availability and durability.

  • Autoscaling storage.

  • By nature a shared file system.

  • Inter-operable and easy to mount network file system (NFS).

So how complex is it to use Amazon EFS for container volumes?

This is pretty straightforward and AWS comes with developer guidelines on using Amazon EFS for volumes which makes things much simpler.

This involves the following steps as described in the tutorial: Using Amazon EFS File Systems with Amazon ECS.

  • Step 1: Gather Cluster Information.

  • Step 2: Create a Security Group for an Amazon EFS File System.

  • Step 3: Create an Amazon EFS File System.

  • Step 4: Configure Container Instances.

  • Step 5: Create a Task Definition to Use the Amazon EFS File System.

  • Step 6: Add Content to the Amazon EFS File System.

  • Step 7: Run a Task and View the Results.

Take note of Step 5, which uses a built-in feature of the Amazon ECS managed container service to directly connect Amazon EFS as a file system. If you use a different container orchestrator, like Swarm, you will need to mount the Amazon EFS file system using the command line during the container host boot process.
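
As a hedged sketch of what that boot-time mount could look like (the file system ID, region, and paths below are placeholders), EFS is mounted over NFSv4 and then exposed to containers as a bind mount:

# Mount the EFS file system on the container host
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs

# Containers can then use the mounted path as a shared volume
docker run -d -v /mnt/efs/uploads:/var/app/uploads my-upload-service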

Original Link

Auditing Container Activity — A Real Example with wget and curl Using Sysdig Secure

One of the first questions Sysdig Secure and Sysdig Falco users have is how to audit container activity or detect when X happens inside a container, on a host, or anywhere in their environments. In today’s example, we’ll cover detecting curl and the use of other programs that fetch contents from the web. There are illegitimate reasons to execute curl, such as downloading a reverse shell or a rootkit, but there are many legitimate reasons as well. In either case, detecting the usage of web fetch programs, in general, is something many organizations need to do for compliance reasons.

If you’re new to the Falco rules syntax, I recommend you check out some of the great resources available to get up and running.

Name the Web Fetch Programs

First, create a list + macro that names the programs that count as “web fetch programs.” Here’s one possibility. By convention, “binaries” are used to name programs, and “programs” refer to a condition that compares proc.name to a list of binaries. Also, note the good practice of parentheses surrounding the condition field of each macro to ensure it is always treated as a single unit.

- list: web_fetch_binaries
  items: [curl, wget]

- macro: web_fetch_programs
  condition: (proc.name in (web_fetch_binaries))

Capture Spawning a Web Fetch Program

Next, write a macro that captures any exec of a web fetch program. Here’s one way, using the existing spawned_process macro:

- macro: spawned_process
  condition: evt.type = execve and evt.dir=<

If you have any questions about the filtering syntax on the condition check out some of the available filters documented here.

Next, combine the web_fetch_programs and spawned_process macros to have a single macro that defines “curl or wget was executed on my system”:

- macro: spawn_web_fetcher
  condition: (spawned_process and web_fetch_programs)

Add Exclusions, Define the Rule

When creating this rule we’ll want to add a macro that adds the ability to name a set of exceptions and combine that with the macros we created earlier.

Note the use of the container macro to restrict the rule to containers. To monitor for this behavior on all hosts as well as inside containers, just remove the container macro from the condition below.

- macro: allowed_web_fetch_containers
  condition: (container.image startswith quay.io/my-company/services)

- rule: Run Web Fetch Program in Container
  desc: Detect any attempt to spawn a web fetch program in a container
  condition: spawn_web_fetcher and container and not allowed_web_fetch_containers
  output: Web Fetch Program run in container (user=%user.name command=%proc.cmdline %container.info image=%container.image)
  priority: INFO
  tags: [container]

Now you’re all set to use this rule in your environment with Falco. Next, we’ll cover adding this rule to Sysdig Secure so you can have more controls over the scope of where a policy applies, take actions like killing or pausing a container, or record all the system activity before and after this web fetch event for forensic analysis.

To add this rule to Sysdig Secure, copy the rule and exclusion into the custom rules section, and then save it to be able to add the rule Run Web Fetch Program in Container to a policy.

Create Policy Associated With Rule

Now, create a Sysdig Secure policy associated with the rule.

This policy can be scoped by container and orchestrator labels so you can apply it to different areas of your infrastructure depending on their security and compliance needs. Actions like killing a container can also be taken.

By switching to the Falco tab within the policy you can select the new rule Run Web Fetch Program in Container to apply it to the policy you just created.

Verify Policy Is Working

Finally, a command like  docker run --rm byrnedo/alpine-curl https://www.google.com  should result in a policy event:

Within this policy event, you’ll get the full context of what triggered the event and where it occurred in your physical and logical infrastructure.

Hopefully, this example provides an easy way to get started writing your first Falco rules to audit the activity occurring inside your containers. If you need help contact us on Slack and share any rules you create with the community by submitting a PR to the Sysdig Falco project.

Original Link

Build and Deploy a Spring Boot App on Minikube (Part 1)

In this post, we will take a look at how we can build a Spring Boot application, create a Docker image, push it to a Docker registry, and deploy it to a Kubernetes cluster. This will give us the opportunity to get acquainted with the basics, from building an application up to deploying it to Kubernetes. Sources can be found at GitHub.

Why Are We Doing This?

So, why should we take a look at Kubernetes? The answer is quite simple: Docker containers are used more and more, and Kubernetes offers us a way to manage them. Besides that, Kubernetes is often used in CI/CD pipelines in order to achieve continuous deployment. Therefore, it is good to have at least a basic understanding of how Kubernetes works and to get familiar with the terminology.

Basically, Kubernetes manages a cluster of machines which act as a single unit. Within a cluster, you will have a master, which coordinates the cluster, and nodes, which run the applications. We will explain the terminology later on in part 2. Kubernetes is also often abbreviated as K8s (8 because 8 letters are left out).

A good place to start learning is the set of tutorials on the Kubernetes website itself.

In order to start working with Kubernetes, you can make use of local or hosted solutions; the list of options can be viewed here. In this tutorial, we will make use of Minikube, which is a local solution. This way we don’t need an account for a hosted solution, and it works perfectly for trying out some things.

What Are We Going to Do?

Here are the steps we will follow in order to reach our goal:

  1. Create a Hello World Spring Boot application
  2. Create a Docker image of the application
  3. Push the Docker image to a Docker registry
  4. Install Minikube on Windows
  5. Deploy the application to Minikube
  6. Update the application

Steps 1 up to 4 will be covered in this post; steps 5 and 6 will be covered in part 2.

I will be working from a Windows 10 machine and will try to run Minikube on Windows (as we will see later in this post, the plans will change a little).

Step 1: Create a Hello World Spring Boot Application

As always, our starting point is https://start.spring.io/. We select Java 10, Spring Boot 2.0.1, select Web MVC and Spring Actuator. Generate the project and open it with your IDE. This will get us started.

We will create a simple Spring Boot application with one endpoint returning a “Hello Kubernetes!” message. So, we add the following HelloController:

@RestController
public class HelloController {

    @RequestMapping("/hello")
    public String hello() {
        return "Hello Kubernetes!";
    }
}

Running the application gives us the following error:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.7.0:compile (default-compile) on project mykubernetesplanet: Execution default-compile of goal org.apache.maven.plugins:maven-compiler-plugin:3.7.0:compile failed. IllegalArgumentException

It seems that there is an issue with a dependency of the Maven Compiler Plugin with Java 10. A temporary fix is to set a newer version of the asm dependency in your pom:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.7.0</version>
    <configuration>
        <source>${java.version}</source>
        <target>${java.version}</target>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.ow2.asm</groupId>
            <artifactId>asm</artifactId>
            <version>6.1</version> <!-- Use newer version of ASM -->
        </dependency>
    </dependencies>
</plugin>

Run the application again and verify whether http://localhost:8080/hello returns the following output:

Hello Kubernetes!

Step 2: Create a Docker Image of the Application

Now that we have our application running, it is time to create a Dockerfile which takes the Java 10 image and places our jar file into it. The Dockerfile is:

FROM openjdk:10-jdk
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

In order to be able to build our image and push it to a Docker registry, we need a Maven plugin. We will make use of the dockerfile-maven-plugin provided by Spotify. Add the following to your pom file:

<properties>
    <docker.image.prefix>mydeveloperplanet</docker.image.prefix>
    <dockerfile-maven-version>1.3.6</dockerfile-maven-version>
</properties>

<build>
    <plugins>
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>${dockerfile-maven-version}</version>
            <executions>
                <execution>
                    <id>default</id>
                    <goals>
                        <goal>build</goal>
                        <goal>push</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <repository>${docker.image.prefix}/${project.artifactId}</repository>
                <tag>${project.version}</tag>
                <buildArgs>
                    <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
                </buildArgs>
            </configuration>
        </plugin>
    </plugins>
</build>

Run  Maven:install in order to check whether we can create the Docker image. Unfortunately, this returns the following error:

Caused by: java.lang.ArrayIndexOutOfBoundsException: 1 at org.codehaus.plexus.archiver.zip.AbstractZipArchiver.(AbstractZipArchiver.java:116)

And the build fails with the generic message:

Failed to execute goal com.spotify:dockerfile-maven-plugin:1.3.6:build (default) on project mykubernetesplanet: Execution default of goal com.spotify:dockerfile-maven-plugin:1.3.6:build failed: An API incompatibility was encountered while executing com.spotify:dockerfile-maven-plugin:1.3.6:build: java.lang.ExceptionInInitializerError: null

It seems that this is caused by the Maven Archiver plugin, which is not yet fully compatible with Java 9/10, see https://github.com/spotify/dockerfile-maven/issues/163

Adding the following dependencies to the dockerfile-maven-plugin section solves the problem for the time being:

<dependency>
    <groupId>org.codehaus.plexus</groupId>
    <artifactId>plexus-archiver</artifactId>
    <version>3.4</version>
</dependency>
<dependency>
    <groupId>javax.activation</groupId>
    <artifactId>javax.activation-api</artifactId>
    <version>1.2.0</version>
</dependency>

Make sure you have Docker running locally and that the setting ‘Expose daemon on tcp://localhost:2375 without TLS’ is enabled in the Docker settings; otherwise you will run into the following error: https://github.com/spotify/docker-maven-plugin/issues/351

Run  Maven:install  again. Now the build is successful and we have created our Docker image.

Step 3: Push the Docker Image to a Docker Registry

A prerequisite is an account with a Docker registry, e.g. Docker Hub (my account can be found here). Besides this, you will need to change the following property in your pom to your own Docker ID:

<docker.image.prefix>mydeveloperplanet</docker.image.prefix>

Next, create the repository mykubernetesplanet into your Docker Hub registry.

In order to be able to push anything to your Docker Hub registry, you need to add the account settings to your Maven settings.xml:

<servers>
    <server>
        <id>docker.io</id>
        <username>docker_username</username>
        <password>docker_password</password>
    </server>
</servers>

And at last, add the useMavenSettingsForAuth to the dockerfile-maven configuration tag in your pom:

<configuration>
    <repository>${docker.image.prefix}/${project.artifactId}</repository>
    <tag>${project.version}</tag>
    <buildArgs>
        <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
    </buildArgs>
    <useMavenSettingsForAuth>true</useMavenSettingsForAuth>
</configuration>

Then run maven docker:push. The image is being pushed to our repository and within the tags section, we find a 0.0.1-SNAPSHOT tag.

Install Minikube

Windows

We will install Minikube on a Windows system. Note that this is still experimental, but we will only use basic functionality, so I assume that this will work just fine.

Installation instructions can be found at GitHub.

Also, you need to download and install kubectl, the command line tool in order to interact with your Kubernetes cluster.

First, we will start our Minikube cluster. By default, Minikube will use VirtualBox, but I already have Hyper-V installed and therefore I need to specify the VM driver I want to use.

sudo minikube start --vm-driver hyperv

An ISO is being downloaded and then the following error is shown:

Hyper-V PowerShell Module is not available.

It seems that this problem often occurs when using HyperV, so I installed VirtualBox and ran the following command:

sudo minikube start

The following error is shown:

This computer is running Hyper-V. VirtualBox won't boot a 64bits VM when Hyper-V is activated. Either use Hyper-V as a driver, or disable the Hyper-V hypervisor. (To skip this check, use --virtualbox-no-vtx-check).

Go to ‘Windows Features’ and turn ‘Hyper-V’ off. This requires a restart of your computer. After that, run the command again.

Again this gave an error, I had to enter the command from my home directory:

C:\Users\<username>\

Unfortunately, this again gave some errors, and therefore I decided to give up on installing Minikube on Windows.

Ubuntu

We will now try to install Minikube on an Ubuntu VM. The first step is to download Ubuntu Desktop 16.04.4 and create a Hyper-V VM.

In the Ubuntu VM, I first needed to install Virtualbox. After this, running the Minikube start command returns the following error:

This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory.

This is caused by the fact that virtualization inside virtualization (nested virtualization) gives some problems.

Now it is time for plan number 3 (and also the final plan): run Minikube without a VM driver. We will of course still use our Ubuntu VM, but Minikube normally runs in combination with its own VM, and this can be skipped.

We will install Docker inside our Ubuntu VM and then run the following command which will start the Minikube cluster:

sudo minikube start --vm-driver=none

Now install kubectl, which will allow us to interact with our Kubernetes cluster:

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

We check the installation with the following command, which will show us the client and server version information:

sudo kubectl version

And finally, with the following command we can start the Minikube dashboard:

sudo minikube dashboard

Summary

In this part 1, we made all preparations in order to get started with Minikube in part 2. We created a basic Spring Boot application, created a Docker image for it and pushed it to our Docker registry. During installation of Minikube on Windows, we encountered several problems. In the end, we got it working with an Ubuntu VM and we started Minikube by means of Docker instead of its own VM driver. In part 2 we will deploy our application to the Minikube cluster and explore some of the concepts and terminology of Kubernetes.

Original Link

A Developer’s Guide To Docker — Docker Compose

Good developers care as much about efficiency as they do about writing clean code. Containerization can add efficiency to both your workflow and your application, and has thus become all the rage among modern development. And, as a good developer, you know that manually creating containers from images using docker run … or even using the Dockerfile to create containers is less than ideal. How would you like to have one command that tells Docker to build the containers for the UI, the API, the database, and the cache server? Let me show you how that works with Docker Compose!

In this tutorial, you’ll take the base application from Github and complete the docker-compose.yml file in it. This application uses Node, NPM and MongoDB. Don’t worry about installing all those things; you only need Docker installed!

Just like the Dockerfile, the docker-compose.yml file tells Docker how to build what you need for your containers. Unlike the Dockerfile, it is written using the YAML file spec, and it does a lot more than just building one image.

Choose Your Docker Compose Version

The first line of any docker-compose.yml file is the version setting. In this case, you’ll use version 3.3, so just tell Docker Compose that.

version: '3.3'

You can see the documentation for docker-compose version 3 at https://docs.docker.com/compose/compose-file/ and you can see what the differences are between versions.

Services are how Docker refers to each container you want to build in the docker-compose file. In this case, you’ll create two services: one for the NodeJS application, and one for the MongoDB database.

services:
  app:
  db:

Remember that indentation is how YAML groups information, so indentation is important. Here, you’ve indented the app and db services under the services tag. These can be named whatever you want; in this case, app and db are just the easiest to refer to. Now, you’ll put some meat on these two services.

First, tell Docker what image you want to build the app service from by specifying that you’ll be building from the sample:1.0 image. So you’ll specify that indented under the app tag.

app:
  image: sample:1.0

Of course that image doesn’t exist, so you’ll need to let Docker know where to find the Dockerfile to build it by setting a build context. If you don’t, Docker will try to pull the image from Docker Hub and when that fails, it will fail the docker-compose command altogether.

app:
  image: sample:1.0
  build: .

Here, you’ve specified that the build context is the current directory, so when Docker can’t find the sample:1.0 image locally, it will build it using the Dockerfile in the current directory.

Next, you’ll tell Docker what the container name should be once it’s built the image to create the container from.

app:
  image: sample:1.0
  container_name: sample_app
  build: .

Now, when Docker builds the image, it will immediately create a container named sample_app from that image.

By default, NodeJS apps run on port 3000, so you’ll need to map that port to 80, since this is the “production” docker-compose file. You do that using the ports tag in YAML.

app:
  image: sample:1.0
  container_name: sample_app
  build: .
  ports:
    - 80:3000

Here, you’ve mapped port 80 on the host operating system, to port 3000 from the container. That way, when you’ve moved this container to a production host, users of the application can go to the host machine’s port 80 and have those requests answered from the container on port 3000.

Your application will be getting data from a MongoDB database and to do that the application will need a connection string that it will get from an environment variable called “MONGO_URI”. To set environment variables in the container once it’s built, use the environment tag in the YAML file.

app:
  image: sample:1.0
  container_name: sample_app
  build: .
  ports:
    - 80:3000
  environment:
    - MONGO_URI=mongodb://sampledb/sample

For the application service to actually be able to reach the sample database, it will need to be on the same network. To put both of these services on the same network, create one in the docker-compose file by using the networks tag at the top level (the same level of indentation as the services tag).

version: '3.3'
services:
  app:
    ...
  db:
    ...
networks:
  samplenet:
    driver: bridge

This creates a network called “samplenet” using a bridge type network. This will allow the two containers to communicate over a virtual network between them.

Back in the app section of the file, join the app service to the “samplenet” network:

app:
  image: sample:1.0
  container_name: sample_app
  build: .
  ports:
    - 80:3000
  environment:
    - MONGO_URI=mongodb://sampledb/sample
  networks:
    - samplenet

Now the app service is ready, but it won’t be much good without the db service. So add the same kinds of things in the next section for the db service.

db:
  image: mongo:3.0.15
  container_name: sample_db
  networks:
    samplenet:
      aliases:
        - "sampledb"

This service builds from the official MongoDB 3.0.15 image and creates a container named “sample_db”. It also joins the “samplenet” network with an alias of “sampledb”. This is like a DNS name on a physical network, it allows other services on the “samplenet” network to refer to it by its alias. This is important because, without it, the app service would have a much harder time talking to it. (I don’t know that it couldn’t, it would just probably have to use the container’s full hash!)

You’ll also want to create a volume mount in the database service. Volumes allow you to mount folders on the host machine to folders in the container. Meaning, when something inside the container refers to a folder, it will actually be accessing a folder on the host machine. This is especially helpful for database containers because containers are meant to be disposable. With a mount to the physical folder on the host machine, you’ll be able to destroy a container and rebuild it and the data files for the container will still be there on the host machine. So add a volume tag in the db section mounting the /data/db folder in the container (where Mongo stores its data) to the db folder in your application’s root folder so that the final db section looks like the following.

db:
  image: mongo:3.0.15
  container_name: sample_db
  volumes:
    - ./db:/data/db
  networks:
    samplenet:
      aliases:
        - "sampledb"

With all that done, your final docker-compose.yml file should look like:

version: '3.3'
services:
  app:
    image: sample:1.0
    container_name: sample_app
    build: .
    ports:
      - 80:3000
    environment:
      - MONGO_URI=mongodb://sampledb/sample
    depends_on:
      - db
    networks:
      - samplenet
  db:
    image: mongo:3.0.15
    container_name: sample_db
    volumes:
      - ./db:/data/db
    networks:
      samplenet:
        aliases:
          - "sampledb"
networks:
  samplenet:
    driver: bridge

With that all done, you should be able to save the file, run docker-compose up -d in the folder where your docker-compose.yml file is, and watch Docker build and start your environment for you.

If everything completes successfully, you can then go to http://localhost/users and see something like the image below.

Docker Compose Running

Congratulations! You have a complete environment that is defined in your source code. It can be versioned and checked in to source control. This is what people refer to as “Infrastructure as Code.” It also means that recreating this environment on the test, staging and production environments is as easy as running docker-compose up -d on the corresponding machine! I told you good developers are lazy!

You can learn more about Docker Compose and Docker in general from their respective documentation. If you want to learn more about the Dockerfile used in this project, check out part two of this series on the Dockerfile.

As always, if you have any questions or comments about this, or any, of my articles, feel free to hit me up on Twitter or Github.

A Developer’s Guide To Docker – Docker Compose was originally published on the Okta developer blog on October 11, 2017.

Original Link

What is Docker?

Docker is not a new term to most of us; it’s everywhere. But what exactly is Docker?

Quite simply, Docker is a software containerization platform, meaning you can build your application and package it along with its dependencies into a container, and then these containers can be easily shipped to run on other machines.

Okay, but what is containerization?

Containerization, also called container-based virtualization and application containerization, is an OS-level virtualization method for deploying and running distributed applications without launching an entire VM for each application. Instead, multiple isolated systems, called containers, are run on a single control host and access a single kernel.

A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.

So the main aim is to package the software into standardized units for development, shipment, and deployment.

For example, suppose there’s a Linux application written in Scala and R. In order to avoid any version conflicts for Linux, Scala, and R, Docker will just wrap this application in a container with all the right versions and dependencies, and that container can be deployed on any OS or server without any version hassle.

Now, all we need to do is to run this container without worrying about the dependent software and libraries.
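
As a rough sketch (the image name and tag here are hypothetical), the workflow boils down to two commands:

# Build the image once; all versions and dependencies are pinned in the Dockerfile
docker build -t my-scala-r-app:1.0 .

# Run it on any machine with Docker installed; no Scala or R setup needed on the host
docker run --rm my-scala-r-app:1.0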

So, the process is really simple. Each application will run on a separate container and will have its own set of libraries and dependencies. This also ensures that there is process level isolation, meaning each application is independent of other applications, giving developers assurance that they can build applications that will not interfere with one another.

Containers vs. Virtual Machines

Containers are an abstraction at the application layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in userspace. Containers take up less space than VMs (container images are typically tens of MBs in size) and start almost instantly.

As you can see, in the case of containerization there is a host OS, and above that there are containers holding the dependencies and libraries for each application, which makes processing and execution very fast. There is no guest OS here; containers utilize the host’s operating system, sharing relevant libraries and resources as and when needed, unlike virtual machines.

Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries—taking up tens of GBs. VMs can also be slow to boot.

In the case of virtualization, there is a host operating system on which three guest operating systems are running; these are the virtual machines. Running multiple virtual machines on the same host operating system leads to performance degradation, as each has its own kernel and set of libraries and dependencies. This takes up a large chunk of system resources, i.e. hard disk, processor, and especially RAM.

So, that was a quick overview of Docker, containerization, and virtualization.

References:

https://www.edureka.co/blog/docker-tutorial
https://www.docker.com/what-container#/virtual_machines

This article was first published on the Knoldus blog

Original Link

Demystifying the Data Volume: Storage in Docker

What are Volumes, and Why Do We Need Them?

In layman’s terms, volumes are external storage areas used to store data produced by a Docker container. Volumes can be located on the docker host or even on remote machines.

Containers are ephemeral, a fancy way of saying that they have very short lives. When a container dies, all the data it has created (logs, database records, etc.) dies with it. So how do we ensure that data produced by containers is stored? Volumes are the answer to this question. Volumes are used to store the data generated by a container, so even when the container is gone, the data it produced still lives on.
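
A quick sketch of that idea (the names are illustrative): the volume, and the data in it, outlives the container that wrote it.

# Create a named volume and mount it into a container
docker volume create app-data
docker run -d --name writer -v app-data:/var/lib/app my-app:1.0

# Even after the container is removed, the volume and its contents remain
docker rm -f writer
docker run --rm -v app-data:/var/lib/app alpine ls /var/lib/app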

Original Link

A Complete Introduction to Kubernetes — an Orchestration Tool for Containers

Kubernetes is Greek for “Captain” or “Pilot”. Kubernetes was born in Google. It was donated to CNCF in 2014 (open source). It is written in Go language. It focuses on building a robust platform for running thousands of containers in production.

Kubernetes repository is available on GitHub.

What is Kubernetes?

Kubernetes (or just K8s) is an open source orchestration system for Docker containers. It lets us manage containerized applications in a clustered environment. It simplifies DevOps tasks such as deployment, scaling, configuration, versioning, and rolling updates. Most of the distributed applications built with scalability in mind are actually made up of smaller services called microservices and are hosted and run through a container.

A container provides an isolated context in which an app or microservice can run together with its environment. But containers do need to be managed externally and must be scheduled, distributed, and load balanced to support the needs of modern apps and infrastructure. Along with this, data persistence and network configuration make it hard to manage containers; therefore, however powerful containers are, they bring scalability challenges in a clustered environment.

Kubernetes provides a layer over the infrastructure to address these challenges. Kubernetes uses labels as name tags to identify its objects, and it can query based on these labels. Labels are open-ended and can be used to indicate role, name, or other important attributes.

Kubernetes Architecture:

Kubernetes Master:

The controlling services in a Kubernetes cluster are called the master, or control plane, components. They are in charge of the cluster and monitor the cluster, make changes, schedule work, and respond to events.

The Kubernetes Master is a collection of four processes that run on a single node in your cluster, which is designated as the master node. 

  1. Kube-apiserver

    It is the brain of the master and the front end of the master, or control plane. Kube-apiserver implements a RESTful API and consumes JSON via manifest files. Manifest files declare the desired state of the app, like a record of intent, and are validated and deployed to the cluster. It exposes an endpoint (by default on port 443) so that kubectl (the command-line utility) can issue commands/queries and run them against the master.

  2. Cluster Store

    It provides persistent storage and is stateful. It uses etcd, which is distributed, consistent, and watchable. etcd is an open-source distributed key-value store that serves as the backbone of distributed systems by providing a canonical hub for cluster coordination and state management. Kubernetes uses etcd as the “source of truth” for the cluster. It takes care of storing and replicating data used by Kubernetes across the entire cluster. It is written in the Go language and uses the Raft protocol, which helps etcd recover from hardware failures and network partitions.

  3. Kube-controller-manager

    The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. It is the controller of controllers. It watches the shared state of the cluster through the API server and makes changes attempting to move the current state towards the desired state. Examples of controllers that ship with Kubernetes today are the replication controller, endpoints controller, namespace controller, and service accounts controller. When a change is seen, the controller reads the new information and implements the procedure that fulfills the desired state. This can involve scaling an application up or down, adjusting endpoints, and so forth. A replication controller provides a pod template for creating any number of pod copies. It provides the logic for scaling pods up or down and can also be used for rolling deployments.

  4. Kube-scheduler

    This is the process that watches API-server for new pods and assigns workloads to specific nodes in the cluster. It is responsible for tracking resource utilization on each host to make sure that workloads are not scheduled in excess of the available resources.

Kubernetes Node:

The servers that do the actual work are called nodes.

Each node in a cluster runs two processes:

  1. Kubelet

    • the main Kubernetes agent on the node
    • registers the node with the cluster
    • watches the API server for work assignments
    • instantiates pods to carry out the assigned work
    • reports back to the master
    • exposes an endpoint on port 10255 that lets you inspect the specs of the kubelet
  2. Kube-proxy

    It is like the network brain of the node. It is a network proxy that reflects Kubernetes networking services on each node. It ensures every pod gets its own unique IP; if there are multiple containers in a pod, they all share the same IP. It load balances across all pods in a service.

Kubernetes Objects:

  1. Pod

    A pod is the basic building block of Kubernetes and is deployed as a single unit on a node in a cluster. A pod is a ring-fenced environment to run containers. Usually, you will run only one container inside a pod, but in cases where containers are tightly coupled, you can run two or more in a pod. A pod is connected via an overlay network to the rest of the environment. Each pod is assigned a unique IP address, and every container in a pod shares the network namespace, including the IP address and network ports.

  2. Service

    Kubernetes pods are mortal, and when they die they cannot be resurrected. As Kubernetes has to maintain the desired state of the app, when pods crash or go down, new pods will be added, and they will have different IP addresses. This leads to problems with pod discovery, as there is no way to know which pods were added or removed. This is where services come in. A service is like hiding multiple pods behind a single network address. Pods may come and go, but the IP address and ports of your service remain the same. Other applications can find your service through Kubernetes service discovery. A Kubernetes Service:

    • is persistent
    • provides discovery
    • load balances
    • provides VIP layer
    • identifies pods by label selector
  3. Volume

    A volume represents a location where containers can store and access information. On-disk files in a container are ephemeral and will be lost if a container crashes. Secondly, when running containers together in a pod, it is often indispensable to share files between those containers. A Kubernetes volume will outlive any containers that run within a pod, and data is preserved across container restarts. For applications, volumes appear as part of the local file system. Volumes may be backed by different storage backends like local storage, EBS, etc.

  4. Namespace

Namespaces function as a grouping mechanism within Kubernetes. Services, pods, replication controllers, and volumes can easily cooperate within a namespace, and a namespace provides a degree of isolation from the rest of the cluster. Namespaces are intended for use in environments with many users spread across multiple teams or projects; they are a way to divide cluster resources between multiple users. A minimal manifest tying several of these objects together is sketched below.
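
The sketch below is purely illustrative (the namespace, names, labels, and image are made up); it creates a namespace and runs a single-container pod in it with an emptyDir volume mounted:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: team-a
  labels:
    app: web                 # labels let services and queries select this pod
spec:
  containers:
    - name: web
      image: nginx:1.15
      ports:
        - containerPort: 80
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir: {}           # a simple volume that lives as long as the pod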

Conclusion

Kubernetes is exciting!! It is an amazing tool for microservices clustering and orchestration. It is relatively new and under active development. I believe it is going to bring a lot of functional improvements in how a clustered infrastructure is managed.

If you want to get started with deploying containerized apps to Kubernetes, then minikube is the way to go. Minikube is a tool that helps you deploy Kubernetes locally.

Original Link

Top 5 Kubernetes Best Practices From Sandeep Dinesh (Google)

At a recent Weave Online User Group (WOUG), two speakers presented topics on Kubernetes.  Sandeep Dinesh (@SandeepDinesh), Developer Advocate for Google Cloud presented a list of best practices for running applications on Kubernetes. Jordan Pellizzari (@jpellizzari), a Weaveworks engineer, followed up with a talk on lessons learned after two years of developing and running our SaaS Weave Cloud on Kubernetes.

Best Practices for Kubernetes

The best practices in this presentation grew out of discussions that Sandeep and his team had about the many different ways that you can perform the same tasks in Kubernetes. They compiled a list of those tasks and from that derived a set of best practices.

Best practices were categorized into:

  1. Building Containers
  2. Container Internals
  3. Deployments
  4. Services
  5. Application Architecture

#1: Building Containers

Don’t Trust Arbitrary Base Images!

Unfortunately, we see this happening all the time, says Sandeep. People will take a basic image from DockerHub that somebody created—because at first glance it has the package that they need—but then push the arbitrarily chosen container to production.

There’s a lot wrong with this: you could be using the wrong version of code that has exploits, has a bug in it, or worse it could have malware bundled in on purpose—you just don’t know.

To mitigate this, you can run a static analysis tool like CoreOS' Clair or Banyon Collector to scan your containers for vulnerabilities.

Keep Base Images Small

Start with the leanest most viable base image and then build your packages on top so that you know what’s inside.

Smaller base images also reduce overhead. Your app may only be about 5 MB, but if you blindly take an off-the-shelf image with Node.js, for example, it includes an extra 600 MB of libraries you don't need.

Other advantages of smaller images:

  • Faster builds
  • Less storage
  • Image pulls are faster
  • Potentially less attack surface

Use the Builder Pattern

This pattern is most useful for compiled languages like Go and C++, or for TypeScript with Node.js.

In this pattern you'll have a build container with the compiler, the dependencies, and maybe unit tests. Code runs through that first step and outputs the build artifacts. These are combined with any static files, bundles, etc. and placed in a runtime container that may also contain some monitoring or debugging tools.

In the end, your Dockerfile should only reference your base image and the runtime environment container.

#2: Container Internals

Use a Non-Root User Inside the Container

Even if packages have to be installed or updated inside your container as root, you should switch to a non-root user afterwards.

The reason being, if someone hacks into your container and you haven’t changed the user from root, then a simple container escape could give them access to your host where they will be root. When you change the user to non-root, the hacker needs an additional hack attempt to get root access.

As a best practice, you want as many shells around your infrastructure as possible.

In Kubernetes, you can enforce this by setting runAsNonRoot: true in the security context; cluster administrators can also require it cluster-wide through pod security policies.

Make the File System Read-Only

This is another best practice that can be enforced in the security context, by setting readOnlyRootFilesystem: true.
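A minimal sketch of how both settings look in a pod manifest follows; the pod name and image are placeholders, and both fields live in the container's security context:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                    # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:1.0.0   # placeholder image
      securityContext:
        runAsNonRoot: true              # refuse to start if the image would run as root
        readOnlyRootFilesystem: true    # block writes to the container's root file system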

One Process per Container

You can run more than one process in a container; however, it is recommended to run only a single one. This is because of the way the orchestrator works: Kubernetes manages containers based on whether a process is healthy or not. If you have 20 processes running inside a container, how will it know whether they are healthy or not?

To run multiple processes that all talk and depend on one another, you’ll need to run them in Pods.

Don’t Restart on Failure. Crash Cleanly Instead.

Kubernetes restarts failed containers for you, so you should crash cleanly with an error code so that containers can be restarted successfully without your intervention.

Log Everything to stdout and stderr

By default, Kubernetes listens to these pipes and sends the output to your logging service. On Google Cloud, for example, it goes to Stackdriver Logging automatically.

#3: Deployments

Use the “Record” Option for Easier Rollbacks

When applying a YAML file, use the --record flag:

 kubectl apply -f deployment.yaml --record 

With this option, every time there is an update it gets saved to the history of those deployments, which gives you the ability to roll back a change.
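As a sketch of how that recorded history is used later (the deployment name here is hypothetical), rolling back is a matter of a couple of kubectl commands:

 kubectl rollout history deployment/my-deployment
 kubectl rollout undo deployment/my-deployment
 kubectl rollout undo deployment/my-deployment --to-revision=2

The first command lists the revisions (annotated with the command that created them thanks to --record), the second rolls back to the previous revision, and the third targets a specific revision.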

Use Plenty of Descriptive Labels

Since labels are arbitrary key-value pairs, they are very powerful. For example, consider the diagram below with an app named 'Nifty' spread across four containers. With labels, you can select only the backend containers by selecting the backend (BE) label.
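For instance, assuming the pods carry labels such as app=nifty and tier=BE (the exact keys and values are up to you), that backend subset can be selected directly from the command line:

 kubectl get pods -l app=nifty,tier=BE
 kubectl describe pods -l app=nifty,tier=BE

The same selector syntax is what services and deployments use internally to decide which pods they own.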

Use Sidecars for Proxies, Watchers, Etc.

Sometimes you need a group of processes to communicate with one another. But you don’t want all of those to run in a single container (see above “one process per container”) and instead you would run related processes in a Pod.

Along the same lines is when you are running a proxy or a watcher that your processes depend on, for example a database. You wouldn't hardcode the credentials into each container. Instead, you can deploy a proxy as a sidecar that securely handles the connection:

Don’t Use Sidecars for Bootstrapping!

Although sidecars are great for handling requests both outside and inside the cluster, Sandeep doesn’t recommend using them for bootstrapping. In the past bootstrapping was the only option, but now Kubernetes has “init containers.”

In the case of a process running in one container that is dependent on a different microservice, you can use init containers to wait until both processes are running before starting your container. This prevents a lot of errors from occurring when processes and microservices are out of sync.

Basically the rule is: use sidecars for events that always occur and use init containers for one time occurrences.
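A minimal sketch of an init container that waits for a hypothetical database service named db before the main application starts might look like this (images, port, and names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-waiting-for-db
spec:
  initContainers:
    - name: wait-for-db
      image: busybox:1.29
      # Keep retrying until the 'db' service answers on its port, then exit successfully
      command: ["sh", "-c", "until nc -z db 5432; do echo waiting for db; sleep 2; done"]
  containers:
    - name: app
      image: registry.example.com/my-app:1.0.0   # placeholder

Kubernetes only starts the app container once every init container has exited successfully.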

Don’t Use :Latest or No Tag

This one is pretty obvious, and most people are already doing it today. If you don't add a tag to your container image, it will always try to pull the latest one from the repository, and that may or may not contain the changes you think it has.

Readiness and Liveness Probes are Your Friend

Probes let Kubernetes know whether a pod is healthy and whether it should send traffic to it. By default, Kubernetes only checks whether the process is running or not; with probes, you can extend that default behaviour with your own logic.
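A minimal sketch of both probes on a container is shown below; the pod name, image, path, port, and timings are placeholders you would tune for your own application:

apiVersion: v1
kind: Pod
metadata:
  name: probed-app                      # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:1.0.0   # placeholder image
      ports:
        - containerPort: 8080
      readinessProbe:                   # traffic is only routed to the pod while this passes
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:                    # the container is restarted if this keeps failing
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20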

 

#4: Services

Don’t Use type: LoadBalancer

Whenever you add a load balancer to your deployment file on one of the public cloud providers, it spins one up. This is great for high availability and speed, but it costs money.

Use Ingress instead, which lets you load balance multiple services through a single endpoint. This is not only simpler, but also cheaper. This strategy, of course, only works if you are doing HTTP or web traffic; it won't work for UDP- or raw TCP-based applications.
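As a sketch, a single Ingress can fan out to several services by host and path; the hostnames and service names below are invented, and the apiVersion may differ depending on your cluster version:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /api
            backend:
              serviceName: api-service        # hypothetical service
              servicePort: 80
          - path: /
            backend:
              serviceName: frontend-service   # hypothetical service
              servicePort: 80

One ingress controller (and the single load balancer behind it) then serves both services.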

Type: Nodeport Can Be “Good Enough”

This is more of a personal preference and not everyone recommends it. NodePort exposes your app to the outside world on a particular port of a VM. The problem is that it may not be as highly available as a load balancer: if the VM goes down, so does your service.

Use Static IPs, They Are Free!

On Google Cloud this is easy to do by creating global IPs for your ingresses. Similarly, for your load balancers you can use regional IPs. This way, when your service goes down you don't have to worry about your IPs changing.

Map External Services to Internal Ones

This is something most people don't know you can do in Kubernetes. If you need a service that is external to the cluster, you can create a service without a pod selector, for example one of type ExternalName, that maps a name inside the cluster to the external endpoint. You can then call the service by its name and Kubernetes passes you on to it as if it were part of the cluster: Kubernetes treats the service as if it were on the same network, even though it actually sits outside of it.
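A minimal sketch of such a mapping with an ExternalName service is shown below; the service name and external hostname are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.prod.example.com   # hypothetical address outside the cluster

Applications inside the cluster simply connect to external-db, and Kubernetes DNS resolves it to the external hostname.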

#5: Application Architecture

Use Helm Charts

Helm is basically a repository for packaged-up Kubernetes configurations. If you want to deploy MongoDB, for example, there's a preconfigured Helm chart for it, with all of its dependencies, that you can easily use to deploy it to your cluster.

There are many Helm charts for popular software components that will save you a lot of time and effort.
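As a sketch using the Helm 2-style CLI of the time (the release and chart names are examples), deploying MongoDB could be as short as:

 helm repo update
 helm install --name my-mongo stable/mongodb
 helm status my-mongo

The chart pulls in the deployment, service, and storage configuration for you; newer Helm releases use a slightly different command syntax.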

All Downstream Dependencies Are Unreliable

Your application should have logic and error messages to account for any dependencies over which you have no control. To help with managing downstream dependencies, Sandeep suggests using a service mesh like Istio or Linkerd.

Use Weave Cloud

Clusters are difficult to visualize and manage and using Weave Cloud really helps you see what’s going on inside and to keep track of dependencies.

Make Sure Your Microservices Aren’t Too Micro

You want logical components and not every single function turned into a microservice.

Use Namespaces to Split Up Your Cluster

For example, you can create Prod, Dev, and Test environments in the same cluster using different namespaces. You can also use namespaces to limit the amount of resources each one may consume, so that one buggy process can't use up all of the cluster's resources.
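A minimal sketch of that idea, assuming a namespace called dev and quota values chosen purely for illustration, combines a Namespace with a ResourceQuota:

apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"        # total CPU the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"               # cap on the number of pods

Anything scheduled into dev now has to fit inside these limits, so a runaway workload can't starve the rest of the cluster.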

Role-Based Access Control

Enact proper access control to limit who can do what in your cluster as a best-practice security measure.
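As a sketch of RBAC in practice (the names, namespace, and user are hypothetical), a Role grants a narrow set of verbs in one namespace and a RoleBinding attaches it to a subject:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]     # read-only access to pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane                          # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Cluster-wide permissions are granted the same way with ClusterRole and ClusterRoleBinding.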

Lessons Learned from Running Weave Cloud in Production

Next, Jordan Pellizzari spoke about what we've learned from running and developing Weave Cloud on Kubernetes for the past two years. We currently run on AWS EC2 and have about 72 Kubernetes deployments running on 13 hosts and across about 150 containers. All of our persistent storage is kept in S3, DynamoDB, or RDS, and we don't keep state in containers. For more details on how we set up our infrastructure, refer to Weaveworks & AWS: How we manage Kubernetes clusters.

Challenge 1. Version Control for Infrastructure

At Weaveworks, all of our infrastructure is kept in Git, and when we make an infrastructure change, it is done via pull request just like a code change. We've been calling this GitOps and we have a number of blogs about it. You can start with the first one: "GitOps – Operations by Pull Request".

At Weave, Terraform scripts, Ansible and of course Kubernetes YAML files are all in Git under version control.

There are a number of reasons why it's best practice to keep your infrastructure in Git:

  • Releases are easily rolled back
  • An auditable trail of who did what is created
  • Disaster recovery is much simpler

Problem: What Do You Do When Prod Doesn’t Match Version Control?

In addition to keeping everything in Git, we also run a process that checks the differences between what's running in the prod cluster and what's checked into version control. When it detects a difference, an alert is sent to our Slack channel.

We check differences with our open source tool called Kube-Diff.

Challenge 2. Automating Continuous Delivery

Automate your CI/CD pipeline and avoid manual Kubernetes deployments. Since we are deploying multiple times a day, this approach saves the team valuable time as it removes manual error-prone steps. At Weaveworks, developers simply do a Git push and Weave Cloud takes care of the rest:

  • Tagged Code runs through CircleCI tests and builds a new container image and pushes the new image to the registry.
  • The Weave Cloud ‘Deploy Automator’ notices the image, pulls the new image from the repository and then updates its YAML in the config repo.  
  • The Deploy Synchronizer detects that the cluster is out of date, pulls the changed manifests from the config repo, and deploys the new image to the cluster.

 

Here is a longer article (The GitOps Pipeline) on what we believe to be the best practices when building an automated CICD pipeline.

In Summary

Sandeep Dinesh provided an in-depth overview of 5 Best Practices for creating, deploying and running applications on Kubernetes. This was followed by a talk by Jordan Pellizzari on how Weave manages its SaaS product Weave Cloud in Kubernetes and the lessons learned.

Watch the video in its entirety:

For more talks like these, join the Weave Online User Group.

Original Link

Ultimate Guide to Red Hat Summit 2018 Labs: Hands-on with Linux Containers

This year you've got a lot of decisions to make before you go to Red Hat Summit in San Francisco, CA from 8-10 May 2018.

There are breakout sessions, birds-of-a-feather sessions, mini sessions, panels, workshops, and instructor led labs that you’re trying to juggle into your daily schedule. To help with these plans, let’s try to provide an overview of the labs in this series.

This first article starts with a focus on Linux containers, where you can get hands-on with everything from container security and containerizing applications to developing container solutions and digging into container internals.

The following hands-on labs are on the agenda, so let’s look at the details of each one.

Linux Container Internals: Part 1 and Part 2

Have you ever wondered how Linux containers work? How they really work, deep down inside? Do you have questions like:

– How do sVirt/SELinux, SECCOMP, namespaces, and isolation really work?

– How does the Docker Daemon work?

– How does Kubernetes talk to the Docker Daemon?

– How are container images made?

In this lab, we'll answer all of these questions and more. If you want a deep technical understanding of containers, this is the lab for you. It's an engineering walk through the deep, dark internals of the container host, what's packaged in the container image, and how container orchestration works. You'll get the knowledge and confidence it takes to apply your current Linux technical knowledge to containers.

Presenters: Scott McCarty, Red Hat; John Osborne, Red Hat; Jamie Duncan, Red Hat

A Practical Introduction to Container Security (3rd Ed.)

Linux containers provide convenient application packing and runtime isolation in multi-tenant environments. However, the security implications of running containerized applications are often taken for granted. For example, today it is very easy to pull container images from the Internet and run them in the enterprise without examining their content and authenticity.

In this lab, you’ll complete a series of low-level, hands-on exercises aimed at understanding the concepts, challenges, and best practices associated with deploying containers in a secure fashion. Topics include registry configuration, SELinux, capabilities, and SECCOMP profiles, along with image inspection, scanning, and signing. This third edition may be based on CRI-O, depending on Red Hat Enterprise Linux feature release time frames.

Presenters: Bob Kozdemba, Red Hat, Inc.; Daniel Walsh, Red Hat; Aaron Weitekamp, Red Hat

Containerizing Applications—Existing and New

In this hands-on lab, based on highly rated labs from Red Hat Summit 2016 and 2017, you’ll learn how to create containerized applications from scratch and from existing applications.

Learn how to build and test these applications in a Red Hat OpenShift environment, as well as deploy new containers to Red Hat Enterprise Linux Atomic Host. You’ll quickly develop a basic containerized application, migrate a simple popular application to a containerized version, and deploy your new applications to container host platforms. You’ll get a feel for the different container host platforms and learn how to choose the best one for your container needs. And finally, you’ll learn what to consider and what tools you can use when implementing a containerized microservices architecture.

Presenters: Langdon White, Red Hat; Scott Collier, Red Hat; Tommy Hughes, Red Hat; Dusty Mabe, Red Hat

Develop IoT Solutions with Containers and Serverless Patterns

The Internet of Things (IoT) is expected to generate a diverse range of data types that will need different mechanisms to process and trigger actions. Combined with other considerations, like total cost of ownership, required skill set, and operations, the back end solution may be composed of different architectures and patterns.
In this hands-on lab, you'll learn how to build a containerized, intelligent IoT solution that can process different data types using the elasticity of a container platform, such as Red Hat OpenShift Container Platform, and serverless architectures to execute on-demand functions in response to IoT events. Using the qualities of each architectural style, developers can focus on writing code without worrying about provisioning and operating server resources, regardless of the scale. The methodologies exemplified within this lab can be used by companies looking to use their cloud computing infrastructure to build complex and robust IoT solutions.

Presenters: Ishu Verma, Red Hat, Andrew Block, Red Hat

Stay tuned for more Red Hat Summit 2018 Labs and watch for more online under the tag #RHSummitLabs.

Original Link

10 Steps to Cloud Happiness (Step 10): Agile Cloud Service Integration

This is the tenth step on our journey introducing a path to cloud happiness, one that started four months ago and it’s been a vast array of content to help you discover the joys of cloud development.

It's the pinnacle of your climb to the top where you find cloud happiness, having led you from the basics to the more advanced solutions as you learned how to leverage development, containers, a container platform, and more.

As previously discussed in the introduction, it’s possible to find cloud happiness through a journey focused on the storyline of digital transformation and the need to deliver applications in to a cloud service.

Application delivery and all its moving parts such as containers, cloud, platform as a service (PaaS) and a digital journey requires some planning to get started. There’s nothing like hands-on steps to quickly leverage real experiences as you prepare.

In earlier steps you covered how to get a cloud, the use of a service catalog, how to add cloud operations functionality, centralizing business logic, process improvement, the human aspect, a retail web shop, curing travel woes, and explored financial solutions, so what’s next?

Agile Cloud Service Integration

In this final step you'll reach for the stars, deploying a six-container solution, and you'll be given a full backing workshop that takes you through the solution and its correct deployment step by step.

The project showcases application development in the Cloud leveraging services, containers and cloud integration. Technologies like containers, Java, PHP, .NET, business rules, services, container platforms, integration, container integration and much more are presented for a hands-on experience.

Tasks include installing OpenShift Container Platform 3.7, JBoss Enterprise Application Platform (EAP), JBoss Business Rules Management System (BRMS), several containerized web services and testing the solution using a REST client.

You can follow the instructions provided to get this up and running on your local machine with just 6 GB of memory; the installation identifies any missing requirements and points you to where they can be found.

The installation is in several parts, first installing your cloud as covered in Step 1 – Get a Cloud.

Second, you’ll deploy a container with JBoss EAP and JBoss BRMS with a rules project for determining travel booking discounts.

Install JBoss BRMS on OpenShift

  1. (OPTIONAL if you did Step 1) First ensure you have an OpenShift container-based installation, such as the one from Step 1 – Get a Cloud, up and running.
  2. Download and unzip this demo.
  3. Download JBoss EAP & JBoss BRMS, add to installs directory (see installs/README).
  4. Run ‘init.sh’ or ‘init.bat’ file. ‘init.bat’ must be run with Administrative privileges:

 # The installation needs to be pointed to a running version
 # of OpenShift, so pass an IP address such as:
 #
 # $ ./init.sh 192.168.99.100    # example for OCP

Now log in to JBoss BRMS and start developing containerized rules projects (the address will be generated by the init script):

  • http://destinasia-rules-demo-appdev-in-cloud.192.168.99.100.nip.io/business-central ( u:erics / p:jbossbrms1! )

After that, you'll move on to installing four services using Ansible automation. Once they are up and running, the final deployment is the agile integration service, which provides a single endpoint for submitting travel bookings to this application.

Ansible Playbooks for Automated Service Deployment on OpenShift

Click on the link to the instructions for Ansible Playbooks Service Deployment to deploy:

  1. Rules from container JBoss BRMS to xPaaS Decision Server
  2. .Net service to container
  3. Java service to xPaaS EAP Server
  4. PHP service to container
  5. Fuse service to xPaaS Integration Server

Once all of this has been successfully deployed, it's time to test the results by submitting a travel booking through a browser-based REST API client. The process is described in the project readme.

Not only is this experience outlined in the project, there’s also a step-by-step hands-on workshop online that you can work through.

This workshop has you taking on the role of lead developer of the Destinasia travel discount project to set up a development environment in the Cloud for container-based application services deployments. Once it’s set up, you’re shown how to validate the services using end-to-end testing.

Rest of the story

If you are looking for the introduction to the 10 steps series or any of the individual steps:

  1. Get a Cloud
  2. Use a Service Catalog
  3. Adding Cloud Operations
  4. Centralize Business Logic
  5. Real Process Improvement
  6. Human Aspect
  7. Retail Web Shop
  8. Curing Travel Woes
  9. Exploring Financial Services
  10. Agile Cloud Service Integration

This completes our walk through the 10 Steps to Cloud Happiness! Thanks for coming along on our journey. Are you ready to start tackling the various challenges of your very own application delivery in the cloud as part of your digital journey?

Original Link

What Is Kubernetes and How Can Your Enterprise Benefit From This DevOps Trend?

Container use is exploding right now. Developers love them and enterprises are embracing them at an unprecedented rate.

If your IT department is looking for a faster and simpler way to develop applications, then you should be considering container technology. But what are containers and what problems do they address? Where does Kubernetes fit into the container and cluster management space? Why is it presenting enterprises with implementation challenges? And, what considerations should you bear in mind as you explore whether containers and cluster management tools are right for your application development needs?

Here are some essentials that every enterprise needs to know about containers, container cluster management, the pros and cons of Kubernetes, and how to get the most out of your Kubernetes implementation.

What Are Containers and What Problems Do They Address?

When application developers test software, they must ensure it runs reliably when moved from one computing environment to another. This could be from a staging environment to production, or from an on-premises server to a virtual machine in the cloud. The problem is that different environments are rarely identical: the software may be different, and the network and security environments are almost certainly going to be different.

Containers address this problem by bundling everything that makes up your development environment into one package. This introduces a level of environmental consistency and lets developers deploy applications quickly, reliably, and in the same manner, regardless of the deployment environment. By containerizing the application platform and its dependencies, differences in operating system distributions and the underlying infrastructure are abstracted away. In fact, with containers, you can forget about the infrastructure altogether.

Modular, executable standalone packages for software, containers include every element you’ll ever need to run an application – from code to settings. This portability makes containers a great asset to organizations thinking about a multi-cloud strategy. 

Containers can also help you prepare for a proper DevOps implementation and its promise of efficient, rapid delivery. With containers, you can update and upgrade, without the legacy headache of starting over each time. Thanks to containers, implementing new applications and efficiencies into existing systems isn’t as hard as you think anymore.

Lighter than virtual machines and less resource-intensive, containers are seeing adoption go through the roof. A recent survey shows that 94% of respondents had either investigated or used some container technology over the last 12 months.

What Companies Are Leading the Way With Containers?

Docker is now the de facto container technology. With a mature technology stack, a strong open source ecosystem, compatibility with any platform, and great timing (Docker launched just as the popularity of virtual machines was waning), Docker has left its competitors – rkt, OpenVZ, and LXC – way behind. Immutable and independent from the underlying infrastructure, Docker runs the same way on a developer machine as it does in a production environment.

How Do Container Management Tools Help Efficiently Manage Containers?

To effectively implement this DevOps approach to application development and effectively manage container technologies and platforms at an enterprise level, you need the right tools.

This is where container cluster management or container orchestration solutions come into play. As enterprises expand their use of containers into production, problems arise with managing which containers run where, dealing with all those containers, and ensuring streamlined communication between containers across hosts. These scaled-out containers are called “clusters.”

Container cluster management tools provide an enterprise framework for integrating and managing containers at scale and ensure essential continuity as you embrace DevOps. Basically, they can help you define your initial container deployment while taking care of mission-critical IT functions on the back end such as availability, scaling, and networking – all of which are streamlined, standardized, and unified.

What Enterprise Container Cluster Management Solutions Are Available?

There are many options for container cluster management. Kubernetes, however, is winning the container war and is now the most widely used, open source solution. With 15 years of Google development behind it and an enviable open source community (including Red Hat, Canonical, CoreOS, and Microsoft), Kubernetes has matured faster than any other product on the market.

Kubernetes hits the sweet spot for container cluster management because it gives developers the tools they need to quickly and efficiently respond to customer demands while relieving the burden of running applications in the cloud. It does this by eliminating many of the manual tasks associated with deploying and scaling your containerized applications so that you can run your software more reliably when moved from one environment to another. For example, you can schedule and deploy any number of containers onto a node cluster (across public, private, or hybrid clouds) and Kubernetes then does the job of managing those workloads so they are doing what you intend.

Thanks to Kubernetes, container tasks are simplified, including deployment operations (horizontal auto-scaling, rolling updates, canary deployments) and management (monitoring resources, application health checks, debugging applications, and more).

Yet, Kubernetes still poses high entry barriers

But, and there's always a "but." Despite its many benefits, as we previously discussed when offering tips on choosing the best Kubernetes management platform, Kubernetes is still relatively difficult to set up and use. Managing Kubernetes is a time-consuming process requiring highly skilled staff and a potentially large monetary commitment. To the untrained eye, Kubernetes looks like it can be up and running in hours or days, but this is far from true for production environments, where additional functionality is needed – security, high availability, disaster recovery, backups, and maintenance – everything you need to make Kubernetes "production-ready."

The result is that organizations that go the Kubernetes route quickly realize that they are unable to deliver it without bringing in skilled and costly external resources.

So, what are your options? The answer lies in Kubernetes management tools. Designed to simplify Kubernetes management for the enterprise, even if your systems are rigid, popular solutions include Tectonic, Red Hat's OpenShift Container Platform, Rancher, and Kublr.

How to Choose the Right Kubernetes Management Platform

There are a number of things to consider as you choose your Kubernetes management platform for your enterprise, including:

  • Production-readiness – Does it provide the features you need to fully automate Kubernetes configuration, without the configuration hassles? Does it have enterprise-grade security features? Will it take care of all management tasks on the cluster – automatically? Does it provide high-availability, scalability, and self-healing for your applications?
  • Future-readiness – Does the platform support a multi-cloud strategy? Although Kubernetes lets you run your apps anywhere and everywhere without the need to adapt them to the new hosting environment, be sure your Kubernetes management platform can support these capabilities so you can configure them when you need them in the future.
  • Ease of management – Does it incorporate automated intelligent monitoring and alerts? Does it remove the problem of analyzing Kubernetes’ raw data so that you have a single pane of glass view into system status, errors, events, and warnings?
  • Support and training – As your enterprise ramps up its container strategy, will your Kubernetes management platform provider assure you of 24×7 support and training?

Of all the available options, only a few check each of these boxes. Kublr, for instance, is a cost-effective and production-ready platform that accelerates and streamlines the setup and management of Kubernetes. With it, you gain a self-healing, auto-scaling solution that brings your legacy systems to the cloud on a single engine, while you seamlessly maintain, rebuild, or replace them in the background. The result is dynamism, flexibility, and unmatched transparency between modules. It's a win-win.

How to Choose the Right Kubernetes Management Platform Vendor

As you think about and plan your Kubernetes enterprise strategy, educate yourself about the hurdles along the way and the challenges and misconceptions about Kubernetes. Find out what you should be looking for in a Kubernetes platform, spend some time doing a Kubernetes management platform comparison. Finally, see for yourself how automation tools can provide the production-readiness (the single most important feature), future-readiness, ease-of-management, and the support you need to use Kubernetes, without the management overhead.

Original Link

KISS Kubernetes: Building Your First Container Management Cluster

In the space of a few years, container platforms like Docker have become mainstream tools for deploying applications and microservice architectures. They offer gains in speed, efficiency, and security. They also fit well with another rising star, DevOps. But container platforms typically do not offer tools for managing containers at scale. Manual management processes may work for a few containers and the apps or microservices they contain, but they rapidly become unworkable as the number of containers rises.

Spotting the problem, or indeed the opportunity to improve matters, Google created Kubernetes to automate container management. Kubernetes (Greek for “pilot (of a ship)”) deploys, scales, and manages “boatloads” of containers for you. The building blocks for the Kubernetes technology are reasonably straightforward, at least for initial working configurations.

In this article, we go through the main steps of getting your first Kubernetes management cluster up and running. We use the KISS (Keep It Simple, Stupid!) approach to focus on the key points. More advanced aspects are left as subjects for other posts. We also assume you have a basic understanding of containers, images, and container platforms like Docker or Rocket.

Kubernetes Concepts 101

A common understanding of terms is always a good idea. Here’s a mini-glossary for Kubernetes.

  • Master. The server or machine running Kubernetes and directing the container management activities on other servers.
  • Nodes. The other servers, also known as slaves or minions, running Kubernetes components like agents and proxies, as well as pods (see below) that hold the containers. Nodes take their orders from, and report back to, the master.
  • Pod. The smallest manageable unit for Kubernetes. A pod may contain one container, several containers, a complete app or part of an app. In other words, a pod is also flexible.
  • Service. An interface (endpoint) for access to one or more pods on a node. Acts as a proxy and does load balancing across a group of replicated pods. Pods may come and go, but a service is designed to stay put.
  • Replication Controller. Management functionality for ensuring deployment of the required number of pods. Will automatically create or eliminate pods, as needed to maintain the required state. Newer functionality (Kubernetes “deployment”) now offers an advanced alternative to replication controllers, but for the sake of simplicity, we still discuss them here.
  • Kubectl. A command line interface for you to tell Kubernetes what to do.

Kubernetes terminology extends further, but this is enough for us to get started.

Getting and Installing Kubernetes

Google made Kubernetes freely available and free to use. You can:

  • Download binaries for installation in a Linux or similar environment (Linux running in Windows, etc.)
  • Download a version called Minikube to automatically install a master and a node on the same machine (possibilities include Linux, Mac OS X and Windows).

A Minikube installation can greatly simplify things. Initial installation is often much easier. Master-node communication is done within the same (virtual) machine, so network configuration is not a factor. Minikube also conveniently includes container runtime functionality (the Docker engine). Later, you may well want the master on one machine and nodes on others. To start with, however, the Minikube installation for Mac OS X as an example can be as simple as running the following two commands:

brew cask install minikube
brew install kubectl

You can also try out this Kubernetes online server that lets you launch Kubernetes and try out its commands, even before you proceed to an installation of your own.

After installation, starting Minikube is simple too:

minikube start

Kubectl and Apiserver

When Kubernetes is installed (via Minikube or another way), it makes an API available through which you can issue different Kubernetes commands. The component for this is called apiserver and it is part of the master. Kubectl, the CLI, communicates with apiserver. Kubectl can communicate locally, as in Minikube, or remotely, for example when the Kubernetes master is running on another machine or in the cloud. So, when you’re ready, you can simply point kubectl to a remote installation from the same machine on which you started with Minikube. Of course, by that stage you may be accessing apiserver programmatically anyway, rather than via the CLI.
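As a sketch, switching kubectl between the local Minikube cluster and a remote one is just a matter of contexts (the remote context name here is invented):

 kubectl config get-contexts
 kubectl config use-context minikube
 kubectl config use-context my-remote-cluster

Every kubectl command after the switch is sent to the apiserver of the selected cluster.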

Starting a Kubernetes Pod

A pod holds one or more containers. Pods are the ultimate management goal of Kubernetes, so let's make a pod and start managing it. A simple way to do this is to make a YAML file with the basic pod configuration details inside. Afterwards, we'll use kubectl to create the pod, simply giving it the file name that points to the pod's configuration. The configuration file ("pod1.yml" for instance) will have content like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
    - name: my-app
      image: my-app
      ports:
        - containerPort: 80

The kubectl command to make the pod looks like this:

kubectl create -f pod1.yml

We can check that the pod now exists by entering:

kubectl get pods

To find out more about the range of kubectl commands available, simply enter: kubectl

Creating a Kubernetes Service

Next, we’ll make a Kubernetes service. The service will let us address a group of replicated pods as one entity. It will also do load balancing across the group. We can use the same approach as for making a pod, i.e. make another YAML file (“service1.yml”), this time with the service configuration details. The contents of the file will look something like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
    version: v1
  ports:
    - protocol: TCP
      port: 4000

The kubectl command to make the service looks like this:

kubectl create -f service1.yml

We can check that the service now exists by entering:

kubectl get svc

Making a Replication Controller

We’re now ready for some Kubernetes automation, albeit on a local basis. The replication controller for a group of replicated pods makes sure that the number of pods you specify is also the number of pods running at any given time. Once again, we can create another YAML file (“rc1.yml”) along the lines of the following:

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-rc
  labels:
    app: my-app
    version: v1
spec:
  replicas: 8
  selector:
    app: my-app
    version: v1
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
        - name: my-app
          image: (path)/my-app:1.0.0
          ports:
            - containerPort: 4000

The kubectl command to make the replication controller looks like this:

kubectl create -f rc1.yml

When you now use the command:

kubectl get pods

you will see eight pods running, because of the "replicas: 8" instruction in the "rc1" YAML file. Kubernetes also uses the label selectors in the YAML files we have shown here to associate service1 and rc1, so that anyone addressing service1 will now be routed by service1 to one of the eight pods created by rc1.

The replication controller keeps track of each pod via a label that Kubernetes gives the pod, in the form of a key/value pair. The Kubernetes master keeps all such labels in an etcd key value store. After you have started the replication controller, you can try deleting (kubectl command “delete”) one of the replicated pods to see how the replication controller automatically detects the deletion because a known label has disappeared, and makes a new replicated pod to replace the deleted one.
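A quick way to watch this self-healing in action is sketched below; the generated pod name is just an example, and yours will differ:

 kubectl get pods
 kubectl delete pod my-rc-x7k2p
 kubectl get pods

Within moments the replication controller notices the missing label and the second listing shows a freshly created replacement pod.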

Setting Up Separate Kubernetes Nodes

So far, we’ve done everything on the same local machine. This is good for seeing how to start using Kubernetes. It is also a useful environment for development, so the local one-master-one-node cluster that we have set up already brings benefit.

The next step is to distribute Kubernetes management of containers over multiple nodes that are also remote from the master. This allows you to make use of additional Kubernetes functionality, like its ability to decide which servers to use for which containers, making efficient use of server resources as it goes. The Kubernetes components in the master are the apiserver, the etcd key value store, a scheduler, and a controller-manager. For each node, the components are an agent (kubelet) for communicating with the master, a container runtime component (like Docker), and a proxy (a Kubernetes load-balancing service).

The main steps to split out the master and the nodes are:

  • Make sure the system with the master and the node systems can resolve each other’s address (for example, by adding /etc/hosts entries for Linux systems)
  • Configure the apiserver and etcd components to listen on the network connecting the master and the nodes
  • Install the kubernetes-node package on each node
  • Install and configure tunneling (such as flannel) for inter-node communication.

More Advanced Steps

Further possibilities with Kubernetes include automated rollover and rollback for new container images, the use of a graphical dashboard instead of a command line interface, and the addition of monitoring tools. Kubernetes also continues to be actively supported and developed, so expect new functionality and possibilities to arrive at regular intervals to supplement what we have discussed here. Sooner or later, our KISS approach will have to be modified, too. Simplicity can only take you so far, but at least far enough to already get the benefit of a local Kubernetes management cluster, as we have described in this article.

Original Link

The Real Reason Red Hat Is Acquiring CoreOS

Last week, enterprise open-source leader Red Hat announced it was acquiring CoreOS, an up-and-coming player in the red-hot container marketplace.

Superficially, the motivation for this deal is straightforward: Red Hat needs to round out its container story, and CoreOS fits the bill.

However, as with most of the enterprise infrastructure market, the vendor’s motivations are more complex – as is everything else about the world of containers.

Some might even say that complexity is the point.

Making Containers Enterprise-Ready

Since Docker, Inc. brought containers to the forefront of enterprise infrastructure software innovation back in 2014, the community of both vendors and enterprise developers have been struggling to implement containers in true enterprise scenarios.

Among the missing pieces: container orchestration and container management. Orchestration provides companies with the ability to deploy containers at scale, handling the ins and outs of the elastic scaling essential to the container value proposition.

Management complements the orchestration value proposition, providing visibility and control into orchestration environments as well as added security and other capabilities essential for running containers at the enterprise level.

Leading the container orchestration charge is Kubernetes, an open source effort largely out of Google. Docker has its own orchestration tool dubbed Swarm, but Kubernetes has the edge in terms of product maturity, and has established itself as the leader via an increasingly robust open source ecosystem.

Kubernetes, however, does not directly address the complexities of container management – and it’s this niche that CoreOS sought to fill with its Tectonic product. “Tectonic combines Kubernetes, the leading container management solution, with everything needed to run containers at scale,” explains the CoreOS Web site. “That means the best open source components, battle-tested security systems, and fully automated operations. Tectonic is enterprise Kubernetes.”

The Complexity Challenge

If the layers of technology necessary for containers to meet enterprise requirements sound complex, you’re correct – and containers’ complexity is itself an area of some controversy. “Unlike the platform itself, the routine pre-requisite tasks needed to build a Kubernetes cluster are complex and hard,” says Rob Hirschfeld, CEO and cofounder of RackN. “Multi-node operations [are] hard: that’s why we want platforms like Kubernetes.”

Kubernetes, however, doesn’t make containers any less complicated. “The truth is that Kubernetes is complex. It can be a challenge to get up and running — there’s no denying it. But this very complexity means that there’s an argument to be made in its favor,” says Matt Rogish, Founder at ReactiveOps. “[Amazon] ECS and Docker Swarm seem simpler on the surface, but they both have more accidental complexity — and they foist that complexity onto you,” he continues. “Kubernetes, in turn, has low accidental complexity and high essential complexity (the complexity needed to achieve the things you actually want to achieve).”

The addition of a container management layer like CoreOS Tectonic to Kubernetes doesn’t reduce its complexity, either. Rather, it helps organizations manage it. “The next era of technology is being driven by container-based applications that span multi- and hybrid cloud environments, including physical, virtual, private cloud and public cloud platforms,” explains Paul Cormier, President of Products and Technologies at Red Hat. “We believe this acquisition cements Red Hat as a cornerstone of hybrid cloud and modern app deployments.”

Complexity Déjà Vu: OpenStack

Open source enterprise infrastructure software of enormous complexity is nothing new, of course. Take OpenStack, for example. This private cloud infrastructure initiative has so many moving parts and such a diverse, crowded ecosystem that it has earned a reputation as being extraordinarily complicated and difficult to work with. “The reality is that all multi-node clusters suffer from the same complexity problem. We’ve heard the same thing about OpenStack for years,” RackN’s Hirschfeld says.

Much of the attention directed at OpenStack over the last few years has thus predictably shifted to Kubernetes and the rest of the container community – and today, OpenStack has become part of the complexity that technology like CoreOS Tectonic must manage. “Tectonic is the universal Kubernetes solution for deploying, managing and securing containers anywhere and will unite the benefits of OpenStack with the container-based tooling of Kubernetes,” according to the CoreOS Web site. “With CoreOS by your side, OpenStack will be easier to manage and deploy using the best of container infrastructure.”

The CoreOS Web site continues: “OpenStack has a reputation for complexity that can sometimes rival its power. Kubernetes cluster orchestration makes OpenStack much easier to deploy and manage.”

For Red Hat, OpenStack’s complexity is a problem it can help solve for its customers. “Containers enable application portability across the hybrid cloud, so today customers are deploying their applications in different footprints: in public clouds like Amazon, Azure, and Google, on-premises on platforms like VMWare and OpenStack, but also on bare-metal servers,” explains Joe Fernandes, Red Hat’s Senior Director of Product Management at OpenShift. “What we’ve been doing with OpenShift and with our investment with Kubernetes and containers is building that abstraction so that applications can be deployed efficiently across all these footprints.”

Alex Polvi, CEO of CoreOS, puts a finer point on this topic. “By running OpenStack as an application on Kubernetes we will be able to pull together the entire data center into a single platform that has been proven by hyperscale giants,” Polvi says.

Red Hat’s Open Source Strategy

Red Hat’s business model centers on support and services for essentially free open source software. Yet, while CoreOS built its technology on open source, Tectonic also includes proprietary code as well. “We wanted it to be really clear that CoreOS is all about open source projects and collaboration, even if that means our competitors can compete with us, that we took major effort in keeping the two brands separate,” explained Kelsey Hightower back in 2015, when he was Developer and Advocate at CoreOS. Hightower is now Staff Developer Advocate at Google. “There is coreos.com for ‘Open Source Projects for Linux Containers,’ and tectonic.com that combines those projects in a commercial offering. There are some non-open bits in the commercial offering, but they don’t conflict with the opensource projects.”

For its part, Red Hat has been a major contributor to the Kubernetes effort all along. “Red Hat was early to embrace containers and container orchestration and has contributed deeply to related open source communities, including Kubernetes, where it is the second-leading contributor behind only Google,” the company explains in a press release. “Now with the combination of Red Hat and CoreOS, Red Hat amplifies its leadership in both upstream community and enterprise container-based solutions.”

As to whether Red Hat will open source the bits of Tectonic that are currently proprietary, the vendor plays its cards closer to the vest. “Most of CoreOS’s offerings are already open source today,” explains a Red Hat FAQ. “Red Hat has long shown its commitment to open-sourcing the technology it acquires when it is not open source, and we have no reason to expect a change in this approach.”

Putting the Pieces Together

As an open source vendor, Red Hat doesn’t make money from intellectual property in its software – and thus, the value of CoreOS’s IP has little to do with the acquisition.

This story, in fact, is more about people – not simply the 130 people at CoreOS, although in many ways this deal is an ‘acquihire’ – but also leveraging Red Hat’s team of professionals to provide an increasingly comprehensive services offering to its enterprise customers.

Competitively, this deal is less about IBM and Oracle, Red Hat’s traditional competitors, and more about positioning it against Docker, Inc. “Their union to me signifies a merger of talent…that strengthens Red Hat’s presence in the enterprise market of OpenShift Enterprise against Docker’s Docker Enterprise Edition,” explains Will Kinard, CTO of BoxBoat.

OpenShift is Red Hat’s Platform-as-a-Service offering – and it’s likely that some of the CoreOS technology and human expertise will find their way into the OpenShift product and the OpenShift team at Red Hat, respectively.

Janakiram MSV from Janakiram & Associates and fellow Forbes contributor agrees. “In the enterprise segment, Red Hat is one of the key competitors of Docker, Inc.,” Janakiram explains. “This acquisition puts pressure on Docker, Inc, which has raised over $240 million of funding from a variety of investors. It has to move fast in acquiring enterprise customers to drive adoption of its commercial products.”

For Red Hat’s customers, however, the battle is over talent – a pattern that both OpenStack and Kubernetes have followed in turn. “Customers didn’t have OpenStack expertise, they knew they wanted it,” says Jon Keller, Field CTO for Technologent. “Kubernetes is the same. It’s such a good fit because they literally aren’t going to be able to hire enough people to do it themselves.”

The fact that the container ecosystem is so complex, therefore, is actually a plus for Red Hat – especially within the context of hybrid IT, which adds additional layers of complexity. “We believe this acquisition cements Red Hat as a cornerstone of hybrid cloud and modern app deployments,” Red Hat’s Cormier concludes.

In the final analysis, Red Hat’s customers should come out the winners. “We think our largest customers will benefit from this,” adds Red Hat VP and General Manager Ashesh Badani.

The addition of the CoreOS technology and team to Red Hat’s already extensive expertise with Kubernetes, in the overall context of hybrid IT, gives it perhaps the most credible and comprehensive modern enterprise infrastructure today. As the entire container ecosystem matures, Red Hat’s dominance should only get stronger.

Intellyx publishes the Agile Digital Transformation Roadmap poster, advises companies on their digital transformation initiatives, and helps vendors communicate their agility stories. As of the time of writing, IBM, Microsoft, and VMWare are Intellyx customers. None of the other organizations mentioned in this article are Intellyx customers. 

Originally appeared at Forbes.com.

Original Link

Docker for Beginners Part 1: Containers and Virtual Machines

Hello!

This is Part 1 of the Docker series, Docker for Beginners in 8 Parts. In this post we're going to explore the differences between Containers and Virtual Machines!

  • Part 1 – Differences between Containers and Virtual Machines
  • Part 2 – Overview of Docker Installation for Mac and Ubuntu
  • Part 3 – Docker Images and Containers
  • Part 4 – Exploring Docker Images in Details
  • Part 5 – Exploring Docker Containers in Details
  • Part 6 – Building Custom Docker Images with Dockerfile
  • Part 7 – Pushing our Great Docker Image to Docker Hub
  • Part 8 – Keeping MongoDB Data with Docker Volumes

Before jumping into hacking on Docker, let's explore a few differences between Containers and Virtual Machines. Actually, we should understand what a Container is and what a Virtual Machine is even before comparing them.

It's common to compare them, and in theory, as the Internet sometimes says, Containers are better than Virtual Machines.

Although you can run your apps in "almost" the same way with both technologies, they're different, and sometimes you can't simply compare them, since one can be better than the other depending on your context. Even more: they can be used together! They're not even enemies!

Let’s jump into the fundamental concepts of both technologies.

Applications Running in a Real Computer

Before starting the comparison, let's take a step back and remember how a classical application runs on a real computer.

To make the example more realistic, imagine an application that has 3 main components that should run together:

  • MySQL Database
  • Nodejs Application
  • MongoDB Database

As you can see, we need to execute 3 different applications. We can run these applications directly on a real computer, using its Operating System (let's say Linux Ubuntu) as below:

Notice that:

Server: the real physical computer that runs an operating system

Host OS: the operating system running on the server, in this case Linux Ubuntu

Applications: the 3 applications running together on the operating system

But you may face the challenge of getting these 3 applications running isolated from each other, each with its own operating system. Imagine that:

  • MySQL should run on Linux Fedora
  • Nodejs should run on Linux Ubuntu
  • MongoDB should run on Windows

If we follow the approach above, we end up with the following architecture using 3 real computers:

Hmm, that doesn't seem good, because it is too heavy. Now we're working with 3 physical machines, each with its own operating system, and besides that they must communicate with each other.

Virtual Machines enter the game to create a better isolated environment without using hundreds of real computers!

Virtual Machines

Long story short:

Virtual Machines emulate a real computer, virtualizing its hardware to execute applications while running on top of a real computer.

Virtual Machines can emulate a real computer and can execute applications separately. To emulate a real computer, virtual machines use a Hypervisor to create a virtual computer.

The Hypervisor is responsible for creating a virtual hardware and software environment and for running and managing Virtual Machines.

On top of the Hypervisor, we have a Guest OS: a virtualized operating system where we can run isolated applications.

Applications that run in Virtual Machines have access to Binaries and Libraries on top of the operating system.

Let's see a beautiful picture (designed by me) of this architecture:

As you can see from this picture, now we have a Hypervisor on top of the Host OS that provides and manages the Guest OS. In this Guest OS we would run applications as below:

Great! Now the 3 applications can run on the same Real Computer but in 3 Virtualized Machines, completely isolated from each other.

Virtual Machines Advantages

Full Isolation

Virtual Machines are environments that are completely isolated from each other

Full Virtualization

With full virtualization we have a fully isolated environment, with each Virtual Machine having its own virtualized CPU.

Virtual Machines Drawbacks

Heavy

Virtual Machines are usually heavy, isolated processes, because each one needs an entire Guest OS.

More Layers

Depending on your configuration, you may have one more layer when your virtual machine doesn't have direct access to the hardware (a hosted hypervisor), which means less performance.

Containers

Long story short again:

Containers are isolated processes that share resources with their host and, unlike VMs, don't virtualize the hardware and don't need a Guest OS.

One of the biggest differences between Containers and VMs is that Containers share resources with other Containers on the same host. This automatically gives us better performance than VMs, since we don't have a Guest OS for each container.

Instead of having a Hypervisor, now we have a Container Engine, like below:

The Container Engine doesn't need to expose or manage a Guest OS; therefore our 3 applications run directly in Containers as below:

Applications in Containers can also access Binaries and Libraries:

Containers Advantages

Isolated Process

Containers run as isolated processes but can share resources with other Containers on the same host.

Mounted Files

Containers allow us to mount files and directories from the host into the container, so data can be shared between the inside and the outside of the container.
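For example (a sketch, assuming Docker is installed; the host path is invented for illustration), a bind mount makes a host directory visible inside the container, and the data survives after the container is gone:

    # Mount the (hypothetical) host directory /home/user/app-data into the container at /data.
    # Files written to /data inside the container appear on the host, and vice versa.
    docker run --rm -v /home/user/app-data:/data alpine sh -c "echo hello > /data/out.txt"

    # The file is still on the host after the container exits
    cat /home/user/app-data/out.txt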

Lightweight Process

Containers don’t need a Guest OS, so their processes are lightweight, perform better, and can start up in seconds.

Containers Drawbacks

Same Host OS

You can end up in a situation where each application requires a specific operating system; that is easier to achieve with VMs, since each VM can run a different Guest OS, while all Containers on a host share the same kernel.

Security Issues

Containers are isolated processes, but they sit much closer to the host than VMs do: with the wrong configuration, a container can share important namespaces such as the hostname (UTS), network, and shared memory (IPC) namespaces with the host. A compromised container can therefore be used to do bad things more easily. Of course you can avoid running as root and put up a few barriers, but container security is something you have to actively worry about.
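As a hedge against this, a few well-known docker run flags reduce what a container is allowed to do. This is only a sketch of common hardening options (the image name my-app is a placeholder), not a complete security guide:

    # Drop all Linux capabilities, run as an unprivileged user, and make the root filesystem read-only
    docker run --rm \
      --cap-drop ALL \
      --user 1000:1000 \
      --read-only \
      my-app

    # Flags like --privileged, --pid=host, or --network=host do the opposite:
    # they share host resources and namespaces with the container, so avoid them unless you really need them.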

Result from the Comparison

From this simple comparison, you might be thinking:

Hey, Virtual Machines are not for me! They’re really heavy, they run an entire operating system, and I can’t pack a hundred apps in seconds!

We could list more problems with both Containers and Virtual Machines. The point is:

There is no winner here! Neither approach is better when analyzed in isolation.

The better approach depends on the context, the scope, and the problem at hand.

Actually, Virtual Machines are great, and you can even use Virtual Machines and Containers together! Imagine a situation where:

  • Your production environment runs on Windows, but you have an application that only runs on Linux
  • So, you create a Virtual Machine running a Linux distribution
  • Then, you run Containers inside this Virtual Machine

So, What is Docker?

What is Docker, and why have we been talking about Virtual Machines and Containers so far?

Docker is a great technology for building and running containers easily.

Where you read Container Engine in the picture above, you can now read Docker Engine.

But container technology is more than a decade old, and companies like Google have been running workloads in containers at massive scale for years.

So why are we talking about Containers now? And why Docker?

Docker is really simple and easy to use, and it has great adoption in the community.

Containers are old, but the way to create them used to be complicated. Docker shows us a new, much simpler way to think about container creation.
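To make that concrete (a minimal sketch, assuming Docker is installed and you can pull public images), starting one of the three applications from our earlier example, MongoDB, takes a single command:

    # Pull the official MongoDB image if needed and start it in the background,
    # publishing its default port 27017 on the host
    docker run -d --name my-mongo -p 27017:27017 mongo

    # Check that the container is up and running
    docker ps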

In a nutshell, Docker is:

  • A container technology written in the Go language
  • A technology that lets us start containers in seconds
  • A technology with huge community adoption
  • A technology with its own tooling to run multiple containers together easily
  • A technology with its own tooling to manage a cluster of containers easily
  • A technology that uses cgroups, namespaces, and file systems to create lightweight, isolated processes (see the sketch below)
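Here is a small sketch of that last point, using resource flags that Docker translates into cgroup limits (the limit values chosen here are arbitrary):

    # Start an NGINX container restricted to 256 MB of memory and half a CPU;
    # Docker enforces these limits through cgroups
    docker run -d --name limited-nginx --memory 256m --cpus 0.5 nginx

    # Observe the live resource usage against those limits
    docker stats --no-stream limited-nginx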

That’s it!

You’ll get your hands dirty by running Docker containers in the next posts of the series.

But before that, let’s just install Docker on your machine in the next post!

I hope this article was helpful to you!

Thanks!

Original Link

Serverless Containers Intensify Secure Networking Requirements

When you’re off to the races with Kubernetes, the first order of business as a developer is figuring out a microservices architecture and a DevOps pipeline to build pods. However, if you are the Kubernetes cluster I&O Pro, also known as site reliability engineer (SRE), then your first order of business is figuring out Kubernetes itself, as the cluster becomes the computer for pod-packaged applications. One of the things a cluster SRE deals with, even for managed Kubernetes offerings like GKE, is the cluster infrastructure: servers, VMs or IaaS. These servers are known as the Kubernetes nodes.

The Pursuit of Efficiency

When you get into higher-order Kubernetes there are a few things you chase.

First, multi-purpose clusters can squeeze more efficiency out of the underlying server resources and your SRE time. By multi-purpose cluster, I mean running a multitude of applications, projects, teams/tenants, and DevOps pipeline stages (dev/test, build/bake, staging, production), all on the same cluster.

When you’re new to Kubernetes, such dimensions are often created on separate clusters, per project, per team, etc. As your K8s journey matures though, there is only so long you can ignore the waste this causes in the underlying server-resource capacity. Across your multicloud, consolidating many clusters into as few as practical for your reliability constraints also saves you time and means less swivel-chairing for patching, cluster upgrades, secrets and artifact distribution, compliance, monitoring, and more.

Second, there’s the constant chase of scaling efficiency. Kubernetes and active monitoring agents can help take care of auto-scaling individual microservices, but scaling out assumes you have capacity in your cluster nodes. Especially if you’re running your cluster atop IaaS, it’s actually wasteful to maintain and pay for extra capacity in spare VM instances. You probably need some buffer because spinning up VMs is much slower than spinning up containers and pods. Dynamically right-sizing your cluster is quite the predicament, particularly as it becomes more multi-purpose.
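As a reference point for the first half of that trade-off, pod-level autoscaling is usually declared with a HorizontalPodAutoscaler. This is only a sketch (the deployment name and thresholds are invented), and it does nothing about node capacity, which is exactly the predicament described above:

    # Scale the (hypothetical) "web" deployment between 2 and 10 replicas,
    # targeting roughly 70% CPU utilization across its pods
    kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70

    # Inspect the resulting HorizontalPodAutoscaler
    kubectl get hpa web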

True CaaS: Serverless Containers

When it comes to right-sizing your cluster scale, while the cloud providers are happy to take your money for extra VMs powering your spare node capacity, they do have a better solution. At Re:Invent 2017, AWS announced Fargate to abstract away the servers underneath your cluster. Eventually, it should support EKS in addition to ECS. In the meantime, Azure Container Instances (ACI) is a true Kubernetes-pods as a service offering that frees you from worrying about the server group upon which it’s running.

Higher-Order Kubernetes SRE

While at Networking Field Day 17 (NFD17 video recording), I presented on “shifting left” your networking and security considerations to deal with DevOps and multi-purpose clusters. It turns out that on the same day, Software Engineering Daily released their Serverless Containers podcast. In listening to it you’ll realize that such serverless container stacks are probably the epitome of multi-purpose Kubernetes clusters.

What cloud providers offer in terms of separation of concerns with serverless container stacks, great cluster SREs will also aim to provide to the developers they support.

When you get to this level of maturity in Kubernetes operations, you’re thinking about a lot of things that you may not have originally considered. This happens in many areas, but certainly in networking and security. Hence me talking about “shift left,” so you can prepare to meet certain challenges that you otherwise wouldn’t see if you’re just getting Kubernetes up and running (great book by that name).

In the domain of open networking and security, there is no project that approaches the scalability and maturity of OpenContrail. You may have heard of the immortal moment, at least in the community, when AT&T chose it to run their 100+ clouds, some of enormous size. Riot Games has also blogged about how it underpins their DevOps and container runtime environments for League of Legends, one of the biggest online games around.

Cloud-Grade Networking and Security

For cluster multi-tenancy, it goes without saying that it’s useful to have multi-tenant networking and security like OpenContrail provides. You can hack together isolation boundaries with access policies in simpler SDN systems (indeed, today, more popular due to their simplicity), but actually having a multi-tenant domain and project isolation in your SDN system is far more elegant, scalable and sane. It’s a cleaner hierarchy to contain virtual network designs, IP address management, network policy and stateful security policy.

The other topic I covered at NFD17 is the goal of making networking and security more invisible to the cluster SRE and certainly to the developer, but providing plenty of control and visibility to the security and network reliability engineers (NREs) or NetOps/SecOps pros. OpenContrail helps here in two crucial ways.

First, virtual network overlays are a first-class concept and object. This is very useful for your DevOps pipeline because you can create exactly the same networking and secure environment for your staging and production deployments (here’s how Riot does it). Landmines lurk when staging and production aren’t really the same, but with OpenContrail you can easily have exactly the same IP subnets, addresses, networking and security policies. This is impossible and impractical to do without overlays. You may also perceive that overlays are themselves a healthy separation of concerns from the underlay transport network. That’s true, and they easily enable you to use OpenContrail across the multicloud, on any infrastructure. You can even nest OpenContrail inside of lower-layer OpenContrail overlays, although for OpenStack underlays, it provides ways to collapse such layers too.

Second, OpenContrail can secure applications on Kubernetes with better invisibility to your developers, and transparency to SecOps. Today, a CNI provider for Kubernetes implements pod connectivity and usually NetworkPolicy objects. OpenContrail does this too, and much more that other CNI providers cannot. But do you really want to require your developers to write Kubernetes NetworkPolicy objects to blacklist/whitelist the inter-microservice access across application tiers, DevOps stages, namespaces, etc.? I’d love to say security is shifting left into developers’ minds, and that they’ll get this right, but realistically, when they have to write code, tests, fixes, documentation and more, why not take this off their plates? With OpenContrail you can easily implement security policies that are outside of Kubernetes and outside of the developers’ purview. I think that’s a good idea for the sanity of developers, but also to solve growing security complexity in multi-purpose clusters.
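For reference, this is roughly what developers are being asked to write: a minimal NetworkPolicy sketch (the namespace, labels, and port are invented for illustration) that only admits traffic to a backend from its own front end:

    # Allow ingress to "backend" pods only from "frontend" pods in the same namespace, on port 8080
    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-frontend
      namespace: shop
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend
          ports:
            - protocol: TCP
              port: 8080
    EOF

Multiply objects like this across teams, namespaces, and DevOps stages and you can see why taking the policy burden off developers is attractive.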

If you’ve made it this far, I hope you won’t be willfully blind to the Kubernetes SRE-fu you’ll need sooner or later. Definitely give OpenContrail a try for your K8s networking-security needs. The community has recently made it much more accessible to quick-start with Helm packaging, and the work continues to make day-1 as easy as possible. The Slack team is also helpful. The good news is that with the OpenContrail project, it is very battle tested and going on 5 years old; your day-N should be smooth and steady.

PS. OpenContrail will soon be joining Linux Foundation Networking, and likely renamed, making this article a vestige of early SDN and cloud-native antiquity.

Original Link

Architect’s Corner: Hugo Claria of Naitways Talks Kubernetes Storage

Today’s Architect’s Corner is with Hugo Claria, head of the hosting division at Naitways. Naitways is one of a growing number of Portworx customers across the UK and Europe. We sat down with Hugo and talked about some of the challenges of building a multi-tenant application on top of Kubernetes, including Kubernetes storage.

Key Technologies Discussed

Container Runtime – Docker

Scheduler – Kubernetes, Swarm

Stateful Services – MySQL, WordPress, Drupal, Magento, Joomla, Redis

Infrastructure Provider – On-premises

Can you first tell me a little bit about what Naitways does?

Naitways is an IT service provider based in Paris, France. Naitways is composed of two business units: the hosting business unit and the infrastructure business unit. We were founded 10 years ago with one mission: to help and advise our clients on their infrastructure, especially as they move into the cloud. In order to provide the best service to our clients, we have built our entire infrastructure ourselves from scratch. We have always offered a VMware-based private cloud, but nowadays, more and more customers are asking for public cloud services. For these services, we are using Kubernetes. We now have around 30 employees and we look forward to hiring new talent in 2018.

Can you tell me a little bit about your role at Naitways?

I was employee #3 at the company. I started as a systems engineer but over the years, I was promoted to manager of the Hosting business unit. The Hosting business unit provides services inside the virtual machines while the Infrastructure BU provides the physical hosting layer. That is to say switches, routers, circuits.

Can you tell us how you are using containers at Naitways?

Yes. We’re doing two things with containers. First, for our private cloud customers, we have built a Docker service for them managed by Swarm or Kubernetes. These are customers who have already purchased some infrastructure from us, but now they want to try Docker. For these customers, we installed Docker stacks managed by Swarm or Kubernetes inside virtual machines.

In addition, we are also building a shared public cloud infrastructure running on Kubernetes where a customer doesn’t have to worry about virtual machines or even containers. They can just run an application like WordPress directly. We have a provisioning portal interface that will allow users to provision software as a service.

They can pick from GitLab, Nextcloud for file management and sharing, Drupal, WordPress, Magento, Joomla, Redis, and a MySQL database as a service. We provide this pre-packaged software, pre-installed on the compute instance, ready to be consumed by the customer.

If they pick MySQL for example, they don’t have to install it, patch it, upgrade it. We do all that. They just get access to phpMyAdmin if they want, a public IP for access and they are ready to go. This is a click-and-use solution to facilitate our customers’ lives.

What were some of the challenges you needed to overcome in order to run stateful services like databases, and queues, and key-value stores in containers?

One of the biggest challenges was lifecycle management. Once a customer is running in production with a stateful service, I have to think about managing all the operational tasks without downtime. How do I launch my instance, how do I limit the resources being shared among the customers? What happens when an instance dies? How do I manage my backups?

The Kubernetes API provides a lot of management for the stateless services, but for volume management there is Portworx. That API-based management for our storage provides a lot of value to us as a service provider.

For example, we run an S3-like object store at Naitways. We use the Portworx CloudSnap feature to do backup and restore from this object store.

Anytime a customer requests a backup, it is really easy for us to provide.

We’ve also been able to use Portworx to easily implement customer upgrades as a way to expand revenue. We have tiered pricing where the customer can pay for what they consume. If a customer wants more backups, or more storage, this is simple to implement via the Portworx API.

The other real business value of Portworx is time. We can get to market faster with Portworx than we could if we implemented everything ourselves. For instance, I love the automated snapshots. I don’t have to code anything on my side in order to implement a snapshot or backup schedule. I think that you saved me four or five months with that feature. With Portworx, we bought ourselves time!

We did look at a couple of other alternatives before settling on Portworx, but they were not as mature as Portworx, and we had problems with installation, maintenance, and performance. Ceph was also an option, but implementing it would have been a lot of work because it just provides the basic features. Extending Ceph for our use case would have taken an additional four to five months of development.

What advice would you give someone else thinking about running stateful services in production?

Don’t build it from scratch yourself! If you don’t have a large team of very good engineers, it is going to be very hard to build and support these services yourself. Just like you wouldn’t want to build the Kubernetes API yourself, you probably shouldn’t try to build a volume management API yourself.

Original Link

Proxy Models in Container Environments

Inline, side-arm, reverse, and forward. These used to be the terms we used to describe the architectural placement of proxies in the network.

Today, containers use some of the same terminology, but they are introducing new ones. That’s an opportunity for me to extemporaneously expound* on my favorite of all topics: the proxy.

One of the primary drivers of cloud (once we all got past the pipedream of cost containment) has been scalability. Scale has challenged agility (and sometimes won) in various surveys over the past five years as the number one benefit organizations seek by deploying apps in cloud computing environments.

That’s in part because in a digital economy (in which we now operate), apps have become the digital equivalent of brick-and-mortar “open/closed” signs and the manifestation of digital customer assistance. Slow, unresponsive apps have the same effect as turning out the lights or understaffing the store.

Apps need to be available and responsive to meet demand. Scale is the technical response to achieving that business goal. Cloud not only provides the ability to scale, but offers the ability to scale automatically. To do that requires a load balancer. Because that’s how we scale apps – with proxies that load balance traffic/requests.

Containers are no different with respect to expectations around scale. Containers must scale – and scale automatically – and that means the use of load balancers (proxies).

If you’re using native capabilities, you’re doing primitive load balancing based on TCP/UDP. Generally speaking, container-based proxy implementations aren’t fluent in HTTP or other application layer protocols and don’t offer capabilities beyond plain old load balancing (POLB). That’s often good enough, as container scale operates on a cloned, horizontal premise – to scale an app, add another copy and distribute requests across it. Layer 7 (HTTP) routing capabilities are found at the ingress (in ingress controllers and API gateways) and are used as much (or more) for app routing as they are to scale applications.
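In Kubernetes terms, that plain old load balancing is what a Service gives you via kube-proxy. A minimal sketch (the deployment name and ports are hypothetical):

    # Expose a (hypothetical) deployment as a ClusterIP Service; kube-proxy then
    # distributes TCP connections across the app's pods with no HTTP awareness
    kubectl expose deployment blue-app --port=80 --target-port=8080

    # Layer 7 (HTTP) routing, by contrast, lives at the edge in an Ingress resource or API gateway
    kubectl get ingress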

In some cases, however, this is not enough. If you want (or need) more application-centric scale or the ability to insert additional services, you’ll graduate to more robust offerings that can provide programmability or application-centric scalability or both.

Doing that means plugging in proxies. The container orchestration environment you’re working in largely determines the deployment model of the proxy in terms of whether it’s a reverse proxy or a forward proxy. Just to keep things interesting, there’s also a third model – sidecar – that is the foundation of scalability supported by emerging service mesh implementations.

Reverse Proxy

A reverse proxy is closest to a traditional model in which a virtual server accepts all incoming requests and distributes them across a pool (farm, cluster) of resources.

There is one proxy per ‘application’. Any client that wants to connect to the application is instead connected to the proxy, which then chooses and forwards the request to an appropriate instance. If the green app wants to communicate with the blue app, it sends a request to the blue proxy, which determines which of the two instances of the blue app should respond to the request.

In this model, the proxy is only concerned with the app it is managing. The blue proxy doesn’t care about the instances associated with the orange proxy, and vice-versa.
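As an illustration of the one-proxy-per-application idea (a sketch; the upstream hostnames are placeholders), a classic reverse proxy such as NGINX simply lists the blue app’s instances and forwards each request to one of them:

    # Write a minimal NGINX reverse proxy configuration for the "blue" app
    cat > blue-proxy.conf <<'EOF'
    upstream blue_app {
        server blue-1:3000;
        server blue-2:3000;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://blue_app;
        }
    }
    EOF

    # Run the proxy; it knows only about the blue app's instances and nothing else
    docker run -d --name blue-proxy -p 8080:80 \
      -v "$PWD/blue-proxy.conf:/etc/nginx/conf.d/default.conf:ro" nginx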

Forward Proxy

This mode more closely models that of a traditional outbound firewall.

In this model, each container node has an associated proxy. If a client wants to connect to a particular application or service, it is instead connected to the proxy local to the container node where the client is running. The proxy then chooses an appropriate instance of that application and forwards the client’s request.

Both the orange and the blue app connect to the same proxy associated with their node. The proxy then determines which instance of the requested app should respond.

In this model, every proxy must know about every application to ensure it can forward requests to the appropriate instance.

Sidecar Proxy

This mode is also referred to as a service mesh router. In this model, each container has its own proxy.

If a client wants to connect to an application, it instead connects to the sidecar proxy, which chooses an appropriate instance of that application and forwards the client’s request. This behavior is the same as a forward proxy model.

The difference between a sidecar and a forward proxy is that sidecar proxies do not need to modify the container orchestration environment. For example, in order to plug a forward proxy into k8s, you need both the proxy and a replacement for kube-proxy. Sidecar proxies do not require this modification because it is the app that automatically connects to its “sidecar” proxy instead of being routed through a node-level proxy.
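In practice this is how service meshes wire things up. With Istio, for example (shown purely as an illustration of the pattern), labeling a namespace is enough to have a sidecar proxy injected into each new pod, with no change to kube-proxy or to the apps themselves:

    # Ask Istio's sidecar injector (a mutating admission webhook) to add an Envoy
    # proxy container to every new pod in the (hypothetical) my-apps namespace
    kubectl label namespace my-apps istio-injection=enabled

    # Pods created afterwards show an extra container: the sidecar proxy running alongside the app
    kubectl get pods -n my-apps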

Summary

Each model has its advantages and disadvantages. All three share a reliance on environmental data (telemetry and changes in configuration) as well as the need to integrate into the ecosystem. Some models are pre-determined by the environment you choose, so careful consideration of future needs – service insertion, security, networking complexity – is required before settling on a model.

We’re still in early days with respect to containers and their growth in the enterprise. As they continue to stretch into production environments it’s important to understand the needs of the applications delivered by containerized environments and how their proxy models differ in implementation.

*It was extemporaneous when I wrote it down. Now, not so much.

Original Link