Simplifying Kubernetes With Docker Compose and Friends

Today we’re happy to announce that we’re open sourcing our support for using Docker Compose on Kubernetes. We’ve had this capability in Docker Enterprise for a little while, but as of today you can use it on any Kubernetes cluster you choose.

Why Do I Need Compose If I Already Have Kubernetes?

The Kubernetes API is really quite large. There are more than 50 first-class objects in the latest release, from Pods and Deployments to ValidatingWebhookConfiguration and ResourceQuota. This can lead to verbose configuration, which then needs to be managed by you, the developer. Let’s look at a concrete example of that.
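
As a hedged illustration of the contrast (service and image names here are hypothetical, not from the announcement), a Compose file for a small two-service app stays compact, while the equivalent Kubernetes manifests would need a Deployment and a Service per component:

```yaml
# docker-compose.yml — a minimal sketch; "web" and its image are hypothetical
version: "3.6"
services:
  web:
    image: example/web:latest   # hypothetical application image
    ports:
      - "8080:80"               # host:container port mapping
  cache:
    image: redis:alpine         # stock Redis image as a backing service
```

Compose on Kubernetes translates a file like this into the corresponding Deployments and Services for you.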

Original Link

CI/CD With Kubernetes and Helm

In this blog, I will discuss the implementation of a CI/CD pipeline for microservices that run as containers and are managed by Kubernetes and Helm charts.

Note: Basic understanding of Docker, Kubernetes, Helm, and Jenkins is required. I will discuss the approach but will not go deep into its implementation. Please refer to the original documentation for a deeper understanding of these technologies.

Original Link

K8s KnowHow — Running Deployment

This is the fourth article of the Kubernetes KnowHow series. In the first three articles, we learned how to use pods, services, and ReplicaSets in K8s.

Deployment is a controller that provides declarative updates for pods and ReplicaSets. A Deployment is a more sophisticated form of a ReplicaSet: it gives us all the ReplicaSet features plus one huge benefit, rolling updates that guarantee zero downtime. On top of that, if something goes wrong, you can do elegant rollbacks. You may be dreading one more YAML definition file and wondering how complicated it could be. However, you do not have to worry, as a Deployment is just an extension of the ReplicaSet definition. Let me first demonstrate why we require rolling updates.
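
To make that concrete, a Deployment spec is essentially a ReplicaSet spec plus an update strategy. A minimal sketch (the name, labels, and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during an update
      maxSurge: 1         # at most one extra pod created during an update
  template:               # this part is the familiar pod/ReplicaSet template
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: example/demo:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

With a spec like this, `kubectl set image deployment/demo-deployment demo=example/demo:1.1` replaces pods gradually under the rolling-update rules, and `kubectl rollout undo deployment/demo-deployment` performs the elegant rollback mentioned above.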

Original Link

Thoughtworks Technology Radar 19 — Cloud, Chaos, and Cross-Platform

I always look forward to the latest installment of Thoughtworks’ Technology Radar. Every quarter I wait to see if the trends I’ve been following match their predictions or suggestions on what I should look into over the coming months. Typically there’s a launch meetup in Berlin where attendees discuss the contents over drinks, which is another excellent way to feel the pulse of a fast-moving industry.

Here are my highlights and thoughts, split into the four sections that Thoughtworks uses for each radar.

Original Link

Amazon Corretto: Another OpenJDK

Amazon Corretto is a new no-cost distribution of OpenJDK from Amazon.

This is really great news for Java developers. Amazon has released a blog post with the following text explaining their reasoning for releasing Corretto:

Original Link

Kubernetes KnowHow – Working With ReplicaSet

This is the third article of the Kubernetes KnowHow series. In the first two articles, we learned how to use pods and services in K8s. With pods and services, you have learned the core elements of Kubernetes. However, in a production environment, we hardly ever deal with pods and services directly; you are more likely to work with a Deployment or a ReplicaSet. In this article, we will learn what a ReplicaSet is and how to use it. So, let’s get started!

Pods can be short-lived and can die at any time, for reasons such as a pod consuming too many resources, its node crashing, or an out-of-memory error. If you deploy a pod directly, as we have been doing until now, then once the pod crashes, that’s it. There is no self-healing process: K8s will not reinstate the service that the pod was providing, and it does not resurrect pods. You are responsible for the lifetime of a pod. I believe in a more pragmatic approach, so I will demonstrate this.
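
As a sketch of where this is heading (the name, labels, and image are hypothetical), a ReplicaSet asks K8s to keep a fixed number of pod replicas alive, recreating any that die:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: demo-rs
spec:
  replicas: 2            # K8s recreates pods to maintain this count
  selector:
    matchLabels:
      app: demo          # pods matching this label belong to the set
  template:              # pod template used when a replacement is needed
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: example/demo:1.0   # hypothetical image
```

Delete one of the pods and the ReplicaSet controller immediately schedules a new one, which is exactly the self-healing that a bare pod lacks.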

Original Link

Journey to Containers – Part II

This is the second part, a continuation of the first article, “Journey to Containers – Part I.” Please make sure you read Part I to follow along with what we are going to do in Part II.

In this section, we are going to package the Python application from Part I inside a Docker image and then run the application as a container in a standalone Docker environment.

Original Link

Continuous Integration and Delivery for Maven Projects With Jenkins and Docker

In this article, we look at continuous integration for Maven projects using Jenkins and Docker. The project from my previous post, Multiple Databases in Spring Boot, will be built, deployed, and run by Jenkins. The codebase for Multiple Databases in Spring Boot is here.

First, I will discuss the methodology I use to build the Maven project and create the Docker image. There are normally two approaches:

Original Link

Running Imply Druid Distribution Inside Docker Container

Druid is an open-source data store designed for sub-second queries on real-time and historical data. Druid can scale to store trillions of events and ingest millions of events per second. Druid is best used to power user-facing data applications.

Imply is an analytics solution powered by Druid. The Imply Analytics platform includes Druid bundled with all its dependencies, an exploratory analytics UI, and a SQL layer. It also provides additional tools and scripts for easy management of Druid nodes.

Original Link

How to Create a Builder Image With S2I

Source-To-Image (S2I) is a standalone toolkit and workflow for creating builder images. It allows you to build reproducible Docker images from source code. The magic of S2I is that it produces ready-to-run images by injecting source code into a Docker container. This means that the builder image contains the specific intelligence required to produce an executable image from the source code, and you can have reusable, dynamic images for creating build and runtime environments based on your needs.

The S2I project includes some ready-to-use builder images. You can extend these images and also create your own images.

Original Link

Journey to Containers – Part I

About 6 months back, I received an opportunity to work with containers. Since then I’ve gone through a lot of documentation on containers and container history and, eventually, Docker and Kubernetes. There is a lot of community work going on to promote the container ecosystem. I am also amazed to see so many new tools, and enhancements to existing tools, coming to market to support the container ecosystem, mainly from the CI/CD, security, monitoring, and orchestration perspectives.

I have been working in the DevOps space for a while and have worked with various tools and technologies, with most of my work in designing and setting up CI/CD pipelines: tooling, integrations, automation, defining processes, user training, and so on.

Original Link

K8s KnowHow: Using A Service

This is the second article of the Kubernetes KnowHow series. In this article, we will see how to expose a pod to the outside world. In the last article, we learned how to run a Spring Boot sample application in a pod, and we learned to start an interactive shell in order to access the pod. However, we could not access the welcome URL from outside of the K8s cluster, for instance from a browser or a REST client. Now, the next step is to expose this pod outside of the K8s cluster. Let’s get started!

I will start with our pod definition.
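
For orientation, exposing the pod will come down to adding a Service whose selector matches the pod’s labels. A minimal NodePort sketch (the labels and ports here are assumptions, not the article’s actual definition):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  type: NodePort          # exposes the service on each node's IP
  selector:
    app: demo             # must match the pod's labels
  ports:
    - port: 80            # service port inside the cluster
      targetPort: 8080    # container port on the pod
      nodePort: 30080     # externally reachable port (range 30000-32767)
```

With this applied, the welcome URL becomes reachable from a browser at any node’s IP on port 30080.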

Original Link

Deploying Spring Boot and MongoDB as Containers Using Kubernetes and Docker

For this tutorial, you’ll build a Dockerized sample Spring Boot application that talks to MongoDB through GET/POST REST APIs and deploy it to a Kubernetes cluster. You will need:

  • minikube
  • kubectl
  • docker
  • maven

Docker is a Linux container management toolkit with a “social” aspect, allowing users to publish container images and consume those published by others. A Docker image is a recipe for running a containerized process, and in this guide, we will build one for a simple Spring boot application.
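
The “recipe” for such an app is typically only a few lines. A minimal sketch, assuming the Maven build produces a fat JAR named app.jar (the base image and jar name are assumptions):

```dockerfile
# A minimal sketch; base image and jar name are assumptions
FROM openjdk:8-jdk-alpine
# Copy the fat JAR built by `mvn package`
COPY target/app.jar /app.jar
# Spring Boot's default HTTP port
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

`docker build -t demo-app .` then produces the image that the Kubernetes manifests later in the tutorial would reference.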

Original Link

Serverless With AWS: Image Resize On-The-Fly With Lambda and S3

Handling large images has always been a pain in my side since I started writing code. Lately, it has started to have a huge impact on page speed and SEO ranking. If your website has poorly optimized images it won’t score well on Google Lighthouse. If it doesn’t score well, it won’t be on the first page of Google. That sucks.


I’ve built and open-sourced a snippet of code that automates the process of creating and deploying an image resize function and an S3 bucket with one simple command. Check out the code here.

Original Link

The DevOps Road Map — A Guide for Programmers

DevOps is really hot at the moment, and most of the friends, colleagues, and senior developers I know are working hard to become DevOps engineers and position themselves as DevOps champions in their organizations.

While I truly acknowledge the benefits of DevOps, which is directly linked to improved software development and deployment, from my limited experience I can say that it’s not an easy job. It’s very difficult to choose the right path in the middle of so many tools and practices.

Original Link

Comparing Windows and Linux SQL Containers

By several measures, Windows SQL Server containers offer better enterprise support than Linux MySQL or Postgres containers. SQL Server containers provide more backward compatibility, and support for existing apps, storage arrays, and infrastructure.

Windocks has evolved as an independent port of Docker’s open source project to include database cloning, a web UI, a secrets store, and other capabilities. These capabilities are customer driven and seem to diverge from mainstream Linux development. This article looks at the capabilities being driven by Windows customers. Full disclosure: I am a principal of Windocks, and this article focuses on the Windows-based SQL Server containers provided by Windocks.

Original Link

Running SQL Server on a Linux Container Using Docker for Windows

Recently, I have been investigating what all the fuss is about with Docker, and it has been well worth my time, as Docker is pretty awesome for automating stuff.

My development environment has typically required installing SQL Server. SQL Server is a bit of a beast with lots of options, and it takes time to set up the way you want.

Original Link

Data Science and Engineering Platform in HDP 3: Hybrid, Secure, Scalable

What Is a Data Science and Engineering Platform?

Apache Spark is one of our most popular workloads both on-premises and cloud. As we recently announced HDP 3.0.0 (followed by a hardened HDP 3.0.1), we want to introduce the Data Science and Engineering Platform powered by Apache Spark.

As noted in the marketecture above, our Data Science and Engineering Platform is powered by Apache Spark with Apache Zeppelin notebooks to enable Batch, Machine Learning, and Streaming use cases, by personas such as Data Engineers, Data Scientists and Business Analysts. We recently introduced Apache TensorFlow 1.8 as a tech preview feature in HDP 3.0.x to enable the deep learning use cases – while this is intended for proof of concept deployments, we also support BYO dockerized TensorFlow in production environments. Some of the reasons our customers choose our Data Science and Engineering Platform on HDP 3.0 are:

Original Link

Building Your Own Docker Images [Video]

To get anything out of Docker, you must know how to build your own Docker images, so that you can, later on, deploy your Java application to it. In this quick and practical episode you will learn how to do so.

Original Link

K8s KnowHow – Running Your First Pod

In K8s, a pod is the smallest deployment unit. It’s a collection of one or more containers (preferably one). All the containers packed in a pod share the same storage, network, and specifications for how to run. Let me make it simpler: a running pod represents a process, and that’s it.

For developers from the Java world, let me give you another perspective on a pod. In simple terms, a pod is nothing but an execution environment consisting of a JVM running the base container and other services to work with the K8s ecosystem. A pod is just a wrapper on top of the JVM. In the pre-container world, imagine you had virtual hosts on a physical node and different JVMs running on those virtual hosts. Now, with K8s, a pod provides a similar kind of isolation and abstraction in a quite elegant way, with lightweight containers.
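
In YAML terms, that smallest deployable unit looks like this minimal sketch (the name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
    - name: demo
      image: example/demo:1.0   # hypothetical image
      ports:
        - containerPort: 8080   # port the process listens on
```

Running `kubectl apply -f pod.yaml` starts exactly one process wrapped in this pod, nothing more.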

Original Link

Containers Are and Will Be the New Linux

Linux is the operating system that has revolutionized data centers over the last two decades, and today it is the undisputed leader among application hosting platforms. It’s very hard to imagine deploying mission-critical production workloads on any platform other than Linux.

A similar revolution in packaging, deploying, and hosting applications started a few years ago, when Docker made Linux containers popular. Since then, the growth in container adoption across the industry has been exponential, and it is multiplying with each passing day.

Original Link

Running Java on AWS Infrastructure: How To Put the Bricks Together

Startups tend to grow and expand, and successful startups tend to propagate this progress. I see the variety of possible paths mainly from the engineer’s perspective, from the point of striving to create technically reliable, automated, simple-to-use, and supportive applications, which can be called "alive." I call them "alive" here because I see those apps as living beings with their own lifecycles. Now let me elaborate on how this quite abstract idea can be applied to the real world. 

I became a member of my current team at the moment when one crucial question had to be addressed: "How are we going to make our product easier to develop and use?" Originally, the company agreed to deliver integration with a large third-party system, but there was one obstacle. It is hard or even impossible to integrate a product into a larger system (and I am talking here about medical, financial, or similar areas) when the product is a mere web app: without a backend (AWS Lambdas not included) that provides an easy way to scale and customize the app, without a CI/CD flow, without a proper data store (which DynamoDB is not) to effectively aggregate and analyze data (which is becoming a crucial requirement), and without a clear and easy way to administer the application itself (by which I mostly mean customer onboarding). Considering the fact that this integration was extremely determinative of the clients’ and the development company’s success, all these impediments were defined as a "dragon" that had to be defeated to make our product "alive." 

Original Link

Build a Container Image Inside a K8s Cluster

Learn how to build source code into a container image from a Dockerfile inside a Kubernetes cluster and push the image to IBM Cloud Container Registry, all using Google’s Kaniko tool.

So, What is Kaniko?

Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.
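
A common way to run it is as a pod that mounts registry credentials and points the Kaniko executor at a build context. A hedged sketch (the repo, registry path, and secret name are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - "--dockerfile=Dockerfile"
        - "--context=git://github.com/example/app.git"   # hypothetical repo
        - "--destination=us.icr.io/mynamespace/app:1.0"  # hypothetical IBM registry path
      volumeMounts:
        - name: registry-creds
          mountPath: /kaniko/.docker   # executor reads config.json from here
  volumes:
    - name: registry-creds
      secret:
        secretName: registry-secret    # hypothetical docker-registry secret
```

Because Kaniko executes the Dockerfile entirely in userspace, no Docker daemon or privileged mode is needed inside the cluster.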

Original Link

Docker Containers and Kubernetes: An Architectural Perspective

Understanding both Docker and Kubernetes is essential if we want to build and run a modern cloud infrastructure. It’s even more important to understand how solution architects and developers can use them when designing different cloud-based solutions. Both VMs and containers aim to improve server utilization and reduce physical server sprawl and total cost of ownership for the IT department. However, looking at adoption over the last few years, container-based cloud deployment has grown exponentially.

Fundamentals of VM and Docker Concepts

Virtual machines provide the ability to emulate a separate OS (the guest), and therefore a separate computer, from right within your existing OS (the host). A Virtual Machine is made up of a userspace plus the kernel space of an operating system. VMs run on top of a physical machine using a “hypervisor,” also called a virtual machine manager, which is a program that allows multiple operating systems to share a single hardware host.

Original Link

50 Useful Docker Tutorials, From Beginner to Advanced (Part 1)

Containers bring many benefits to DevOps teams, along with a number of security concerns. This post brings you 50 Docker training resources designed for beginner, intermediate, and advanced practitioners who want current knowledge about Docker.

Containers can be a big help in shipping and deploying your application where it’s needed. But using them adds a layer of complexity to your architecture and can be painful to implement and operate. The introduction of Docker to the IT community transformed the way many departments handled this type of work.

Original Link

Monitoring Docker With InfluxDB

Thankfully, monitoring my containers with InfluxDB was surprisingly easy. Unfortunately, deriving value from container data is not. Understanding how to manage and allocate container resources is far from easy and DevOps still remains largely mysterious to me. My lack of understanding has come into focus as I started monitoring some containers locally. In this blog post, I will share my journey to understanding container monitoring better.

But first, let me show you:

Original Link

The 10 Best DevOps Tools for 2018

The integration of Development and Operations brings a new perspective to software development. If you’re new to DevOps practices or looking to improve your current processes, it can be a challenge to know which tool is best for your team.

We’ve put together this list to help you make an informed decision on which tools should be part of your stack. So, let’s take a look at the 10 best DevOps tools, from automated build tools to application performance monitoring platforms.

Original Link

Solving Java EE Nightmares Without Docker Containers or Microservices

Developers and application owners have many new tools and technologies such as microservices, Docker containers, CI/CD, and DevOps that help them produce better software, faster. Yet, the sad truth is that most organizations rely on untold numbers of legacy applications, many of which are Java EE, to power mission-critical systems that can’t be migrated to some of the emerging technologies and processes.

Legacy Java EE apps are almost a necessary evil, providing core business functions while forcing IT teams to face myriad operational problems, including planning for and addressing scaling challenges, managing inefficient and unpredictable resource consumption, protecting unsecured confidential information, and applying patches and restarting applications without service disruption.

Original Link

SQL Server Containers With SSRS

SSRS support has been among the most frequently requested new features, and Windocks 3.5 introduces SQL Server containers with the database engine and SSRS running as Windows Services. Windocks 3.5 supports native mode on SQL Server 2008 through SQL Server 2016, with SQL Server 2017 support slated for later this year.

Setup and Planning

Windocks installs on Windows 8.1 or 10 (Pro or Enterprise editions) or Windows Server 2012 R2 or Server 2016, with SSRS support for all editions of SQL Server 2008 through 2016 (SQL Server 2017 support is slated for later this year). Install on a machine that has one or more SQL Server instances that can be managed by the Windocks service, which clones an instance to create containers. SQL Server containers are lightweight (~300 MB), so they can be stored on the system disk or assigned to a separate disk. Database clone images are each a full byte copy of the data environment and should be stored on a separate disk or network-attached storage. The automated Windocks install takes 10-15 minutes to complete.

Original Link

The Role of Enterprise Container Platforms

As container technology adoption continues to advance and mature, companies now recognize the importance of an enterprise container platform. More than just a runtime for applications, a container platform provides a complete management solution for securing and operationalizing applications in containers at scale over the entire software lifecycle.

While containers may have revolutionized the way developers package applications, container platforms are changing the way enterprises manage and secure both mission-critical legacy applications and microservices both on-premises and across multiple clouds. Enterprises are beginning to see that container runtime and orchestration technologies alone don’t address these critical questions:

Original Link

9 Reasons DevOps Is Better With Docker and Kubernetes

One of the main challenges that companies face is a long time to market, which usually happens when your development process is slowed down. When deploying applications, most teams face friction between Dev and Ops, because these two departments build the same application but work in completely different ways.

Wouldn’t it be nice if they worked together without any misunderstandings and shortened time to market? I’ve assembled this list of advantages that DevOps plus Docker and Kubernetes can provide compared to a traditional DevOps approach.

Original Link

Building an ActiveMQ Docker Image on Kubernetes

In our project, we require a message broker to pave the way for asynchronous communication between different microservices, so one of the microservices required is a messaging service. In this example, we are going to use Apache ActiveMQ as the message broker.

In order to deploy the messaging microservice, we must containerize ActiveMQ. It’s a very straightforward process, as we are only going to set up a 1-node cluster.
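
As a sketch of that containerization (the ActiveMQ version and download URL are assumptions, not the article’s actual build), a Dockerfile can simply unpack the distribution onto a JRE base image:

```dockerfile
# Sketch only — version and download URL are assumptions
FROM openjdk:8-jre-alpine
ENV ACTIVEMQ_VERSION=5.15.6
# Download and unpack the binary distribution, then create a stable path
RUN wget -qO- "https://archive.apache.org/dist/activemq/${ACTIVEMQ_VERSION}/apache-activemq-${ACTIVEMQ_VERSION}-bin.tar.gz" \
      | tar -xzf - -C /opt \
 && ln -s /opt/apache-activemq-${ACTIVEMQ_VERSION} /opt/activemq
# 61616: OpenWire broker port, 8161: web console
EXPOSE 61616 8161
# Run the broker in the foreground so the container stays alive
CMD ["/opt/activemq/bin/activemq", "console"]
```

Building and pushing this image gives Kubernetes something to schedule as the single-node broker.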

Original Link

Deploying Spring Boot to ECS (Part 2)

This post is a continuation from our previous post on deploying Spring Boot to ECS.  In our second installment, we will cover how to deploy a Spring Boot application in the ECS container. We will be using a simple task planner application that is available on GitHub.

A Docker file is already available inside this project. Make sure you have Git, Docker, and AWS CLI installed wherever you are running the following commands. Check out this link for a Git clone.

Original Link

Docker Containers: Challenges of Modern Application Delivery

In this article, we will discuss some of the many challenges that present themselves as organizations adopt microservices, Docker, containers, and continuous delivery practices. This blog post isn’t aimed at solving all your problems, but at giving you an idea of where you will likely encounter friction and how you might go about solving these issues in an organic fashion that aligns with your organization.

When to Use Docker Containers

You’ll find hordes of articles about leveraging Docker containers and microservice patterns, so I won’t repeat what’s already been said in at least 20 other places. Something that is often overlooked is that Docker allows us to package almost any application built in the past 10 years (longer, if you’re in for a challenge) and deliver that application using a vast landscape of mature tooling. You can choose from Kubernetes, Marathon on DC/OS, Docker Swarm, and now Netflix’s Titus. This is the secret sauce behind Docker: it allows for an opinionated approach to delivering software. This software can be microservices or legacy services, each with their own sets of challenges. If you can master and automate the tooling for packaging your applications, you will be in a great position to deliver the next generation of services for your business: microservices.

Original Link

Creating a Private Repository for Visual Studio Extensions with Docker

Extensions and project templates are very common nowadays; we use extensions every day in Visual Studio.

Extensions and templates are hosted in Visual Studio’s Marketplace and are public. In some cases, especially when we talk about project templates, they can contain the intellectual property of a company or project. Then we need a private “Marketplace,” and I will demonstrate how to create one with Docker:

Original Link

[DZone Research] Cloud Platforms, Frameworks, and Containers

This article is part of the Key Research Findings from the DZone Guide to Cloud: Serverless, Functions, and Multi-Cloud.


For this year’s DZone Guide to Cloud, we surveyed 739 software professionals from across the IT industry, asking them questions on various topics about cloud technology. In this article, we take a look at the data around our respondents’ favorite platforms, frameworks, and containers, and some reasons for the adoption of these technologies.

Original Link

Hybris With Docker for Development in Windows Machine


This document covers the setup of Hybris in Docker for Windows. In general, the Hybris build and server startup on a Windows machine takes close to an hour with 16 GB of RAM, and the time grows as extensions are added. This slows local development considerably. This solution helps make building and starting the application faster and more performant on Windows. In this setup, Hybris starts in a new container named hybris_docker, and Solr runs in the same container as Hybris. MySQL runs in a separate container, hybris_sql, with its volume mounted in the Linux VM. The volume for media is also mounted in the Linux VM, so the data persists across container starts and stops.

Please note that this document helps in setting up the standard Hybris environment. If you already have an existing Hybris project running, this document can help you migrate it to Docker containers with subtle changes. All the steps might not work for existing projects, and some Docker knowledge may be needed to make those changes.

Original Link

5 Keys to Modernizing Windows Development

Docker has emerged as a leading standard for application packaging and portability, garnering industry-wide support from Microsoft, AWS, Google, Red Hat, and others. Interest in Docker on Windows is accelerating, driven in part by Microsoft’s support of SQL Server 2017 on Linux containers.

Windocks was launched in 2016 as an independent port of Docker’s open-source project to Windows, with SQL Server database cloning enabling delivery of TB databases in seconds. These capabilities have generated widespread enterprise adoption and a growing understanding of the keys to successful Windows software development modernization. For full disclosure, I am a co-founder at Windocks, and the observations outlined here come from our support of these varied capabilities.

Original Link

The Art of the Helm Chart: Patterns from the Official Kubernetes Charts

Helm Charts package up applications for installation on Kubernetes clusters. Installing a Helm Chart is a bit like running an install wizard, so Helm Chart developers face some of the same challenges faced by developers producing installers:

  • What assumptions can be made about the environment that the install is running into?

Original Link

[DZone Research] Containers, Docker, and Popular Tools

This article is part of the Key Research Findings from the 2018 DZone Guide to Containers: Development and Management.


For our 2018 DZone Guide to Containers, we surveyed 711 software professionals, asking a range of questions related to the topic of containers. In this article, we take a look at what these developers told us about how they use containers, the prevalence of Docker, and the tools/methodologies they use when working with containers.

Original Link

Use Docker Instead of Kubernetes

Today we are all talking about containers and container-based infrastructure. But what is this container technology? And how does it solve today’s problems?

I use containers myself and, of course, I am fascinated by this server technology. Containers can really simplify things. After more than 20 years of building server applications, I have experienced many of these problems firsthand.

Original Link

Docker-Based Dev Environment for Active-Active Redis Enterprise

Redis Enterprise as an active-active database is ideal for geo-distributed apps. Its architecture is based on breakthrough academic research surrounding conflict-free replicated data types (CRDT). This approach offers many advantages over other active-active databases, including:

  1. Local latency for read and write operations,
  2. Built-in conflict resolution for simple and complex data types,
  3. Cross-region failover, and
  4. Streamlined implementation of use cases like leaderboards, distributed caching, shared sessions, multi-user metering and many more.

Recently, we published a tutorial on how to develop apps using active-active Redis Enterprise. In order to simulate the production setup, developers or testers need a miniaturized development environment — something that’s easy to create with Docker.

Original Link

Multiple MySQL Databases With One MySQL Container

Problem Statement

I want to create 2 databases inside one MySQL container and give the user of the first database full access to the 2nd database. With the official MySQL image, one can easily create a database and allow a user access to that database. However, a 2nd database is not so easily provisioned.


Docker images work on the concept of layers. Each new command, so to speak, creates a new layer, and herein lies our solution.
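
One way the layer concept helps here (the file, database, and user names below are hypothetical): extend the official image by one layer that drops an init script into /docker-entrypoint-initdb.d, the directory whose scripts the official entrypoint runs when the data directory is first initialized:

```dockerfile
FROM mysql:5.7
# This single extra layer adds an init script; the official entrypoint
# executes everything in this directory on first startup
COPY create-second-db.sql /docker-entrypoint-initdb.d/
```

Here create-second-db.sql would contain something like `CREATE DATABASE seconddb; GRANT ALL PRIVILEGES ON seconddb.* TO 'appuser'@'%';` (names hypothetical), giving the first database’s user full access to the second.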

Original Link

Creating Dual Layer Docker Images for Spring Boot Apps

In the first part of this series on Optimizing Spring Boot Apps for Docker, we looked at the single layer approach to building Docker images for Spring Boot applications and the implications it has for CI/CD pipelines. I proposed that a dual layer approach has concrete benefits over the single layer approach and that these benefits are in the form of efficiencies in iterative development environments.

Here, we introduce an approach to creating dual layer Docker images for existing Spring Boot applications using a new tool in Open Liberty called springBootUtility. There are alternate approaches to creating multi-layered Docker images for Spring Boot applications[1], but this approach focuses on creating a dual layer image from the existing application rather than altering a Maven or Gradle build step.

Original Link

Getting Started With Docker and Java [Video]

Docker is gaining a lot of hype these days. But, why would you want to use Docker with Java in the first place? Before you learn the answer to that question, you obviously need to setup Docker first. In this short and practical episode, you’ll learn how to install Docker on your machine and then finish with a small ‘hello-world’ achievement.

Original Link

Optimizing Spring Boot Apps for Docker

Docker is powerful and simple to use. It allows developers to create portable, self-contained images for the software they create. These images can be reliably and repeatably deployed. You can easily retrieve the value from Docker, but to get the most out of Docker, there are some concepts that are important for you to understand. How you build your Docker image has a measurable impact when you are doing continuous integration and delivery. In this article, I will focus on how to take a more efficient approach to build Docker images for Spring Boot applications when doing iterative development and deployment. The standard approach has some drawbacks, so here, we look at what they are and how to do it better.

Key Docker Concepts

There are four key Docker concepts at play: images, layers, the Dockerfile, and the Docker cache. Simply put, the Dockerfile describes how to build the Docker image. An image consists of a number of layers. The Dockerfile starts with a base image and adds additional layers. A new layer is generated when new content is added to the image. Each layer that is built is cached so it can be re-used on subsequent builds. When a Docker build runs, it will re-use any existing layers that it can from the cache. This reduces the overall time and space needed for each build. Anything that has changed, or has not been built before, will be built as needed.
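
To see the cache at work, consider which lines of a typical Spring Boot Dockerfile change between builds (a sketch; the base image and jar path are assumptions):

```dockerfile
# Base image layer: pulled once, then served from the cache
FROM openjdk:8-jdk-alpine
# Rarely changes, so this layer is almost always a cache hit
WORKDIR /app
# The jar changes on every build, so this layer, and every
# layer after it, must be rebuilt each time
COPY target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Because the application jar bundles both rarely-changing dependencies and frequently-changing application classes into one COPY layer, every build re-uploads the whole jar; that is the inefficiency the rest of this article addresses.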

Original Link

Best Practices of ECS Container Network Multi-NIC Solution

Container-based virtualization is a type of virtualization technology. Compared with a virtual machine (VM), a container is lighter and more convenient to deploy. Docker is currently a mainstream container engine, which supports platforms such as Linux and Windows, as well as mainstream Docker orchestration systems such as Kubernetes (K8S), Swarm, and Rocket (RKT). Common container networks support multiple models such as Bridge, Overlay, Host, and user-defined networks. Systems such as K8S rely on the Container Network Interface (CNI) plug-ins for network management. Commonly used CNI plug-ins include Calico and Flannel.

This article will introduce the basics of container networks. Based on Alibaba Cloud’s Elastic Network Interface (ENI) technology, the ECS container network features high performance, easy deployment and maintenance, strong isolation, and high security.

Original Link

Kubernetes vs. Docker Swarm: A Complete Comparison Guide

There are countless debates and discussions about Kubernetes and Docker. If you have not dived deep, you might think that the two open-source technologies are in a fight for container supremacy. Let’s make it clear: Kubernetes and Docker Swarm are not rivals! Both have their own pros and cons and can be used depending on your application requirements.

In this article, more light is shed upon these questions:

Original Link

How to Run Any Dockerized Application on Spring Cloud Data Flow

Spring Cloud Data Flow (SCDF) provides tools to build streaming and batch data pipelines. The data processing pipelines can be deployed on top of a container orchestration engine, for example, Kubernetes. When Kubernetes is the runtime for these pipelines, the pipeline steps can be Spring Boot starter apps or custom Boot applications packaged as Docker images. Python, however, is a popular data-munging tool, and SCDF cannot natively interpret Python Docker images. In this article, we will create a streaming data pipeline that starts a Python Docker image in one of the intermediate steps and uses the result of the computation in downstream components.


Let us consider a scenario where we are receiving data from sensors that generate sensor data tags as time series data. The sensor data is a JSON array, an example of which is shown below.

Original Link
