
Evolution of Kubernetes to Manage Diverse IT Workloads

Kubernetes started in 2014. For the next two years, its adoption as a container orchestration engine was slow but steady compared to counterparts such as Amazon ECS, Apache Mesos, Docker Swarm, GCE, etc. After 2016, Kubernetes began making its way into IT systems that run a wide variety of container workloads and demand higher performance in scheduling, scaling, and automation, enabling a cloud-native approach with microservices-based application deployments. Leading tech giants (AWS, Alibaba, Microsoft Azure, Red Hat) launched new solutions based on Kubernetes, and in 2018 these efforts began consolidating toward a de facto Kubernetes solution able to cover every use case that handles dynamic hyperscale workloads.

Two very recent acquisitions show the impact Kubernetes has had on the IT ecosystem: IBM’s acquisition of Red Hat and VMware’s acquisition of Heptio. IBM showed no direct interest in container orchestration itself but had its eyes on Red Hat’s Kubernetes-based OpenShift.


Comparing Containers vs. Serverless


In the early days of development, we relied on single, physical servers. We manually set up, coded, scaled, and maintained them, practically nursing our charges day and night, to provide functionality for other machines. The process was slow, painstaking, and demanded a lot of personal time. From there, we began combining physical servers into clusters, or used virtual machines to run multiple applications. Things got faster, but everything was still pretty manual. Eventually, we moved into Infrastructure-as-a-Service (IaaS), otherwise known as the Cloud.

Individuals could rent servers for a monthly fee, scaling up or down became easier, and the process was significantly faster. Platform-as-a-Service (PaaS) already existed as a delivery model under the Cloud umbrella and provided further improvements in security, scaling, and more. Containers helped make PaaS possible, and smoother too, providing even more benefits. And finally, Function-as-a-Service (FaaS) came into being, which many may know better by the name Serverless.

Some say that containers are yesterday’s news, and serverless is the way to go for creating modern-day applications. Are they right? In fact, both represent architecture that is designed for future changes, and both are intended for leveraging the latest innovations. So, let’s compare the two.


Containers

Containers are dedicated, lightweight boxes that hold an application’s code along with all of its pre-installed dependencies. As a single package, they can be run anywhere quickly, consistently, and reliably, regardless of the deployment environment. Initially the technology was innovative but demanding: devs had to know Linux, and had to script the process of placing an application in a container and running it on a host. When the San Francisco PaaS company dotCloud presented a CLI tool known as Docker, managing containers became easy. Then Google developed an open-source platform called Kubernetes for managing containers. The rest is history, as they say, and today a great many cloud providers offer hosting for containers.


Serverless

Serverless is so called because the owner of the system does not have to purchase or rent servers for the back-end code to operate, and functions can run without containers. That said, there are in fact still physical servers, as with any cloud-based service, but the end user doesn’t have to bother with them; that’s the responsibility of the service provider and its developers. AWS Lambda, launched in 2014, made the serverless technique a hot trend. When Amazon introduced API Gateway, the functionality became even better and more extensive.
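The programming model behind this is worth a quick sketch: you supply a single function and the provider invokes it per request. A minimal Python handler in the Lambda style might look like the following (the `name` field in the event is purely illustrative):

```python
import json

def handler(event, context):
    # Lambda-style entry point: the provider invokes this once per request.
    # `event` carries the request payload; `context` carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Behind an API gateway, the returned dict is translated into an HTTP response; when no requests arrive, nothing runs and nothing is billed.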

Which is Better: Containers or Serverless?

In reality, it seems that even though serverless is a newer technology than containers, both have pros and cons that keep them useful and relevant. Which solution is the right choice depends entirely on the individual circumstances that need to be factored in.

Advantages of Containers

  1. Container technology lets you make your applications as large as you want. With serverless, that’s not always the case, because size and memory constraints apply.
  2. Migrating existing applications is easier to do with containers than with serverless.
  3. You have flexibility and full control with containers when managing resources, setting policies, and controlling security, and they can run a variety of software stacks inside. Testing, debugging, and monitoring are not as easy with serverless tech.
  4. A container, just by its nature, is portable. Move them around and run them anywhere, whether in the Cloud or on a bare-metal server. Containers are vendor-agnostic, whereas serverless, by its nature, depends on a third party.
  5. A big, complicated application may be better served by containers. Then, if you choose to move your application over to the Cloud, the transition will be straightforward.

Disadvantages of Containers

  1. One of the biggest disadvantages is the overhead. You are the one who has to run the container, make sure there are security fixes, and monitor them all as needed.
  2. The learning curve with containers can be steep; not only to get them going but to keep them maintained.
  3. Running containers can also get expensive.

Advantages of Serverless

  1. One of the most important advantages is that no infrastructure administration is needed. Just upload your functions, and that’s it; the rest is the service provider’s responsibility. Without hardware to worry about, the load on the application developer is lightened, so IT organizations can focus on what they should be focusing on: developing applications instead of maintenance.
  2. There’s no worry about scalability. The Cloud provider you choose does it for you by scaling automatically.
  3. Serverless is cheaper than containers: you pay per function execution, not for idle time. When an application is not being used, it shuts down and incurs no cost, which can save a cash-strapped startup a lot of money.
  4. Updating or modifying a single function is typically easier thanks to the loosely coupled architecture.
  5. Almost all serverless solutions support event triggers, which makes them great for pipelines and sequenced workflows.

Disadvantages of Serverless

  1. It’s a technology that’s not good for long-running applications. Containers are better for that.
  2. Serverless is considered a “black box” technology, meaning you don’t necessarily know what’s going on inside.
  3. Serverless commonly depends on a third party, and changing providers can be a headache.
  4. The architecture of a true microservices environment can be very tricky to get right and, with serverless, typically requires a significant upfront investment in skilled staff.

So when comparing containers vs. serverless, you can see that there are pros and cons with both. It really comes down to choosing what is right for your solution needs.

If you have the budget, the flexibility, and the knowledge to install and maintain containers, and you want to be in control, then they’re a good choice, especially if you have large deployment needs. A developer can use Docker, which runs on both Windows and Linux. And then, of course, there is Kubernetes, which can help to manage large-scale container set-ups. The world of Kubernetes offers a wide variety of tools (I’ve listed a full 50 here), such as kubectl, for deploying and troubleshooting containers; Telepresence, another great development tool that uses a hybrid model; and Minikube, which lets you run Kubernetes on your laptop without needing a network connection.

Serverless, the newest of the technologies we’ve discussed, is typically fully managed. All the developer has to do is upload the code to the provider, which saves a lot of time and headaches. It’s hands-off, so you don’t have to concern yourself with the underlying infrastructure. The technology is great for startups that wish to save money or have a limited budget, because when it’s not in use, it shuts down and no costs are incurred; it’s a pay-as-you-go model. If you don’t have an issue with the limitations of vendor support or ecosystem lock-in, it can be a good solution for your needs.

Even though serverless is the newer technology, containers will still continue to play a significant and much-needed role. Many developers feel that serverless will not kill containers, and don’t even see serverless as a threat to container tech. Their functionality overlaps only in some instances, as the pros and cons above suggest. In practice, you may well end up using both, for different solutions.

The counterargument to this as anyone in technology knows is that things change at a rapid pace. What was once state-of-the-art becomes obsolete, replaced by something that is more efficient—and often cheaper. So, as serverless technology continues to evolve, we may see the time when containers do indeed become ‘yesterday’s news’.

Caylent offers DevOps-as-a-Service to high growth companies looking for help with microservices, containers, cloud infrastructure, and CI/CD deployments. Our managed and consulting services are a more cost-effective option than hiring in-house and we scale as your team and company grow. Check out some of the use cases and learn how we work with clients by visiting our DevOps-as-a-Service offering.


Top 4 Innovations in Containers and Cloud Computing

The container ecosystem has been maturing rapidly since Solomon Hykes deployed the first Docker container at PyCon in 2013, unveiling a tool so revolutionary that it would bring simplicity and portability to the software development community. Five years later, containers and container platforms continue to be at the heart of innovation in the cloud computing industry. From the announcement of Docker Engine 1.0 to breakthroughs in security, networking, and orchestration, the container industry continues to drive the emergence of new technologies that create new efficiencies and enhance productivity for developers and IT professionals alike.

So what does that container innovation look like in 2018 and beyond? Here is an overview of the top 4 innovations in container technologies that are emerging in the cloud computing industry, which will be discussed in even greater detail at this year’s DockerCon 2018.


Serverless

As with containers, serverless has enabled developers to focus on application development without worrying about underlying infrastructure considerations such as the number of servers, the amount of storage, and so on. Although we’re still in the early days of serverless, with a limited number of apps in production, it’s becoming more and more apparent that containers and functions are interrelated. Now seems like a good time to take a closer look at the Functions-as-a-Service (FaaS) options beyond proprietary cloud offerings such as AWS Lambda, Azure Functions, or Google Cloud Functions, which come with lock-in concerns for enterprises. Docker has enabled the creation of modern serverless frameworks such as Apache OpenWhisk, Fn, Gestalt, Nuclio, and OpenFaaS, which are great ways to easily build and deploy portable serverless applications.

These frameworks package functions as Docker images and run functions as Docker containers, and can be deployed on a container platform such as Docker Enterprise Edition. They let you structure your application as a set of functions that are triggered either by an event coming from an event bus, or by a call through an API gateway. This space is maturing with standardization: the CNCF Serverless working group recently unveiled an initial version of the OpenEvents specification for a common, vendor-neutral format for event data.
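Framework specifics aside, the trigger model these frameworks share can be sketched in plain Python: a registry maps event types to functions, and each incoming event, whether from an event bus or an API gateway, is dispatched to the matching function. All names here are hypothetical, not any framework’s real API:

```python
# Hypothetical sketch of event-triggered functions; not a real framework API.
registry = {}

def on(event_type):
    """Register the decorated function as the handler for an event type."""
    def wrap(fn):
        registry[event_type] = fn
        return fn
    return wrap

@on("image.uploaded")
def make_thumbnail(event):
    # A function reacts only to the event type it is registered for.
    return f"thumbnail for {event['object']}"

def dispatch(event):
    """Route an event from the bus or API gateway to its handler."""
    return registry[event["type"]](event)
```

In a real deployment, each function would be packaged as a Docker image and the dispatching would be the platform’s job, not yours.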

Service Meshes

Microservices architectures are becoming more popular as enterprises modernize their legacy applications, migrate workloads to the cloud, and build greenfield applications. Modern languages and products such as the Docker container platform have played a significant role in removing some of the complexity associated with both developing and deploying microservices. However, some challenges remain, the most important one being observability. The concept of a “service mesh” has recently emerged as a solution that manages the complexity of inter-microservice communication and provides observability and tracing in a seamless way.

Open Source projects such as Envoy, Istio, and Linkerd provide a large set of features such as resiliency, service discovery, routing, observability, security and interservice communication protocols.

Machine Learning

In addition to Developers and IT pros, Docker products have become extremely popular with data scientists. From the ability to share reproducible data research and analysis to rapid prototyping of deep learning models, Docker containers come with a lot of benefits for data analysts. Additionally, with the portability benefits of the Docker platform, data scientists have the flexibility to change their compute environment to leverage different compute resources as their data requirements change.

With the development of projects such as Kubeflow, it’s becoming easier to run machine learning workflows leveraging the TensorFlow open source machine learning framework on Kubernetes with Docker, improving both the portability and the scalability of running models. There have been many advances in running containerized Machine Learning workloads in production this year, from leveraging GPUs for containerized workloads to using RDMA sockets to accelerate network transfer via custom CNI plugin or avoiding weight servers with the Horovod project.


Blockchain

Revenue opportunities in the financial and cryptocurrency markets have made blockchain technology one of the hottest trends of the past year. Blockchain frameworks such as Ethereum and Hyperledger make it possible to build modular applications in which multiple parties can record immutable, verifiable transactions without the need for an independent third party. These frameworks usually leverage the Docker platform both during development and at runtime; Hyperledger Fabric, for example, “leverages containers to host smart contracts that comprise the application logic of the system.”

As new developments continue to emerge in cloud computing, containers will continue to be the baseline for new innovation. By implementing a container platform like Docker Enterprise Edition, users and organizations alike will have a strong foundation that provides the security, operational agility and choice of cloud or infrastructure needed to drive new innovation.


Rapid Development of Kubernetes Services With Telepresence

Imagine you’re developing a new Kubernetes service. Typically, the way you’d test is by changing the code, rebuilding the image, pushing the image to a Docker registry, and then redeploying the Kubernetes Deployment. This can be slow.

Or, you can use Telepresence. Telepresence will proxy a remote Deployment to a process running on your machine. That means you can develop locally, editing code as you go, but test your service inside the Kubernetes cluster.

Let’s say you’re working on the following minimal server, saved in a file we’ll call hello.py:

#!/usr/bin/env python3
from http.server import BaseHTTPRequestHandler, HTTPServer


class RequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        self.wfile.write(b"Hello, world!\n")


httpd = HTTPServer(('', 8080), RequestHandler)
httpd.serve_forever()

You start a proxy inside your Kubernetes cluster that will forward requests from the cluster to your local process, and in the resulting shell you start the web server:

localhost$ telepresence --new-deployment hello-world --expose 8080
localhost$ python3 hello.py

This will create a new Deployment and Service named hello-world, which will listen on port 8080 and forward traffic to the process on your machine on port 8080.

You can see this if you start a container inside the Kubernetes cluster and connect to that Service. In a new terminal run:

localhost$ kubectl run -i -t --restart=Never --image=alpine console /bin/sh
kubernetes# wget -O - -q http://hello-world:8080/
Hello, world!

Now, switch back to the other terminal, kill the running server, and edit the code so it returns a different string. For example:

localhost$ sed -i 's/Hello/Goodbye/g' hello.py
localhost$ grep Goodbye hello.py
        self.wfile.write(b"Goodbye, world!\n")
localhost$ python3 hello.py

Now that we’ve restarted our local process with new code, we can send it another query from the other terminal where we have a shell running inside a Kubernetes pod:

kubernetes# wget -O - -q http://hello-world:8080/
Goodbye, world!
kubernetes# exit

And there you have it: You edit your code locally, and changes are reflected immediately to clients inside the Kubernetes cluster without having to redeploy, create Docker images, and so on.

Additional Resources

If you’re interested in trying Telepresence on your own you can install locally with Homebrew, apt, or dnf.


Have questions? Ask in the Telepresence Gitter chatroom or file an issue on GitHub.


Under the Hood: An Intro to Kubernetes Architecture

If you’re making the move to containers, you’ll need a container management platform. And, if you’re reading this article, chances are you’re considering the benefits of Kubernetes.

But what is Kubernetes? What’s under the hood of this incredibly popular container orchestration engine? How does it all come together to deliver the potential of a future-ready, solid, and scalable solution for handling in-production, containerized applications? (Note the deliberate use of the word “potential”; we’ll come back to why we inserted that word later.)

In this article, we’ll discuss how Kubernetes works and why it has the potential (there’s that word again) to support enterprise-scale software/container management.

What Is Kubernetes?

Kubernetes (often abbreviated to K8s) is a platform for orchestrating applications that run in containers.

Not only does Kubernetes have everything you need to support your complex container apps, it’s also the most convenient framework on the market for both developers and operations.

Kubernetes works by grouping containers that make up an application into logical units for easy management and discovery. It’s particularly useful for microservice applications, apps made up of small and independent services that come together to create a more meaningful app.

Although Kubernetes runs on Linux, it is platform agnostic and can be run on bare metal, virtual machines, cloud instances, or OpenStack.

What’s Under the Hood?

To understand how Kubernetes works, let’s look at the anatomy of Kubernetes.

The Kubernetes Master Node

First, let’s talk about the master. This is the Kubernetes control plane, where decisions are made about the cluster, such as scheduling and detecting/responding to cluster events. The components of the master can be run on any node in the cluster. Below is a breakdown of each of the key components of the master:

  • API Server – This is the only component of the Kubernetes control plane with a user-accessible API and the sole master component that you’ll interact with. The API server exposes a RESTful Kubernetes API and consumes JSON manifest files.
  • Cluster Data Store – Known as “etcd,” this is a strongly consistent, highly available key-value store that Kubernetes uses for persistent storage of all API objects. Think of it as the “source of truth” for the cluster.
  • Controller Manager – Known as the “kube-controller-manager,” this runs all the controllers that handle routine tasks in the cluster, including the Node Controller, Replication Controller, Endpoints Controller, and Service Account and Token Controllers. Each of these controllers works separately to maintain the desired state.
  • Scheduler – The scheduler watches for newly created pods (groups of one or more containers) and assigns them to nodes.
  • Dashboard (optional) – Kubernetes’ web UI, which simplifies the cluster user’s interactions with the API server.
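Since the API server consumes JSON manifests, every API object is just structured data. As an illustration, here is a minimal pod manifest built as the Python dict it would be serialized from (the image and names are purely illustrative):

```python
import json

# A minimal Pod object, in the shape the API server receives as JSON.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "hello-world", "labels": {"app": "hello"}},
    "spec": {
        "containers": [
            {"name": "web", "image": "nginx:1.15",
             "ports": [{"containerPort": 8080}]},
        ],
    },
}

# The serialized form is what gets persisted in etcd as desired state.
manifest = json.dumps(pod, indent=2)
```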

Kubernetes Worker Nodes

The second important component under the hood is the worker node. Whereas the master handles and manages the cluster, worker nodes run the containers and provide the Kubernetes runtime environment.

Each worker node runs a kubelet, the primary node agent. It watches the API server for pods that have been assigned to its node, carries out their tasks, and maintains a reporting backchannel of pod status to the master node.

Inside each pod are containers, which kubelet runs via Docker (pulling images, starting and stopping containers, and so on). It also periodically executes any requested container liveness probes. In addition to Docker, rkt is also supported, and the community is actively working to support OCI.

Another component of worker nodes is kube-proxy. This is the network brain of the node, maintaining network rules on the host and performing connection forwarding. It’s also responsible for load balancing across all pods in the service.
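The balancing half of that job can be sketched as simple round-robin selection over a service’s pod endpoints. This is a toy model that ignores kube-proxy’s actual iptables/IPVS machinery, and the addresses are made up:

```python
import itertools

class RoundRobinBalancer:
    """Toy stand-in for kube-proxy's load balancing across a service's pods."""

    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        # Each new connection is forwarded to the next endpoint in turn.
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
```

Calling `lb.pick()` repeatedly cycles through the endpoints, spreading connections evenly across the pods.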

Kubernetes Pods

As mentioned earlier, a pod is a group of one or more containers (such as Docker containers), with shared storage/network. Each pod contains specific information on how the containers should be run. Think of pods as a ring-fenced environment to run containers.

Pods are also a unit for scaling. If you need to scale an app component up or down, this can be achieved by adding or removing pods.

It’s possible to run more than one container in a pod (where they share the same IP address and mounted volumes), if they’re tightly coupled.

Pods are deployed on a single node and have a definite lifecycle. They can be Pending, Running, Succeeded, or Failed, but once gone, they are never brought back to life. If a pod dies, a replication controller or other controller must be used to create a new one.

How Does It All Work Together?

Now that you have an understanding of what’s under the Kubernetes hood, let’s take a look at how it all works to automate the deployment, scaling, and operation of containerized applications.

Like all useful automation tools, Kubernetes works from object specifications, or blueprints, that take care of running your system. Simply tell Kubernetes what you want to happen, and it does the rest. A useful analogy is hiring a contractor (albeit a good one) to renovate your kitchen: you don’t need to know stage by stage what they’re doing; you just specify the outcome, approve the blueprint, and let them handle the rest. Kubernetes works in the same way. It operates on a declarative model: object specifications provided in so-called manifest files declare how you want the cluster to look. There’s no need for a list of commands; it’s up to Kubernetes to do anything and everything it needs to get there.
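The declarative model boils down to a reconciliation step: compare what the blueprint declares against what is actually running, and compute the actions that close the gap. A heavily simplified sketch of that controller idea:

```python
def reconcile(desired_replicas, running_pods):
    """Return the actions that move the observed state toward the declared one,
    in the spirit of a Kubernetes controller loop (heavily simplified)."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: start enough new ones to match the declaration.
        return [("start-pod", i) for i in range(diff)]
    if diff < 0:
        # Too many pods: stop the surplus.
        return [("stop-pod", name) for name in running_pods[:-diff]]
    return []  # observed state already matches the blueprint
```

Real controllers run this comparison continuously, so the cluster converges back to the declared state after any failure or change.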

Building Your Blueprints

Kubernetes blueprints consist of several building blocks, like Lego® bricks. You’ll compose your blueprint out of these blocks, and Kubernetes brings it to life. Blocks include things like the specifications to set up containers. You can also modify the specifications of running apps, and Kubernetes will adjust your system to comply.

This is quite a revolution. Just like the cloud revolutionized infrastructure management, Kubernetes and other systems are taking the application development space by storm. Now DevOps teams have the potential (there’s that word again) to deploy, manage, and operate applications with ease. Just send your blueprints to Kubernetes via the API interface in the master controller.

There are several available Lego blocks that can help define your blueprint. Some of the more important ones are:

  • Pods – A description of a set of containers that need to run together.
  • Services – An object that describes a set of pods that provide a useful service. Services are typically used to define clusters of uniform pods.
  • Volumes – A Kubernetes abstraction for persistent storage. Kubernetes supports many types of volumes, such as NFS, Ceph, GlusterFS, local directory, etc.
  • Namespaces – A tool used to group, separate, and isolate groups of objects. Namespaces are used for access control, network access control, resource management, and resource quotas.
  • Ingress rules – These specify how incoming network traffic should be routed to services and pods.
  • Network policies – This defines the network access rules between pods inside the cluster.
  • Configuration maps and secrets – Used to separate configuration information from application definition.
  • Controllers – These implement different policies for automatic pod management. There are three types:
    1. Deployment – Responsible for maintaining a set of running pods of the same type.
    2. DaemonSet – Runs a specific type of pod on each node, based on a condition.
    3. StatefulSet – Used when several pods of the same type are needed to run in parallel, but each of the pods is required to have a specific identity.
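Several of these blocks, notably Services, Deployments, and network policies, identify “their” pods the same way: by matching labels. The rule is plain subset matching, sketched here with illustrative labels:

```python
def selects(selector, labels):
    """True if every key/value pair in the selector is present in the labels,
    which is how a Service or controller picks out its pods."""
    return all(labels.get(key) == value for key, value in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1", "labels": {"app": "db"}},
]

# A Service whose selector is {"app": "web"} routes only to matching pods.
matched = [p["name"] for p in pods if selects({"app": "web"}, p["labels"])]
```

Because pods can carry extra labels beyond what a selector asks for, the same pod can be selected by several Services or policies at once.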

How Kubernetes Bulletproofs Itself

Kubernetes simultaneously runs and controls a set of nodes on virtual or physical machines. This is achieved by running agents on each node. The agent talks to the master via the same API used to send the blueprint to Kubernetes. The agent registers itself in the master, providing Kubernetes with information about the nodes. Reading through the API, the agent determines which containers are required to run on the corresponding node and how they are to be configured.

The master node runs several Kubernetes components. Together, these make all control decisions about which container needs to be started on which node and how it should be configured.

In addition, the master and agent may interact with a cloud provider and manage additional cloud resources such as load balancers, persistent volumes, persistent block storage, network configuration, and number of instances. The master can be a single instance running Kubernetes components or a set of instances to ensure high availability. A master can also serve (in certain configurations) as a node to run containers, although this is not recommended for production.

Realizing the True “Potential” of Kubernetes

As the market leader in container management, Kubernetes has all the components needed to deliver a solid architectural foundation and scalability for your enterprise’s in-production containerized applications. As an open source project with open standards and a huge community behind it, it also provides the flexibility needed to quickly adapt in today’s ever-changing IT environment. But recall our opening statement about the “potential” of Kubernetes. While it certainly can deliver on many of its promises, there are hurdles along the way that every enterprise should be mindful of.

As much as it is beloved, managing Kubernetes is a time-consuming process requiring highly-skilled staff and a potentially large monetary commitment. To address these challenges, Kubernetes management tools are emerging each day, but finding the right tool with the needed flexibility to adapt in an ever-changing IT landscape can be a problem.
