CI/CD With Kubernetes and Helm

In this blog post, I will discuss the implementation of a CI/CD pipeline for microservices that run as containers and are managed by Kubernetes and Helm charts.

Note: Basic understanding of Docker, Kubernetes, Helm, and Jenkins is required. I will discuss the approach but will not go deep into its implementation. Please refer to the original documentation for a deeper understanding of these technologies.


Create, Install, Upgrade, and Rollback a Helm Chart (Part 2)

In part 1 of this post, we explained how we can create a Helm Chart for our application and how to package it. In part 2, we will cover how to install the Helm package to a Kubernetes cluster, how to upgrade our Helm Chart, and how to rollback our Helm Chart.

Install Chart

After all the preparation work we have done in part 1, it is time to install our Helm package. First, check whether our Helm package is available:


Steering the Wheel of Your App Deployment With Helm


A package manager becomes essential as a growing number of services increases complexity in enterprise application development environments. Historically, enterprise app developers deployed on-premise applications with a simple copy and paste of the binaries, and then started writing basic scripts to deploy. This evolved into package managers like rpm, yum, pip, InstallAnywhere, etc.

Deployment infrastructure is shifting from on-premise to cloud, and deployment environments have moved from the OS to container engines and orchestration engines. This shift demands a new kind of package manager that never existed before.


eksctl Makes It Easy to Run Istio On EKS

It is now possible to run Istio on EKS in your Kubernetes cluster. Even better, Istio is fully supported by eksctl, a tool that makes spinning up clusters simple. Read on for a short tutorial on how to get Istio running in your cluster on EKS.

Two months ago we announced the first major release of eksctl, 0.1.0. This week we released 0.1.7, and felt that the time was right to discuss the improvements we've made since the 0.1.0 release.


IntelliJ IDEA 2018.3: Helm support

Not so long ago, IntelliJ IDEA 2018.1 Ultimate Edition introduced the initial support for Kubernetes through the new Kubernetes plugin. The forthcoming IntelliJ IDEA 2018.3 takes it even further and now the Kubernetes plugin gets Helm support!

In the blog post covering the first EAP build of IntelliJ IDEA 2018.3, we only briefly mentioned the availability of Helm support in the Kubernetes plugin. Now, the time has come to dive into the details.


Create, Install, Upgrade, and Rollback a Helm Chart (Part 1)

In this post we will explain how we can use Helm for installing our application. In part 1 we will take a look at how we can create a Helm Chart for our application and how to package it. In part 2 we will cover how to install the Helm package to a Kubernetes cluster, how to upgrade our Helm Chart and how to rollback our Helm Chart.

Create a Helm Chart

A prerequisite for what we are going to do in this post is an installed Helm Client and Server (Tiller). We explained in a previous post how to do this.


Deploy to Kubernetes With Helm

In this post, we will take a closer look at Helm: a package manager for Kubernetes. We will take a look at the terminology used, install the Helm Client and Server, deploy an existing packaged application and take a look at some useful Helm commands.


We can deploy our Docker image manually in Kubernetes and configure Kubernetes to manage our Docker image. So, why do we need a package manager for our application to deploy it to Kubernetes? A package manager will package your Docker image together with the Kubernetes configuration and let you deploy this all together. The advantage is that you are able to put your Kubernetes configuration under version control for your different environments (e.g. development, staging, production) and create a new package when something changes to it. There are several tools available which make it easier to deploy your application to Kubernetes and Helm is one of those. Helm is also maintained by the CNCF (Cloud Native Computing Foundation).
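To make the idea of a package concrete, the heart of a Helm package is a chart: a directory of YAML files that you can keep under version control. A minimal, hypothetical Chart.yaml is sketched below; the chart name and description are made up for illustration.

```yaml
# Chart.yaml: one file of a chart directory that also holds
# values.yaml (per-environment configuration) and templates/ (Kubernetes manifests)
apiVersion: v1
name: myapp
version: 0.1.0
description: Packages the myapp Docker image together with its Kubernetes configuration
```

Bumping the version field each time the Kubernetes configuration changes is what lets you roll environments forward and back as whole packages.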


The Art of the Helm Chart: Patterns from the Official Kubernetes Charts

Helm Charts package up applications for installation on Kubernetes clusters. Installing a Helm Chart is a bit like running an install wizard, so Helm Chart developers face some of the same challenges faced by developers producing installers:

  • What assumptions can be made about the environment that the install is running into?


Managing Helm Releases the GitOps Way

What is GitOps?

GitOps is a way to do Continuous Delivery: it uses Git as the source of truth for declarative infrastructure and workloads. For Kubernetes, this means using git push instead of kubectl create/apply or helm install/upgrade.

In a traditional CI/CD pipeline, CD is an implementation extension powered by the continuous integration tooling to promote build artifacts to production. In the GitOps pipeline model, any change to production must be committed in source control (preferably via a pull request) prior to being applied on the cluster. This way, rollback and audit logs are provided by Git. If the entire production state is under version control and described in a single Git repository, then when disaster strikes, the whole infrastructure can be quickly restored from that repository.
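The reconciliation step at the heart of this model can be sketched in plain shell: desired state (standing in for the Git repository) is compared with the recorded cluster state, and a change is applied only when they differ. This is a toy simulation under stated assumptions; the temp files and the cp standing in for kubectl apply are illustrative only.

```shell
# Desired state (the Git repo) vs. recorded cluster state: two temp files here.
desired=$(mktemp); actual=$(mktemp)
echo "replicas: 3" > "$desired"
echo "replicas: 2" > "$actual"

# Sync step: apply the committed state only if the cluster has drifted.
if ! cmp -s "$desired" "$actual"; then
  cp "$desired" "$actual"   # stands in for kubectl apply / helm upgrade
  echo "synced"
fi
cat "$actual"
```

Because the apply step only fires on drift, re-running the loop against an unchanged repository is a no-op, which is exactly the property that makes Git the audit trail.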


Getting Started with the OpenFaaS Kubernetes Operator on EKS

The OpenFaaS team recently released a Kubernetes operator for OpenFaaS.

For an overview of why and how we created the operator head over to Alex Ellis’ blog and read Introducing the OpenFaaS Operator for Serverless on Kubernetes.

The OpenFaaS Operator can be run with OpenFaaS on any Kubernetes service. In this post, I will show you step-by-step instructions on how to deploy to Amazon’s managed Kubernetes service (EKS).

The OpenFaaS Operator comes with an extension to the Kubernetes API that allows you to manage OpenFaaS functions in a declarative manner. The operator implements a control loop that tries to match the desired state of your OpenFaaS functions, defined as a collection of custom resources, with the actual state of your cluster.
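That control loop can be sketched in a few lines of shell. This is a toy model of reconciliation, not the operator's actual code; the replica counter stands in for creating Kubernetes deployments from the custom resources.

```shell
# Desired state comes from the custom resource; actual state from the cluster.
desired=3
actual=1

# Reconcile: keep acting until the actual state matches the desired state.
while [ "$actual" -lt "$desired" ]; do
  actual=$((actual + 1))   # stands in for creating one more function replica
done
echo "replicas: $actual"
```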

Setup a Kubernetes Cluster with eksctl

In order to create an EKS cluster, you can use eksctl. eksctl is an open source command-line utility made by Weaveworks in collaboration with Amazon. It’s written in Go and is based on EKS CloudFormation templates.

On macOS you can install eksctl with Homebrew:

brew install weaveworks/tap/eksctl

Create an EKS cluster with:

eksctl create cluster --name=openfaas \
  --nodes=2 \
  --region=us-west-2 \
  --node-type=m5.xlarge \
  --auto-kubeconfig

eksctl offers many options when creating a cluster:

$ eksctl create cluster --help

 eksctl create cluster [flags]

      --auto-kubeconfig            save kubconfig file by cluster name, e.g. "/Users/stefan/.kube/eksctl/clusters/extravagant-wardrobe-1531126688"
      --aws-api-timeout duration   number of seconds after which to timeout AWS API operations (default 20m0s)
      --full-ecr-access            enable full access to ECR
  -h, --help                       help for cluster
      --kubeconfig string          path to write kubeconfig (incompatible with --auto-kubeconfig) (default "/Users/aleph/.kube/config")
  -n, --name string                EKS cluster name (generated if unspecified, e.g. "extravagant-wardrobe-1531126688")
  -t, --node-type string           node instance type (default "m5.large")
  -N, --nodes int                  total number of nodes (for a static ASG) (default 2)
  -M, --nodes-max int              maximum nodes in ASG
  -m, --nodes-min int              minimum nodes in ASG
  -p, --profile string             AWS creditials profile to use (overrides the AWS_PROFILE environment variable)
  -r, --region string              AWS region (default "us-west-2")
      --set-kubeconfig-context     if true then current-context will be set in kubeconfig; if a context is already set then it will be overwritten (default true)
      --ssh-public-key string      SSH public key to use for nodes (import from local path, or use existing EC2 key pair) (default "~/.ssh/")
      --write-kubeconfig           toggle writing of kubeconfig (default true)

Connect to the EKS cluster using the generated config file:

export KUBECONFIG=~/.kube/eksctl/clusters/openfaas
kubectl get nodes

You will be using Helm to install OpenFaaS. For Helm to work with EKS you need version 2.9.1 or newer.

Install Helm CLI with Homebrew:

brew install kubernetes-helm

Create a service account and a cluster role binding for Tiller:

kubectl -n kube-system create sa tiller

kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

Deploy Tiller on EKS:

helm init --skip-refresh --upgrade --service-account tiller

Install OpenFaaS with Helm

Create the OpenFaaS namespaces:

kubectl apply -f

Generate a random password and create an OpenFaaS credentials secret:

password=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)

kubectl -n openfaas create secret generic basic-auth \
  --from-literal=basic-auth-user=admin \
  --from-literal=basic-auth-password=$password

Install OpenFaaS from the project helm repository:

helm repo add openfaas 

helm upgrade openfaas --install openfaas/openfaas \
  --namespace openfaas \
  --set functionNamespace=openfaas-fn \
  --set serviceType=LoadBalancer \
  --set basic_auth=true \
  --set operator.create=true

Find the gateway address (it could take some time for the ELB to be online):

export OPENFAAS_URL=$(kubectl -n openfaas describe svc/gateway-external | grep Ingress | awk '{ print $NF }'):8080

Then run:

echo http://$OPENFAAS_URL

to get the URL for the OpenFaaS UI portal.
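The grep/awk pipeline that fills OPENFAAS_URL simply picks the last whitespace-separated field of the line containing "Ingress" in the kubectl describe output. Here is the same extraction run against a canned sample line (the ELB hostname is fabricated for illustration):

```shell
# A fabricated line of `kubectl describe svc` output:
line='LoadBalancer Ingress:   a1b2c3.us-west-2.elb.amazonaws.com'

# Same extraction as above: last field of the Ingress line.
host=$(echo "$line" | grep Ingress | awk '{ print $NF }')
echo "http://$host:8080"
# prints http://a1b2c3.us-west-2.elb.amazonaws.com:8080
```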

Install the OpenFaaS CLI and use the same credentials to log in:

curl -sL | sudo sh

echo $password | faas-cli login -u admin --password-stdin

The credentials are stored in a YAML file at:


Manage OpenFaaS Functions with kubectl

Using the OpenFaaS CRD you can define functions as a Kubernetes custom resource:

kind: Function
metadata:
  name: certinfo
  namespace: openfaas-fn
spec:
  name: certinfo
  image: stefanprodan/certinfo:latest
  # translates to Kubernetes metadata.labels
  labels:
    # if you plan to use Kubernetes HPA v2
    # delete the min/max labels and
    # set the factor to 0 to disable auto-scaling based on req/sec
    com.openfaas.scale.min: "2"
    com.openfaas.scale.max: "12"
    com.openfaas.scale.factor: "4"
  # translates to Kubernetes container.env
  environment:
    output: "verbose"
    debug: "true"
  # secrets are mounted as readonly files at /var/openfaas/secrets
  # if you use a private registry add your image pull secret to the list
  secrets:
    - my-key
    - my-token
  # translates to Kubernetes resources.limits
  limits:
    cpu: "1000m"
    memory: "128Mi"
  # translates to Kubernetes resources.requests
  requests:
    cpu: "10m"
    memory: "64Mi"
  # translates to Kubernetes nodeSelector
  constraints:
    - ""

Save the above resource as certinfo.yaml and use kubectl to deploy the function:

kubectl -n openfaas-fn apply -f certinfo.yaml

Since certinfo requires the my-key and my-token secrets, the Operator will not be able to create a deployment but will keep retrying.

View the operator logs with:

kubectl -n openfaas logs deployment/gateway -c operator

controller.go:215] error syncing 'openfaas-fn/certinfo': secret "my-key" not found

Let’s create the secrets:

kubectl -n openfaas-fn create secret generic my-key --from-literal=my-key=demo-key
kubectl -n openfaas-fn create secret generic my-token --from-literal=my-token=demo-token

Once the secrets are in place, the Operator will proceed with the certinfo deployment. You can get the status of the running functions with:

kubectl -n openfaas-fn get functions
certinfo 4m

kubectl -n openfaas-fn get deployments
certinfo 1 1 1 1 1m

Test that the secrets are available inside the certinfo pod at /var/openfaas/secrets:

export CERT_POD=$(kubectl get pods -n openfaas-fn -l "app=certinfo" -o jsonpath="{.items[0]}")
kubectl -n openfaas-fn exec -it $CERT_POD -- sh

~ $ cat /var/openfaas/secrets/my-key
demo-key
~ $ cat /var/openfaas/secrets/my-token
demo-token

You can delete a function with:

kubectl -n openfaas-fn delete function certinfo

Set up the OpenFaaS Gateway with Let’s Encrypt TLS

When exposing OpenFaaS on the internet you should enable HTTPS to encrypt all traffic.

To do that you’ll need the following tools:

Heptio Contour is an ingress controller based on Envoy reverse proxy that supports dynamic configuration updates.

Install Contour with:

kubectl apply -f

Find the Contour address with:

kubectl -n heptio-contour describe svc/contour | grep Ingress | awk '{ print $NF }'

Go to your DNS provider and create a `CNAME` record for OpenFaaS, something like:

$ host is an alias for has address has address

Install cert-manager with Helm:

helm install --name cert-manager \
  --namespace kube-system \
  stable/cert-manager

Create a cluster issuer definition (replace `EMAIL@DOMAIN.NAME` with a valid email address):

kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: EMAIL@DOMAIN.NAME
    http01: {}
    privateKeySecretRef:
      name: letsencrypt-cert
    server:

Save the above resource as `letsencrypt-issuer.yaml` and then apply it:

kubectl apply -f ./letsencrypt-issuer.yaml

Create an ingress definition for OpenFaaS (replace `` with your own domain name):

ingress:
  enabled: true
  annotations:
    "contour"
    "letsencrypt"
    "30s"
    "3"
    "gateway-error"
  hosts:
    - host:
      serviceName: gateway
      servicePort: 8080
      path: /
  tls:
    - secretName: openfaas-cert
      hosts:
        -

Save the above resource as `ingress.yaml` and upgrade the OpenFaaS release with Helm:

helm upgrade --reuse-values -f ./ingress.yaml openfaas openfaas/openfaas

In a couple of seconds cert-manager should fetch a certificate from LE:

kubectl -n kube-system logs deployment/cert-manager-cert-manager

successfully obtained certificate: cn="" altNames=[] url=""
Certificate issued successfully

Verify the certificate with the `certinfo` function:

curl -d ""

Host
Port 443
Issuer Let's Encrypt Authority X3
NotBefore 2018-07-08 09:41:15 +0000 UTC
NotAfter 2018-10-06 09:41:15 +0000 UTC
SANs []
TimeRemaining 2 months from now

Monitoring EKS and OpenFaaS with Weave Cloud

Now that you have an EKS cluster up and running, you can use Weave Cloud to monitor it. You'll need a Weave Cloud service token. If you don't already have one, go to Weave Cloud and sign up for a free trial account.

Deploy the Weave Cloud agents with Helm:

helm repo update && helm upgrade --install --wait weave-cloud \
  --set token=YOUR-WEAVE-CLOUD-TOKEN \
  --namespace weave \
  stable/weave-cloud

Navigate to Weave Cloud Explore and inspect your cluster:

Weave Cloud extends Prometheus by providing a distributed, multi-tenant, horizontally scalable version of Prometheus. It hosts the scraped Prometheus metrics for you, so that you don’t have to worry about storage or backups. Weave Cloud also comes with canned dashboards and alerts for Kubernetes that you can use to monitor a specific namespace:

The dashboards can also detect OpenFaaS workloads and they show RED metrics stats as well as Golang internals.

Navigate to Weave Cloud Workloads, select openfaas:deployment/gateway and click on the OpenFaaS tab:

Navigate to Weave Cloud Workloads, select openfaas:deployment/gateway and click on the Go tab:

Set up CloudWatch Integration with Weave Cloud

Monitor your AWS ELB service by configuring Weave Cloud to work with CloudWatch. After logging into Weave Cloud, select the settings icon from the main menu and choose “AWS CloudWatch” beneath configure. Follow the instructions in the screens provided. You have two choices: you can use the AWS GUI or you can configure it with the AWSCLI.

Once metrics start getting pushed to Weave Cloud, you can monitor your ELB service from the AWS Cloudwatch dashboards by clicking Monitor -> AWS CloudWatch.


The OpenFaaS Operator offers more options for managing functions on top of Kubernetes. Besides the faas-cli and the OpenFaaS UI, you can now use kubectl, Helm charts, and Weave Flux to build your continuous deployment pipelines. Running OpenFaaS on EKS and Weave Cloud, you get a production-ready function-as-a-service platform with built-in continuous deployment, monitoring and alerting. If you have questions about the operator please join the “#kubernetes” channel on OpenFaaS Slack.


Thanks to Alex Ellis – for his review and feedback.

Thanks also to the OpenFaaS community and especially to the early adopters – for their comprehensive testing.


A Deep Dive Into Cloud-Agnostic Container Deployments

This article is featured in the new DZone Guide to Containers: Development and Management. Get your free copy for more insightful articles, industry statistics, and more! 

Container technology has been evolving since its primitive origins in the late 1970s through the Docker era that debuted in 2013, and it's now safe to say we're firmly ensconced in the age of Kubernetes. Since the deployment method's inception, and especially since Docker popularized the practice, containers have grown in renown and dramatically enhanced the development landscape. Their usage has drastically improved the ways in which developers can implement distributed applications.

The focus on consistent container deployments across various platforms has made them increasingly embraced by enterprises of all shapes and sizes, especially as the latest container orchestration tooling guides even the greenest developers from development to production and has led to more containers being deployed and managed than ever before. The desire for improved control has attracted various software options as answers to proper container orchestration.

Kubernetes and Docker Swarm remain the two major tools on the market and are used by prominent internet companies for container orchestration. Other players on the scene are Amazon  Elastic Container Service (Amazon ECS), Shippable, Apache Mesos, Marathon, and Azure Container Service (AKS), to name a few.

Kubernetes vs. Docker Swarm: Most Effective Deployment Method

Container orchestration refers to the automated organization, linkage, and management of software containers. These concepts are conventional for most of the tools mentioned above. This article aims to deep dive into a comparison between the two dominating players. Below are the features offered by both Kubernetes and Docker Swarm.

  • Clustering: For synchronized computing ability across multiple machines.

  • High availability: Run the same code simultaneously in multiple locations.

  • Fault tolerance: If a container fails, it can be relaunched automatically.

  • Secret management: Share secrets safely between different hosts.

Advantages of Docker Swarm

There are fundamental differences in the way Kubernetes and Docker swarm operate, though, which pose advantages for one platform over the other for different end users. Here are some pros of Docker Swarm vs. Kubernetes.

Easy, Fast Setup

Docker Swarm orchestration is easy to install and configure. You only need to deploy a node and request that it join a cluster. A swarm allows a node to join a cluster either as a manager or a worker, thus providing more flexibility. For an idea of how simple this is, check out Creating a High-Availability Docker Swarm on AWS.

Works With Existing Docker APIs

The Swarm API derives most of its functionality from Docker itself. Kubernetes needs its own API, client, and YAML, which are all different from Docker’s standards.

Load Balancing

Docker Swarm provides automated load balancing via any node: all containers within a cluster join a common network, so a connection to any node can reach any container.

Sharing of Data Volumes

Docker Swarm simplifies local data volume sharing. Volumes (directories) can be generated individually or together with the containers, then divided among multiple containers.

Disadvantages of Docker Swarm

Limited Functionality

Docker Swarm's functionality revolves around what the Docker API provides. If the Docker API lacks a specific operation, Swarm can't offer it.

Limited Fault Tolerance

Docker Swarm offers only limited fault tolerance.

No Built-In Scalability

At the time of writing, scaling containers and the support infrastructure is achievable but very tricky to do.

Unstable Networking

Docker Swarm's underlying networking has shown many signs of instability, and the result is that many production environments run into issues: there's flaky networking underpinning everything.

Advantages of Kubernetes

Simple Service Assembly with Pods

Kubernetes designates sets of container pods as services to permit load balancing. Each function is achieved by using a set of pods and policies to set up load balancing, and this configuration does not rely on individual IP addresses.
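As a sketch of that idea, the Service below selects pods by label rather than by IP address; every name in it is hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-function
spec:
  selector:
    app: my-function   # traffic is balanced across every pod carrying this label
  ports:
    - port: 80         # port exposed by the Service
      targetPort: 8080 # port the pods listen on
```

Pods can come and go with new IP addresses; as long as they carry the label, the Service keeps routing to them.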

High Service Availability Retention

Kubernetes monitors the system and clusters progressively to upkeep service health and maintains its availability.

High Scalability

Gain the ability to build clusters across different locations and providers, with built-in auto-scaling at both the cloud and container levels.

Highly Extensible

Kubernetes was designed to be extremely flexible and support many plugins. This has allowed for very large community growth, and as such, a massive selection of tools and plugins.


  • Elevated data sharing: Kubernetes allows containers to share data within pods. It also allows external data volume managers to exchange data between pods.

  • Cloud integration: Kubernetes is highly extensible and its cloud integrations are awesome. A load balancer service will deploy the matching managed service in GCP, AWS, or Azure. Many other platforms — Docker included — have also added Kubernetes support.

Disadvantages of Kubernetes

Potentially Overwhelming to Install and Configure Manually

Kubernetes necessitates a set of manual configurations to tie its components to the Docker engine. It comes with unique installations for every operating system. Before installation, Kubernetes requires information like node IP addresses, their roles, and numbers. There are many tools available to simplify the install and config process, though.

A New Level of Complex

Kubernetes is considered relatively white-box: you can get a lot more out of it, but you need a deep understanding of what makes Kubernetes tick to achieve this. The platform is not designed for novices or the faint of heart.

Throughout the pros and cons of Docker Swarm, you can note that Swarm’s focus is on the ease of adoption and integration with Docker. Kubernetes, on the other hand, stands open and flexible. The K8s platform delivers top support for highly complex demands. This versatility is behind the preference for Kubernetes by many high-profile internet companies.

Three Major Kubernetes Object Management Techniques

There are three Kubernetes deployment techniques to make your work plan faster and smoother. Among the three, you should choose only one method at a time to manage a Kubernetes object. Use of multiple techniques can cause unstable performance. Here are the three types of Kubernetes object management techniques that you can use.

Imperative Commands

These are “easier-to-use” commands, and they’re simple to recall over and over again. The commands deliver a one-step change to a cluster, and you work directly on live objects. Type operations into the kubectl command line as flags or arguments. This technique does not provide any history of earlier configuration, though. Hence, previous commands can’t be used in change review processes. Imperative commands are ideally applicable in development projects.

Imperative Object Configuration

With imperative object configuration, the kubectl commands mainly focus on an operation like create, replace, and others. It also defines an optional flag and at least one file name. Alongside the file name, it gives a complete definition of an object in YAML or JSON format. This technique is preferred by many developers, as it allows commands storage in a source control system.

It is also useful in change review processes, as commands can be integrated. Furthermore, configurations can serve as templates for new objects.
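For instance, a hypothetical nginx-deployment.yaml kept in source control could be applied with an imperative command such as kubectl create -f nginx-deployment.yaml (the names and image tag below are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.15
          ports:
            - containerPort: 80
```

Because the full object definition lives in the file, the same file can be replayed with kubectl replace -f or reused as a template for new objects.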


Declarative Object Configuration

Declarative object configuration allows you to operate on object configuration files stored locally without specifying the operations to carry out on them. kubectl automatically determines whether to create, update, or delete each object. This enables working on whole directories, where different operations may be needed for different objects.

Leverage Kubernetes Performance with Helm Charts

What Are Helm Charts?

Just as a captain steers a ship, Helm offers greater control over Kubernetes clusters. It can be thought of as a package manager that offers greater flexibility in the creation of Kubernetes definition YAMLs through a templating language and structure. Helm for Kubernetes has both a client-side and a server-side component (Helm is the package manager CLI, and Tiller is the in-cluster component that works with the K8s API server). The client collaborates with the server to effect changes to the Kubernetes cluster.

In a standard Helm sequence, the user initiates the Helm install command. The Tiller server responds by setting up the relevant install package into the Kubernetes cluster. Such packages are known as charts, and they offer a convenient approach to distribute and install packages.

Helm fulfills the need to efficiently and reliably provision Kubernetes orchestration without all the individual configuration involved. Charts are the software equivalent of development templates. Therefore, you can quickly achieve installation, updates, and removal without any fuss. Helm charts will even help you override the Kubernetes disadvantages as addressed earlier. Your team can concentrate its focus on developing applications and improving productivity instead of deploying dev-test environments. Helm takes care of all of that for you.
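To make the templating idea concrete, here is a hypothetical fragment of a chart: a templates/deployment.yaml that pulls its image coordinates from values.yaml, so each environment only overrides values instead of editing manifests. The names are illustrative, not from any real chart:

```yaml
# templates/deployment.yaml (fragment): the {{ ... }} expressions are Helm templating
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# values.yaml: defaults, overridable per environment with --set or -f
image:
  repository: myorg/myapp
  tag: "1.0.0"
```

At install or upgrade time, Helm renders the templates with the supplied values before handing the resulting manifests to Kubernetes.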

In addition to these benefits, your team doesn't need to preserve service tickets during Kubernetes deployments, and you also eliminate the complexity of maintaining an app catalogue. The Kubernetes Helm charts repository on GitHub stores a huge collection of community charts that can be used in a matter of clicks. Among the most reliable Helm charts you could consider are the ones for MySQL, MariaDB, MongoDB, and WordPress.


In conclusion, if containers are discouraging you at all, use Kubernetes to scale them. Kubernetes is highly extensible and comes with a lot of plugins to support your productivity. Furthermore, deploy all your containers and clusters using Helm and Helm charts to further boost efficiency.



Redis Enterprise Release Using Helm Charts

Helm is a tool that makes the installation and management of Kubernetes applications efficient. Helm helps you manage Kubernetes Charts. Charts are a collection of information and files needed to create an instance of a running Kubernetes application. There are three main concepts in Helm:

  1. Charts — Collections of files inside a directory used to create a Kubernetes application
  2. Config — Contains configuration information used to create a releasable object
  3. Release — A running instance of a Chart

Helm has two main components:

  • Client — Sends Charts to the server component for installation or upgrade of existing releases
  • Server — A component called Tiller that interacts with the Kubernetes API server using gRPC

Why You Need Helm Charts

Manual deployment of a Kubernetes application, which may consist of many resources, can be prone to errors such as failing to deploy a resource or mistyping an input when issuing kubectl commands. You can avoid these problems by automating the steps in a script. However, the problem with a home-grown automation script is that its logic cannot be easily transferred from one Kubernetes cluster to another.

Introducing the Redis Enterprise Kubernetes Release

Redis Labs, home of Redis, has been working on a Kubernetes-based deployment of Redis Enterprise for the last few months. We have written our own Kubernetes controller which deploys a Redis Enterprise database service on a Kubernetes cluster. The Redis Enterprise release is made up of many Kubernetes resources, such as service, deployment, StatefulSet and Kubernetes secrets.

How Helm Charts Improve the Redis Enterprise Kubernetes Release

During the beta period of the product's development, we used to deploy all the required Kubernetes resources manually, which was error prone. Synchronizing YAML files between Kubernetes clusters and managing configuration versions started to become a challenge. Helm Charts allow us to deploy the Redis Enterprise service with a single command to a Kubernetes namespace of your choice:

helm install --namespace redis -n 'production' ./redis-enterprise

How Do You Get Started?

  • You can download our Redis Enterprise Helm Charts
  • You can download our Redis Enterprise Docker container image
  • You can find the readme of our Helm Charts

It’s really that simple!

What’s Next?

If you would like to start experimenting with our Kubernetes release candidate, please contact us so that we can help you with your Redis needs.


Getting Started with Node.js Applications on Microsoft’s Azure Kubernetes Service (AKS)


AKS is Microsoft’s managed container environment hosted on Azure. Using Kubernetes, AKS gives you the power of orchestration for your applications with provisioning, upgrading, and scaling. All of this without initial setup or ongoing maintenance.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

You may be asking why it's important to understand this infrastructure. With a lot of infrastructure moving towards containerized solutions, it's worth understanding, as a Node.js developer, how your applications run in a production environment.

Being able to replicate a production setup on your local machine can make debugging issues quicker and easier.

Understanding the build process once you’re ready to ship code can help avoid situations where developers don’t understand how applications are built and run in production.

Shared knowledge can help contribute towards a collaborative environment between development and operations teams.

This post walks you through deploying a Node.js application to AKS using Helm, beginning with replicating the setup locally and then transferring over to a hosted, production-ready environment.

Helm makes managing Kubernetes applications easier via customizable Charts which define application setup, provide repeatable installations and serve as a single point of authority.

Local Setup

Before diving into the Azure setup, let’s replicate the production setup locally. This provides you with a playground to test changes without the need to push to a hosted environment.

Here’s what the local setup looks like:

We’ll use macOS in this article. Developers on Windows have access to all the same functionality in the Edge version of Docker for Windows Desktop.

Initial Setup

  1. Install the Edge version of Docker for Mac. This version provides native Kubernetes functionality out of the box.
  2. Once installed open ‘Preferences’.
  3. Navigate to ‘Kubernetes’ and tick ‘Enable Kubernetes’. Once activated this lets you run a Kubernetes cluster locally.
  4. Run kubectl config get-contexts to ensure you’re using the ‘docker-for-desktop’ context. If you’ve used Minikube before, you might need to switch to ‘docker-for-desktop’ with:
     kubectl config use-context docker-for-desktop

  5. The final part of the initial setup is to run helm init to initialize Helm and Tiller in your cluster.

Build your Docker Image

  1. Set up a local Docker registry to store and retrieve images; this can be done with:
     docker run -d -p 5000:5000 --restart=always --name registry registry:2

    Further details on customizing your own registry can be found in the Docker documentation for deploying a registry server.

  2. Push your first image to your local registry. The example app for this article is react-pwa, a Progressive Web App (PWA) built in React.js with a Node.js backend. For more information on PWAs, check out an in-depth article from nearForm, Building Progressive Web Apps. nearForm provides images via Docker Hub for Node.js 8 LTS, 9 and 10. These images are updated within a very short time of any Node.js release or OS base image change, and commercial support is available for 8 LTS. There are images for CentOS, Red Hat Enterprise Linux and Alpine Linux. The Dockerfile being used is available at ./infrastructure/docker/app.Dockerfile; it uses nearForm's Node.js Alpine image.
     FROM nearform/alpine3-s2i-nodejs:8

     # Create app directory
     WORKDIR /opt/app-root/src

     COPY package*.json ./
     RUN yarn install --production=false

     COPY . .
     RUN yarn run build

     EXPOSE 3000
     CMD [ "yarn", "start" ]

  3. Build the image by running the following command:
     docker build . --tag react-pwa --file infrastructure/docker/app.Dockerfile

    Generally, you tag images with a specific version number, but for local development this is not necessary. Using the latest tag allows you to run the same build command each time you need to rebuild the image. Once built, see the image listed by running the command docker images.

Install Your Application Stack

To build your Kubernetes deployment, service and ingress run the following command:

helm install infrastructure/deployment/charts/app --name react-pwa -f infrastructure/deployment/charts/app/

A Helm chart is a collection of files describing Kubernetes resources using templates and configuration files. Templates use values from configuration files, the default being values.yaml or from overrides defined on the command line using the --set flag.

Rather than taking the values from the default values.yaml file, the install command passes in an override file with settings specific to local deployment.

To ensure everything is up and running run kubectl get pods.

This gives you a list of pods and their status.

If for any reason any of the pods don’t have the ‘Running’ status then it’s likely that there was a problem with the installation. Run kubectl describe pod react-pwa-app-<generatedPodHash> to list the events that took place during installation.

Another useful command you can run against your pods is kubectl logs react-pwa-app-<generatedPodHash>. If your application is ‘Running’ then you can see the logs from your application initialization.

Once your pods are up and running set up an NGINX ingress controller daemon to act as a load balancing proxy for your applications. Run the following command:

helm install stable/nginx-ingress --name local-nginx

Rather than creating your own charts, you can use charts from the official Helm repository for this. The controller listens for deployments or changes to ingress resources, updates its NGINX config, and reloads when required.

Finally, add a hosts entry mapping hn.nearform.local to localhost; you can do that with the following command:

echo "127.0.0.1 hn.nearform.local" | sudo tee -a /etc/hosts
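The hosts change can be rehearsed safely against a temporary file standing in for /etc/hosts (the real file needs sudo to edit), so the entry can be verified with grep before relying on the browser to resolve it:

```shell
# Rehearse the hosts entry against a temp file standing in for /etc/hosts.
hosts_file=$(mktemp)
echo "127.0.0.1 hn.nearform.local" >> "$hosts_file"
# Confirm the mapping is present:
grep "hn.nearform.local" "$hosts_file"
```

Once the same line is appended to the real /etc/hosts, the browser resolves hn.nearform.local to your local machine.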

Now navigate to hn.nearform.local in your browser to see your application running.

If you get a response but it's not what you're expecting, make sure there isn't anything else already running on port 80. You can check by running the following command:

sudo lsof -i :80 | grep LISTEN

If there is something else running you may be able to disable it with the following command:

sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist

Now you have a local setup close to production that can be ported over to Azure’s hosted solution!

To update your application in the cluster re-run the docker build command to rebuild the image and then run upgrade with Helm:

helm upgrade react-pwa infrastructure/deployment/charts/app -f infrastructure/deployment/charts/app/

AKS Setup

Now that you’re running locally it’s time to port it over to a hosted environment to create an externally accessible application.

Here’s what the setup looks like on AKS; as you can see, it mirrors our local setup closely.

Initial Setup

Microsoft already has some very good documentation on setting up your Kubernetes cluster and a container registry on Azure. I’d highly recommend following the guide on setting up your Azure environment.

What isn’t covered here though, and what we’ll go over in this post, is the use of Helm for AKS deployments.

Push Your Docker Image

  1. Tag the Docker image created earlier so you can push it to the Azure Container Registry. Tag your image with a version number and the <acrName>, which is the name of the registry you created on Azure:
     docker tag react-pwa <acrName>/react-pwa:0.1.0

  2. Push your image to the remote Azure registry using:
     docker push <acrName>/react-pwa:0.1.0

  3. Check your image is in the Azure registry by using the Azure CLI:
     az acr repository list --name <acrName> --output table
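Putting the naming convention together as a sketch (myregistry.azurecr.io is a hypothetical login server; substitute the one reported for your <acrName>):

```shell
# Compose the fully qualified image reference to tag and push to ACR.
# "myregistry.azurecr.io" is a hypothetical registry; use your own <acrName>.
ACR_LOGIN_SERVER="myregistry.azurecr.io"
IMAGE="react-pwa"
VERSION="0.1.0"
echo "${ACR_LOGIN_SERVER}/${IMAGE}:${VERSION}"
```

This composed reference is what both the docker tag and docker push commands above operate on.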

Install Your Application Stack

  1. Ensure you’re running against the correct Kubernetes context by running the following command:
     kubectl config get-contexts

    This command lists your local cluster and the cluster deployed to Azure. You can switch clusters with kubectl config use-context <aksClusterName> where <aksClusterName> is the name of the cluster created on AKS.

  2. Install the application specific resources with Helm. You don’t need to pass in a specific file this time as you are using the default values.yaml configuration.
     helm install infrastructure/deployment/charts/app --name react-pwa --set ingress.hosts={<customHostname>} --set imgrepo=<hostedImageRepo>

    <customHostname> is your chosen hostname and <hostedImageRepo> is the path of your hosted image repository on Azure.

  3. Use kubectl describe and kubectl logs to inspect and debug deployments.
  4. Once those pods, services and ingress are deployed and running you can create the NGINX ingress controller daemon with:
     helm install stable/nginx-ingress --name nginx-controller

  5. As suggested by the output of the above command you can run:
     kubectl --namespace default get services -o wide -w nginx-controller-nginx-ingress-controller

    This command waits for the pending IP address to be created and notifies you when it’s done. Once you have the external IP address, requesting it returns the default 404 response from NGINX.

  6. The final step is to configure your DNS to create an A record pointing the domain name specified in your Helm chart at the external IP address provided. Once the DNS change propagates your application is running on your defined hostname. To set up HTTPS and automatic HTTP to HTTPS redirects in AKS there are detailed instructions provided by Microsoft which expand upon the NGINX ingress controller setup.


You’ve taken a Node.js application and deployed it to a modern infrastructure while avoiding some of the setup and maintenance costs associated with building a production ready, scalable, containerized solution.

The declarative nature of much of this setup lends itself well to automation. The next steps are to automate the process of upgrading applications via CI as well as scripting the initial setup to be replicable for any application.

Original Link

Kubernetes Namespaces Explained

Why Namespaces?

Kubernetes Namespaces can be used to divide and manage a cluster, segregating the cluster for use by different dev teams or for different purposes (e.g. dev and prod). Namespaces can be a focus for role-based access control, with the option to define roles as applying across a namespace. Resource quotas can be set as specific to a namespace, allowing specific namespaces to be given higher resource allowances. Sometimes one will choose to use distinct clusters for different purposes but if the purposes are related then namespaces can be a more convenient or appropriate option.

To understand namespaces we need to understand that they provide a scope for Kubernetes Names. Within the Namespace an Object can be referred to by a short name like ‘Captain’ and the Namespace adds further scope to identify the object, e.g. which ship the Captain is Captain of. Within the Namespace the name must be unique (there can only be one Captain per Namespace) but only the combination of Name and Namespace needs to be unique across Namespaces. This simplifies naming within Namespaces and means that it is possible to cross Namespace boundaries. Let’s explore this by example.
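The naming scheme can be sketched as simple string assembly, assuming the default Kubernetes cluster domain of cluster.local:

```shell
# How a Service name gains scope as you cross namespace boundaries.
service="captain"
namespace="tos"
echo "${service}"                                  # short name, same namespace
echo "${service}.${namespace}"                     # from another namespace
echo "${service}.${namespace}.svc.cluster.local"   # fully qualified
```

The short name is unique within its namespace, while the longer forms disambiguate between the two ships' captains.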

Namespaces Demo App

Our Demo App consists of two simple Spring Boot web apps with simple UIs. Each app is a microservice which can be set to represent the role of ‘captain’, ‘bridge’ or ‘science-officer’, depending upon the value set for a configuration property. We’ll build one instance of each role per project. The two projects represent different ‘ships’ and are the same except that the UIs have different images to distinguish the ships.

Within each microservice, we can make a call to another, such as:

@GetMapping(value = "bridge")
public String callBridge() {
    return restTemplate.postForEntity(bridgeUrl, appName, String.class).getBody();
}

The bridgeUrl is configured using a Spring Boot property. The appName is the calling service’s own name, so that the responding service can acknowledge who it is replying to. Each service needs to be able to reply to such a call:

public String respond(@RequestBody String caller) {
    return String.format("yes %1$s, %2$s here", caller, appName);
}

So if the captain calls the bridge then the bridge responds with “Yes captain, bridge here”.

Each service contains a UI that retrieves the app name from the backend’s ‘app-name’ endpoint and uses it to set the title and decide which image to show:

$.get( "app-name", function( data ) {
    document.title = data;
    $('#pageTitle').text(data);
    $('#picture').attr('src', data + '.jpg');
    $('#call-' + data).hide();
}).fail(function(error) {
    alert('cannot retrieve app config: ' + error.responseText);
});

So the image could be bridge.jpg, captain.jpg or science-officer.jpg. The .hide()  function here ensures we don’t have a button for the captain to call himself, for example.

The embedded UI just provides buttons to call its backend to communicate across the ship:

[Image: the embedded UI with buttons to call the other services]

Demo App in Minikube

The two apps are both set up with the Fabric8 Maven plugin so that we can build Docker images for each — the first is called ‘startrek/tos’ and the second is called ‘startrek/tng’.

To deploy the apps to minikube we have a Kubernetes deployment descriptor for each. The descriptors are almost the same. For the tos descriptor we first create a Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: tos
  labels:
    name: tos

Then a ConfigMap for configurations that we will apply to all of the services in the descriptor:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tos-config
  namespace: tos
data:
  JAVA_OPTS: -Xmx64m -Xms64m
  BRIDGE_CALL_URL: "http://bridge:8080/call"
  CAPTAIN_CALL_URL: "http://captain:8080/call"
  SCIENCEOFFICER_CALL_URL: "http://science-officer:8080/call"

So we’ll call the captain using the name ‘ captain ’. There will be a captain available using that name because the descriptor includes a Kubernetes Service for it:

apiVersion: v1
kind: Service
metadata:
  name: captain
  namespace: tos
spec:
  selector:
    serviceType: captain
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30081
  type: NodePort

The Service looks for Pods labeled with ‘ serviceType: captain ’. There will be Pods matching this because the descriptor includes a Deployment to create them:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: captain
  namespace: tos
  labels:
    serviceType: captain
spec:
  replicas: 1
  template:
    metadata:
      name: captain
      labels:
        serviceType: captain
    spec:
      containers:
      - name: captain
        image: startrek/tos:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_APPLICATION_NAME
          value: "captain"
        envFrom:
        - configMapRef:
            name: tos-config

These Pods use the startrek/tos Docker image and the tos-config ConfigMap. The setup is much the same for the bridge and science-officer except that different external ports are used on the Services. The tng descriptor instead uses the startrek/tng docker image.

With these descriptors, we can deploy to minikube. We can start minikube with:

minikube start --memory 4000 --cpus 3

Link our terminal to the minikube Docker registry with:

eval $(minikube docker-env)

And then build each application by running the following from its directory:

mvn clean install

And deploy with:

kubectl create --save-config -f ./tos/k8sdescriptor.yaml
kubectl create --save-config -f ./tng/k8sdescriptor.yaml

Then see all the services with:

open http://$(minikube ip):30080
open http://$(minikube ip):30081
open http://$(minikube ip):30082
open http://$(minikube ip):30083
open http://$(minikube ip):30084
open http://$(minikube ip):30085

[Image: the six ship service UIs opened in the browser]

Crossing the Namespace Divide

We can switch the tng bridge and captain to call the tos science-officer. To do this we go to the k8sdescriptor for tng and replace the value of SCIENCEOFFICER_CALL_URL (http://science-officer:8080/call) with http://science-officer.tos:8080/call. (It would also work to use the fully qualified name http://science-officer.tos.svc.cluster.local:8080/call.) Then we save the change and run:

kubectl apply -f ./tng/k8sdescriptor.yaml
kubectl delete --all pods --namespace=tng

This applies the change and deletes the tng Pods. The Deployment will automatically create new ones using the new config. (The ConfigMap change doesn’t trigger an update automatically.)

The difference isn’t very apparent as we just see the same reply we did before (‘yes captain, science-officer here’). If we want to be sure that the tng services really are calling the tos science-officer, we can delete the tos namespace with ‘kubectl delete namespace tos’ and see that the call now fails and that the longer name appears in the error.

[Image: the failed call showing the science-officer.tos name in the error]

Now that we know that we can call across Namespaces by giving the Namespace name in the call, we might consider just including an extra name (in this case ‘ tos ’ or ‘ tng ’) in all of our Kubernetes Object names so that instead of calling the ‘ captain ’ we call ‘ tos-captain ’. If we’re interested in doing that then Helm will do it for us, and more. But before we look at that we should clean up everything we’ve created:

kubectl delete namespaces tos
kubectl delete namespaces tng

The Objects in the namespaces will be removed automatically.

Sharing a Namespace and Helm

Namespaces provide distinct logical spaces for which we can control the permissions or resources. But if the logical separation we’re looking for is not about resources or permissions and is more about packaging and deployment then we might look at using a single namespace and packaging with helm.

Helm helps us to package up our Kubernetes applications so that they can be more easily shared and re-used. To achieve this we create a helm chart. A helm chart is like a parameterized deployment descriptor — actually it is a template that is used to generate deployment descriptors for each install.

The chart for this application in GitHub was created in two stages. First, a chart was created to represent a single microservice using ‘ helm create startrekcharacter ’ and for that the configurations were applied to set environment variables to be able to call other services and to default to the startrek/tos image. Then an umbrella chart was created called ‘ startrek ’ and the first chart was moved to be a sub chart of the umbrella chart. The umbrella chart re-uses the sub chart multiple times so as to include a bridge, captain and scienceofficer, with the option to skip any of these. The umbrella chart allows the image to be set for all of the sub charts by treating that variable as a global. It also allows the URL for another service (e.g. the bridge) to be overridden to cover the case where a component is skipped (meaning that the user chooses to point to an existing instance of the bridge rather than installing a bridge together with the captain and scienceofficer).

This means that we can install the components for tos into the default namespace with the command:

helm install --name=tos ./charts/startrek/

Then install the tng captain and scienceofficer (no tng bridge) in the same namespace, to point to the tos bridge with:

helm install --name=tng --set global.image.repository=startrek/tng,bridge.enabled=false ./charts/startrek/

Test them with:

minikube service tos-bridge
minikube service tos-captain
minikube service tos-science-officer
minikube service tng-captain
minikube service tng-science-officer

And delete the tos release with:

helm del --purge tos

Again we can confirm that the tng services were pointing to the (now removed) tos bridge:

[Image: the failed call to the removed tos bridge]

And we can remove tng with:

helm del --purge tng

Original Link

Hunting Treasure with Kubernetes ConfigMaps and Secrets

The Kubernetes documentation illustrates ConfigMaps using configuration properties for a game like ‘enemies=aliens’, ‘lives=3’ and secret codes that grant extra lives. So let’s explore this by building a mini-game for ourselves, Dockerizing it and deploying it to Minikube. This will give us a deeper understanding of how we can create ConfigMaps and Secrets and use them in our applications.

Treasure Hunt Game Concept

The idea is that there is treasure hidden at certain x and y coordinates and the player tries to work out where they are by making guesses through a URL with parameters like ‘/treasure?x=4&y=6’. We could support guesses with a web page that has text inputs and a button, but instead we’ll keep things simple with just a URL. The player has a maximum number of attempts and each request uses up an attempt. When the player uses up all their attempts they are told they have died. We’ll make a reset available to be able to start again.

We’ll first build this and then capture the hidden location of the treasure using Kubernetes Secrets and the number of attempts in Kubernetes ConfigMaps.
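As a quick illustration of the guessing interface described above (the host and port are assumptions based on a default local Spring Boot run):

```shell
# Each guess is just a query string against the treasure endpoint.
x=4
y=6
echo "http://localhost:8080/treasure?x=${x}&y=${y}"
```

Requesting that URL consumes one attempt, whether or not the guess hits the treasure.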

Building the Game

We start at the Spring Initializr to generate a Spring Boot app with the Web dependency:

[Image: the Spring Initializr with the Web dependency selected]

The key to our app is going to be the treasure location. So let’s add a Spring Component for it:

public class Treasure {

    @Value("${treasure.location.x:1}")
    private Integer x;

    @Value("${treasure.location.y:0}")
    private Integer y;

    // (Boot could instead randomize a coordinate when it isn't supplied,
    // by nesting a random expression in the default, but we don't do that here.)

    public Integer getX() {
        return x;
    }

    public Integer getY() {
        return y;
    }
}

Here the x and y coordinates of the treasure are set using Spring Boot properties, each with a default if not set. So we can expect these to later be set through the environment variables TREASURE_LOCATION_X and TREASURE_LOCATION_Y. (We could use Boot to randomize the treasure location, but we’re not doing that.)

Now we can Autowire this Treasure component into the main Controller that will do the job of handling web requests for the game. We’ll assume we’ve only got one user at a time as we’ve not got any authentication. So we can get started with a Controller:

public class TreasurehuntController {

    @Value("${treasurehunt.max.attempts:3}")
    private Integer maxAttempts;

    // use concurrent package to avoid cheating by simultaneous requests
    private AtomicInteger attemptsMade = new AtomicInteger(0);

    @Autowired
    private Treasure treasure;

    @GetMapping
    public String treasure(@RequestParam(required = true) Integer x,
                           @RequestParam(required = true) Integer y) {
        // hit treasure and not already dead
        if (x.equals(treasure.getX()) && y.equals(treasure.getY())
                && attemptsMade.intValue() < maxAttempts) {
            return Graphics.treasure;
        }
        int attemptsLeft = maxAttempts - attemptsMade.incrementAndGet();
        if (attemptsLeft > 0) {
            return String.format(Graphics.missed, attemptsLeft);
        } else {
            return Graphics.died;
        }
    }
}

The user is limited to a maximum number of attempts through a configurable property that defaults to three. They won’t get away with making two attempts at once as we’re counting attempts with an AtomicInteger. Normally we’d store this counter per user but to do that we’d have to add a way to identify the user and we’re not concerned with that in this example.

We handle requests like /treasure?x=1&y=1 with the @GetMapping. If the user hasn’t used up the maximum number of attempts and their parameters match the treasure location, then they see the treasure, represented by a String from the Graphics class. Otherwise, we record an attempt and check how many are left. If they’re not dead yet, then we show a “missed” graphic, into which we inject the number of attempts left so we can tell the user. Otherwise, we show the user that they’re dead.

The Graphics class just has a set of Strings to represent “screens.” These could be as simple as “You found the treasure” and “You died,” but we can make it feel more like an old-school game with ASCII art. This will make more sense if we walk through the screens.

If we start the game with ‘ mvn spring-boot:run ’ and make an unsuccessful attempt like ‘ localhost:8080/treasure?x=5&y=6 ’ then we see:

[Image: the “missed” screen with the number of attempts left]

If we hit the treasure we see: 

[Image: the treasure screen]

And if we run out of attempts we see:

[Image: the “you died” screen]

But how do we then play again? And shouldn’t we have a “landing page” screen to introduce the game? To allow for this we add a bit more to the Controller:

@GetMapping(value = "")
public String home() {
    return "<br/><br/><br/>Play by going to e.g. /treasure?x=1&y=1 ";
}

@GetMapping(value = "reset")
public String reset() {
    attemptsMade.set(0);
    return home();
}

Now ‘/reset’ will let us start afresh, and it returns the same “screen” as the introductory screen, which shows the map from the Graphics class:

[Image: the introductory screen showing the map]

At this point, the game is playable, but there are quite a lot of locations one could guess at on this map so the player doesn’t have much chance. In the next section, we’ll improve the playability by showing a clue. This doesn’t have anything to do with how we’ll configure the game to run in Kubernetes so feel free to skip to the following section on deploying to Minikube if you’re not worried about the game’s dynamics.

Improving Playability

We’ll display a clue to narrow the player’s options down to within a 2 by 2 grid. The grid needs to include the treasure location but we can’t simply start that grid at the treasure location as that would give the location away (the player would realize that the treasure was always at the bottom-left of the clue grid). So we need to randomize it. We can find a start position for the grid as within range of the treasure with:

int leftX = Math.max(treasureX - ThreadLocalRandom.current().nextInt(0, 2),0);
int bottomY = Math.max(treasureY - ThreadLocalRandom.current().nextInt(0, 2),0);

This randomly decides whether to go a spot left or below the treasure (or at the treasure), provided it doesn’t take us off the map (in which case we fall back on zero). We also don’t want to be off map in the other direction so we compensate for that too:

leftX = Math.min(leftX, Graphics.xMax - 1);
bottomY = Math.min(bottomY, Graphics.yMax - 1);
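A minimal sketch of this clamping arithmetic, with a fixed number standing in for the random offset and assumed map bounds of xMax=10 and yMax=5:

```shell
# Reproduce the clue-box clamping with fixed numbers: treasure at (3,3),
# a "random" offset of 1, and assumed bounds xMax=10, yMax=5.
treasure_x=3; treasure_y=3; x_max=10; y_max=5; offset=1
left_x=$((treasure_x - offset))
bottom_y=$((treasure_y - offset))
if [ "$left_x" -lt 0 ]; then left_x=0; fi                              # Math.max(..., 0)
if [ "$bottom_y" -lt 0 ]; then bottom_y=0; fi
if [ "$left_x" -gt $((x_max - 1)) ]; then left_x=$((x_max - 1)); fi    # Math.min(..., xMax - 1)
if [ "$bottom_y" -gt $((y_max - 1)) ]; then bottom_y=$((y_max - 1)); fi
echo "clue box: ${left_x},${bottom_y} to $((left_x + 1)),$((bottom_y + 1))"
```

With these inputs the clue box runs from (2,2) to (3,3), so it contains the treasure without always placing it at the bottom-left.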

We could simply output this range as starting from  leftX,bottomY  and going to  leftX+1,bottomY+1  (i.e. the bottom-left and top-right of the box). That would provide a clue but it would be nice to make it visual. So we can create a ClueGenerator class that contains a  List  called ‘ mapRows ’ that represents the map as Strings:

//the map without scales shown
mapRows.add(" |~ ~~ ~~~ ~ ~ ~~~ ~ _____.----------._ ~~~ ~~~~ ~~ ~~ ~~~~~ ~~~~|");
mapRows.add(" | _ ~~ ~~ __,---'_ \" `. ~~~ _,--. ~~~~ __,---. ~~|");
mapRows.add(" | | \___ ~~ / ( ) \" \" `-.,' (') \~~ ~ ( / _\ \~~ |");
mapRows.add(" | \ \__/_ __(( _)_ ( \" \" (_\_) \___~ `-.___,' ~|");
mapRows.add(" |~~ \ ( )_(__)_|( )) \" )) \" | \" \ ~~ ~~~ _ ~~|");
mapRows.add(" | ~ \__ (( _( ( )) ) _) (( \" | \" \_____,' | ~|");
mapRows.add(" |~~ ~ \ ( ))(_)(_)_)| \" )) \" __,---._ \" \" \" /~~~|");
mapRows.add(" | ~~~ |(_ _)| | | | \" ( \" ,-'~~~ ~~~ `-. ___ /~ ~ |");
mapRows.add(" | ~~ | | | | _,--- ,--. _ \" (~~ ~~~~ ~~~ ) /___\ \~~ ~|");
mapRows.add(" | ~ ~~ / | _,----._,'`--'\.`-._ `._~~_~__~_,-' |H__| \ ~~|");
mapRows.add(" |~~ / \" _,-' / `\ ,' / _' \`.---.._ __ \" \~ |");
mapRows.add(" | ~~~ / / .-' , / ' _,'_ - _ '- _`._ `.`-._ _/- `--. \" \" \~|");
mapRows.add(" | ~ / / _-- `---,~.-' __ -- _,---. `-._ _,-'- / ` \ \_ \" |~|");
mapRows.add(" | ~ | | -- _ /~/ `-_- _ _,' ' \ \_`-._,-' / -- \ - \_ / |");
mapRows.add(" |~~ | \ - /~~| \" ,-'_ /- `_ ._`._`-...._____...._,--' /~~|");
mapRows.add(" | ~~\ \_ / /~~/ ___ `--- --- - - ' ,--. ___ |~ ~|");
mapRows.add(" |~ \ ,'~~| \" (o o) \" \" \" |~~~ \_,-' ~ `. ,'~~ |");
mapRows.add(" | ~~ ~|__,-'~~~~~\ \\"/ \" \" \" /~ ~~ O ~ ~~`-.__/~ ~~~|");
mapRows.add(" |~~~ ~~~ ~~~~~~~~`.______________________/ ~~~ | ~~~ ~~ ~ ~~~~|");
mapRows.add(" |____~jrei~__~_______~~_~____~~_____~~___~_~~___~\_|_/ ~_____~___~__|");
// we added rows from the top down, so now invert so that index zero is the bottom

We have to invert it as the bottom is zero and we’ll want to count upwards but we added the top first.

Now the trick to drawing a box is just to take subStrings from the Strings in the mapRows List. We can use the list index to choose which rows. The index of the row can be mapped to the numbers on the map’s vertical scale and the positions for a subString can be mapped to the numbers on the map’s horizontal scale. All we need for that is to know how many characters apart the numbers on the scales are. We can record that in the Graphics class, so our logic to show a clue box can be:

// on each row take the subString from position leftX to (leftX+1), factored for scale
// take rows starting at bottomY*yScale
List<String> clueRows = new ArrayList<>();
for (int i = bottomY * Graphics.yScale; i < (bottomY + 1) * Graphics.yScale; i++) {
    clueRows.add(mapRows.get(i).substring(leftX * Graphics.xScale, (leftX + 1) * Graphics.xScale));
}
// need to reverse back again so that we print top-down
Collections.reverse(clueRows);
StringBuilder clueBuilder = new StringBuilder();
for (String row : clueRows) {
    clueBuilder.append(row).append("<br/>");
}
clueBuilder.append("which is ").append(leftX).append(",").append(bottomY)
        .append(" to ").append(leftX + 1).append(",").append(bottomY + 1);
return clueBuilder.toString();
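The substring windowing itself can be rehearsed on a toy row with cut (note that cut is 1-indexed where Java's substring is 0-indexed; the row and scale values here are illustrative):

```shell
# With xScale=3 and leftX=1, the window covers characters 4-6 of the row.
row="abcdefghi"
left_x=1
x_scale=3
echo "$row" | cut -c "$((left_x * x_scale + 1))-$(((left_x + 1) * x_scale))"
```

Applying the same window to every selected map row carves out the rectangular clue box.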

And we then use this in the home endpoint of our main Controller:

@GetMapping(value = "")
public String home() {
    String homePage = "<br/>";
    homePage += "Your clue is:<br/><br/>" + clueGenerator.getClue(treasure.getX(), treasure.getY());
    homePage += "<br/><br/><br/>Play by going to e.g. /treasure?x=1&y=1 ";
    return homePage;
}

Which looks like:

[Image: the home screen with the clue box]

Taking the Game to Minikube

Before we can deploy to Minikube we’ll need to create a Docker image for our app. We want to build the executable jar inside the Docker image and then start the Java app when the container starts. We can do this using a multi-stage Docker build. The Dockerfile is:

FROM maven:3.5-jdk-8 as BUILDTREASUREHUNT
COPY src /usr/src/myapp/src
COPY pom.xml /usr/src/myapp
RUN mvn -f /usr/src/myapp/pom.xml clean package -DskipTests

FROM openjdk:alpine
COPY --from=BUILDTREASUREHUNT /usr/src/myapp/target/*.jar /maven/
CMD java $JAVA_OPTS -jar maven/*.jar

Everything down to ‘FROM openjdk:alpine’ builds the JAR, and then just the jar is copied over into a subsequent build stage based on the lightweight openjdk:alpine image. We start it with the JAVA_OPTS param exposed so that we’ve got the option to limit memory consumption (see this article about reducing memory consumption).

Then we can build an image using the command ‘docker build . -t treasurehunt’.

And we can deploy it by creating a Kubernetes deployment. We’ll split the Kubernetes deployment into multiple files for the different Kubernetes objects that we need.

First, we create a ‘treasurehunt’ subdirectory and in there let’s create a ConfigMap in config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: treasurehunt-config
  namespace: default
data:
  application.properties: |
    treasurehunt.max.attempts=5

This ConfigMap is called ‘treasurehunt-config’ and it contains an entry that represents an application.properties file. We create the contents of the file inline in the definition of the ConfigMap. Because this config is a file, when we come to use it later in a Deployment we’ll mount it as a volume, looking it up using the ConfigMap name and the key from the data section:

- name: application-config
  configMap:
    name: treasurehunt-config
    items:
    - key: application.properties
      path: application.properties

We can then mount it using a volumeMount :

- name: application-config
  mountPath: "/config"
  readOnly: true

Simply putting the file in the “/config” directory is enough for Spring Boot to read it and use it to override properties set from within the jar’s internal properties file. We’ll know it has worked because it increases the number of attempts allowed to 5.
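The override behaviour can be sketched with two plain files standing in for the jar-internal and mounted properties (this simulates Boot's precedence with awk; it is not Boot itself):

```shell
# Simulate property precedence: the mounted /config file wins over the
# jar's internal default because it is read later.
internal=$(mktemp)   # stands in for the properties packaged in the jar
external=$(mktemp)   # stands in for the mounted /config/application.properties
echo "treasurehunt.max.attempts=3" > "$internal"
echo "treasurehunt.max.attempts=5" > "$external"
# the last definition read wins:
cat "$internal" "$external" | awk -F= '$1=="treasurehunt.max.attempts"{v=$2} END{print v}'
```

In the running game this shows up as the attempt counter starting from 5 rather than the coded default of 3.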

The secrets.yaml file takes a different approach and has content that at first looks a little strange:

apiVersion: v1
kind: Secret
metadata:
  name: treasurehunt-secrets
type: Opaque
data:
  treasure.location.x: Mw==
  treasure.location.y: Mw==

There are two data elements as we’re using environment variables instead of a file. The values look a bit strange as they were generated by doing:

echo -n "3" | base64

So the value is “3” encoded in base64. (Note there are also websites that encode and decode.) This encoding is simply required by Kubernetes for Secrets. It’s not encryption, just an extra step.
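We can round-trip the value to confirm it really is only an encoding:

```shell
# Encode the secret value as Kubernetes expects, then decode it back.
encoded=$(printf '%s' "3" | base64)
echo "$encoded"                          # Mw==
printf '%s' "$encoded" | base64 --decode
```

Anyone with read access to the Secret object can decode it the same way, which is why Secrets still need access control on top.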

When these get used in a Deployment it will look like:

- name: TREASURE_LOCATION_Y
  valueFrom:
    secretKeyRef:
      name: treasurehunt-secrets
      key: treasure.location.y

So the form is to specify the key (‘ treasure.location.y ’), which Secret it comes from (‘ treasurehunt-secrets ’) and what environment variable name to use for it (‘ TREASURE_LOCATION_Y ’). The environment variable name will be automatically mapped to the property treasure.location.y by Spring Boot.
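A sketch of that name mapping, mirroring Spring Boot's relaxed binding rule (upper-case the property name and replace separators with underscores):

```shell
# Derive the environment variable name from the property key.
prop="treasure.location.y"
echo "$prop" | tr 'a-z.' 'A-Z_'
```

This is why the env entry can be named TREASURE_LOCATION_Y while the Java code keeps referring to treasure.location.y.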

We’ll need to create instances of Docker containers to service requests in the form of Pods. This will be done by our Deployment, created in deployment.yaml:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: treasurehunt
  labels:
    serviceType: treasurehunt
spec:
  replicas: 1
  template:
    metadata:
      name: treasurehunt
      labels:
        serviceType: treasurehunt
    spec:
      containers:
      - name: treasurehunt
        image: treasurehunt:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: JAVA_OPTS
          value: -Xmx64m -Xms64m
        - name: TREASURE_LOCATION_X
          valueFrom:
            secretKeyRef:
              name: treasurehunt-secrets
              key: treasure.location.x
        - name: TREASURE_LOCATION_Y
          valueFrom:
            secretKeyRef:
              name: treasurehunt-secrets
              key: treasure.location.y
        volumeMounts:
        - name: application-config
          mountPath: "/config"
          readOnly: true
      volumes:
      - name: application-config
        configMap:
          name: treasurehunt-config
          items:
          - key: application.properties
            path: application.properties

The bottom part of this specifies the volume mounting and environment variables for the ConfigMap and secrets. They are being injected into the pods that are created by the deployment—every container belonging to a pod created by this deployment will have a mounted volume and environment variables for the treasure location. The preceding sections say which image to use for the Docker container to go in the pod (‘ treasurehunt:latest ’), which port to use (‘ 8080 ’) and that we should limit Java memory. We also label all the pods with ‘ serviceType: treasurehunt ’. This is used by the service, defined in ‘ svc.yaml ’:

apiVersion: v1
kind: Service
metadata:
  name: treasurehunt-entrypoint
  namespace: default
spec:
  selector:
    serviceType: treasurehunt
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080
  type: NodePort

This says that we should expose port 30080 and route requests to Pods matching the label 'serviceType: treasurehunt', and that those Pods listen on port 8080.

So now we can link our terminal session to Minikube with:

 eval $(minikube docker-env) 

Build the image with:

docker build . -t treasurehunt 

Deploy from the project’s top-level directory with:

 kubectl create -f ./treasurehunt 

And access by running:

minikube service treasurehunt-entrypoint 

We can tell that the treasurehunt.max.attempts setting from the ConfigMap is being applied, as we get 5 attempts.

And the treasure location from the Secret is applied, as we hit the treasure at 'treasure?x=3&y=3' (which also means the app could leak the secret to the player if it chose to do so).

We can remove the game from Kubernetes if we want to with:

kubectl delete -f ./treasurehunt

Options with Kubernetes Secrets

ConfigMaps and Secrets are handled differently by Kubernetes—we can see this if we compare the output of two kubectl describe commands. Running 'kubectl describe configmaps treasurehunt-config' shows the contents of the ConfigMap. But running 'kubectl describe secret treasurehunt-secrets' does not show the contents of the Secret.

We just get shown the number of bytes. But we do see the encoded contents when we do 'kubectl get secret treasurehunt-secrets -o yaml', as that gives us the YAML description of the object.
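Since the encoded values are visible in the YAML, anyone with read access to the Secret can recover the plain values. A small helper (the function name is made up for illustration) stitches kubectl's jsonpath output to base64; note that dots in a data key must be backslash-escaped for jsonpath:

```shell
# Fetch one data entry from a Secret and decode it.
# $1 = secret name, $2 = jsonpath-escaped data key
decode_secret() {
  kubectl get secret "$1" -o jsonpath="{.data.$2}" | base64 --decode
}

# e.g. decode_secret treasurehunt-secrets 'treasure\.location\.x'
```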

If we do 'minikube dashboard' and go to the relevant pages under 'Config and Storage', we see another difference: the ConfigMap data is shown immediately, while the Secrets have a hide/show option.

This may not look very secure but it’s not a real-world setup. To make things more secure we could enable role-based access control to restrict access to resources and we might choose to restrict who can access the Kubernetes dashboard.

With Secrets, we may also want to avoid putting the values into source control. And for both Secrets and ConfigMaps, we might want a way to use different values for different environments in a CI/CD pipeline. This is less of a concern with treasurehunt's treasure locations than it would be with passwords for real systems. One way to achieve this might be to separate the part of the deployment script that creates the Secret from the other deployment descriptors (perhaps by putting it in a different folder). Then the deployment script could look for the Secret and only create it if it doesn't exist already.

If the deployment script in our CI/CD is to create the Secret, then it'll need to do some pre-processing to modify the secrets.yaml file to contain the base64-encoded value for that environment, or use a different version of the file for each environment. Alternatively, we could run a kubectl command to create the Secret from a file using the --from-file option. We'd then want to mount the Secret file much as we did with the ConfigMap. This would avoid the need for the CI to modify or use different versions of the Secret deployment yaml for different environments, but at the expense of not using a yaml descriptor (since we'd then be using 'kubectl create secret' instead).
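As a sketch of the create-only-if-absent idea (the function name and literal values are invented; a real pipeline would pull the values from its own secret store), a deployment script could do:

```shell
# Create the Secret only if it does not already exist, so redeploys
# don't overwrite a value that is already in place.
ensure_secret() {
  local name="$1" x="$2" y="$3"
  if ! kubectl get secret "$name" >/dev/null 2>&1; then
    kubectl create secret generic "$name" \
      --from-literal=treasure.location.x="$x" \
      --from-literal=treasure.location.y="$y"
  fi
}
```

A convenient side benefit: 'kubectl create secret' base64-encodes the literal values itself, so no pre-encoding step is needed.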

There are various options in this space. There are ways to store encrypted secrets in SCM and have them decrypted at the Kubernetes level, and there are many factors to consider if one goes down that road. One could also imagine encrypting all the secret values before creating them by having a master secret and using it to decrypt the other secrets at the application level (e.g. using jasypt and Spring profiles for different environments). We can't look at all of these options here.

What we will look at are options that open up if we choose to package the application with Helm. If we use a Helm chart we could input different parameter values at deploy time or supply a different file for just the data that changes per-environment rather than changing deployment descriptors (or needing to manage that variation at the application level). If you’re unfamiliar with Helm charts then think of them as parameterized templates for Kubernetes deployment descriptors (have a look at the examples in the Helm docs as a primer).

Hunting Treasure with Helm

First we need to create a Helm chart for the game, basing it upon our deployment descriptors. To do this we create and move to a new ‘charts’ directory in our project and run:

helm create treasurehunt

This creates an initial Chart called ‘treasurehunt’ following the default structure. We’ll now modify this Chart. The modification steps to get us started are:

  1. Change the description and name in Chart.yaml to say this is treasurehunt.
  2. The values.yaml file specifies default values passed into the Chart on each deploy. Change the entries in values.yaml for image.repository and image.tag to point by default to treasurehunt:latest.
  3. Change the defaults in values.yaml for serviceType to NodePort and port to 30080, as we're using minikube (but note that we could still override these at deploy time with parameters).
  4. In deployment.yaml, change containerPort to 8080, as our Spring Boot apps run on 8080.
  5. Remove the liveness and readiness probes, as we're not using them in this example.
  6. Copy the 'env' and 'volumes' sections over from the Kubernetes deployment.yaml to the Helm deployment.yaml.
  7. Copy the config.yaml and secrets.yaml files over from the Kubernetes deployment directory to /charts/treasurehunt/templates. In each, put '{{ template "treasurehunt.fullname" . }}-' in front of the name in the metadata section. This will give them unique names if we deploy the game multiple times. Also prepend the same string at the points where these names are used in /templates/deployment.yaml.

At this point we have a working chart that will let us deploy with 'helm install --name=pet-parrot ./charts/treasurehunt/'. (Here 'pet-parrot' is the unique release name—it prefixes the Kubernetes objects, so now if we wanted to launch the game it would be with 'minikube service pet-parrot-treasurehunt'.) But the chart doesn't yet do much that our deployment descriptors didn't already do. So let's first change the way we set the secrets to do something we couldn't do before.

Go to the chart’s secrets.yaml file and change the data section to:

{{- if .Values.treasure.location.x }}
treasure.location.x: {{ .Values.treasure.location.x | toString | b64enc | quote }}
{{- else }}
treasure.location.x: {{ mod (randNumeric 1) 4 | toString | b64enc | quote }}
{{- end }}
{{- if .Values.treasure.location.y }}
treasure.location.y: {{ .Values.treasure.location.y | toString | b64enc | quote }}
{{- else }}
treasure.location.y: {{ mod (randNumeric 1) 5 | toString | b64enc | quote }}
{{- end }}

This will look for supplied parameters for the treasure x and y (which can be provided on the 'helm install' command line), and if none are found it generates random integers (limited to the 0-3 and 0-4 ranges by the mod function) and encodes them in base64. (We're using randNumeric as these are numbers, but many real Helm charts create random passwords with randAlphaNum.) We also declare these entries in the values.yaml file:

treasure:
  location:
    ## Defaults to a random location if not set
    x: ""
    y: ""

So we’ll get a random location if we deploy with:

helm install --name=pet-parrot ./charts/treasurehunt/

Or we can specify the location if we instead do:

helm install --name=pet-parrot --set treasure.location.x=3,treasure.location.y=2 ./charts/treasurehunt/

So we could take this approach to specify secret values in a deployment from CI. (Though it does currently have shortcomings from a security perspective. There's also a question mark over whether you'd want to generate secrets afresh on an upgrade. Those interested in the security question might want to look at the helm-secrets project and its use in Jenkins X.)
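The template's supplied-or-random behaviour can be sketched outside Helm too. This shell function (the name is invented for illustration) mirrors the mod/randNumeric/b64enc pipeline:

```shell
# Return the base64-encoded coordinate: use the supplied value if given,
# otherwise pick a random integer in [0, bound).
encode_coord() {
  local value="$1" bound="$2"
  if [ -z "$value" ]; then
    value=$(( RANDOM % bound ))
  fi
  printf '%s' "$value" | base64
}

encode_coord 3 4    # Mw== (a supplied value wins)
encode_coord "" 4   # a random value in 0-3, base64-encoded
```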

We could parameterize the value used from the ConfigMap in the same way, with a default that the user can override. But instead, we'll handle it as a file referenced from our chart. To do this, we create a 'files' directory under 'charts/treasurehunt' and put the file from 'src/main/resources' in there. We'll change the number of attempts there to 4 so that we know the file is getting used.

Now in the config.yaml for the chart change the whole data section to:

{{ (.Files.Glob "files/").AsConfig | indent 2 }}

And that's enough for the file to be used by the ConfigMap when we install the app. There's a very similar function available to do the same for Secrets. If we were to take this approach, then a CI job could pull the source of the chart and replace the file that is being used (e.g. swapping out dev for prod). This is rather like the --from-file option for 'kubectl create', but here the file reference is actually part of the chart. It would be even nicer if the file location were a parameter that could be set when running 'helm install'—at the time of writing that's not a Helm feature, but it is under review.
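To see what that Helm expression produces, here is a rough shell equivalent (the function name is invented) of globbing a directory into a ConfigMap-style data section:

```shell
# For each file under the given directory, emit "filename: |" followed by
# the file's contents indented as a YAML block scalar -- roughly what
# (.Files.Glob "files/").AsConfig renders.
as_config() {
  local f
  for f in "$1"/*; do
    echo "  $(basename "$f"): |"
    sed 's/^/    /' "$f"
  done
}
```

Running it against a directory containing application.properties yields the same shape of output that ends up in the rendered ConfigMap.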

So helm gives us more possibilities for configuring and setting ConfigMaps and Secrets. Which options are better for us will depend upon factors like how much data we want to make configurable, how we expect it to be changed and deployed, what security constraints we have and the capabilities of our CI. Because our helm chart prefixes the names of the Kubernetes objects, it also allows us to deploy multiple instances of our app into the same namespace. So we could run:

helm install --name=greedy-parrot --set treasure.location.x=1,treasure.location.y=2 ./charts/treasurehunt/
helm install --name=grumpy-parrot --set treasure.location.x=2,treasure.location.y=1 ./charts/treasurehunt/

And access both with:

minikube service greedy-parrot-treasurehunt
minikube service grumpy-parrot-treasurehunt

Original Link

Easily Automate Your CI/CD Pipeline With Jenkins, Helm, and Kubernetes

Developers don’t want to think about infrastructure and why it takes so long to deploy their code to a real testing environment. They just want it up and running!

This 6-step workflow will easily automate your CI/CD pipeline for quick and easy deployments using Jenkins, Helm, and Kubernetes.

Nowadays it’s critical to get your releases out fast, which requires having an automated CI/CD pipeline that takes your code from text to binaries to a deployed environment. Implementing an automated pipeline in the past has been challenging, especially when dealing with legacy applications. This is where Kubernetes comes in. Kubernetes has revolutionized the way we deploy and manage our containerized applications. Using Helm together with Kubernetes, you gain simplified application deployment.

This article will show you how to prepare and configure your environment to achieve a complete automated CI/CD pipeline for your containerized applications using Jenkins, Helm, and Kubernetes. You will receive tips on how to optimize your pipeline and a working template for customizing your own pipeline.

In order to get familiar with the Kubernetes environment, I have mapped the traditional Jenkins pipeline to the main steps of my solution.


Note: This workflow is also applicable when implementing other tools or for partial implementations.

Setting Up the Environment

Configure the Software Components

Before you create your automated pipeline, you need to set up and configure your software components according to the following configuration:

Software Components and Recommended Configuration

A Kubernetes Cluster

  • Set up the cluster on your data center or on the cloud.

A Docker Registry

A Helm Repository

Isolated Environments

  • Create different namespaces or clusters for Development and Staging
  • Create a dedicated and isolated cluster for Production

Jenkins Master

  • Set up the master with a standard Jenkins configuration.
  • If you are not using slaves, the Jenkins master needs to be configured with Docker, Kubectl, and Helm.

Jenkins Slave(s)

  • It is recommended to run the Jenkins slave(s) in Kubernetes, closer to the API server, which makes configuration easier.
  • Use the Jenkins Kubernetes plugin to spin up the slaves in your Kubernetes clusters.

Prepare Your Applications

Follow these guidelines when preparing your applications:

  • Package your applications in a Docker Image according to the Docker Best Practices.
  • To run the same Docker container in any of these environments: Development, Staging or Production, separate the processes and the configurations as follows:
    • For Development: Create a default configuration.
    • For Staging and Production: Create a non-default configuration using one or more:
      • Configuration files that can be mounted into the container during runtime.
      • Environment variables that are passed to the Docker container.

The 6-Step Automated CI/CD Pipeline in Kubernetes in Action

General Assumptions and Guidelines

  • These steps are aligned with the best practices when running Jenkins agent(s).
  • Assign a dedicated agent for building the App, and an additional agent for the deployment tasks. This is up to your good judgment.
  • Run the pipeline for every branch. To do so, use the Jenkins Multibranch pipeline job.
  1. Get code from Git: The developer pushes code to Git, which triggers a Jenkins build webhook, and Jenkins pulls the latest code changes.
  2. Run build and unit tests: Jenkins runs the build. The application's Docker image is created during the build, and tests run against a running Docker container.
  3. Publish the Docker image and Helm chart: The application's Docker image is pushed to the Docker registry, and the Helm chart is packaged and uploaded to the Helm repository.
  4. Deploy to Development: The application is deployed to the Kubernetes development cluster or namespace using the published Helm chart, and tests run against the deployed application in the development environment.
  5. Deploy to Staging: The application is deployed to the Kubernetes staging cluster or namespace using the published Helm chart, and tests run against the deployed application in the staging environment.
  6. [Optional] Deploy to Production: The application is deployed to the production cluster if it meets the defined criteria (note that you can set this up as a manual approval step). Sanity tests run against the deployed application, and if required, you can perform a rollback.
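Condensed into shell, steps 2-4 might look like the following sketch (the image name, registry, and chart path are illustrative placeholders, not from the article):

```shell
# Build, publish, and deploy one version of the app.
# All names here are made-up placeholders for a real pipeline.
build_and_deploy() {
  local tag="$1"
  docker build -t "registry.example.com/myapp:${tag}" .
  docker push "registry.example.com/myapp:${tag}"
  helm upgrade --install myapp-dev ./charts/myapp \
    --set image.tag="${tag}" --namespace development
}
```

A Jenkins Multibranch Pipeline stage would call a function like this with the build number as the tag, then run its tests against the deployed release.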

Create Your Own Automated CI/CD Pipeline

Feel free to build a similar implementation using the following sample framework that I have put together just for this purpose:

Bon voyage on your Kubernetes CI/CD journey!

Original Link

Kubernetes Helm Accelerates Production-Ready Deployments

“Kubernetes” is a Greek word that refers to the pilot of a ship. The term “helm” refers to the ship's wheel, the tool a pilot uses to steer and control the ship. In much the same way that the helm enables a pilot to steer a ship, Helm, in the context of container orchestration, enables a Kubernetes operator to have greater control of their Kubernetes cluster (i.e. the ship).

Helm fills the need to quickly and reliably provision Kubernetes-orchestrated container applications through easy installation, update, and removal. It provides a vehicle for developers to package their applications and share them with the Kubernetes community and for software vendors to offer their containerized applications at “the push of a button.”

Helm was jointly created by Google and Deis, using code from both Helm Classic (since deprecated) and Google Cloud Deployment Manager. The project is managed by the Cloud Native Computing Foundation (CNCF), which also manages the Kubernetes project.

Through a single command or a few mouse clicks, users can install Kubernetes apps for dev-test or production environments.

What Are Helm Charts?

The term “Helm” is used to refer to both the client-side and the server-side Kubernetes components: the Helm client and Tiller/Helm Server. The client interacts with the server to perform changes within a Kubernetes cluster.

When a user executes the helm install command, the Tiller Server receives the incoming request and installs the appropriate package (which contains the resource definitions to install a Kubernetes application) into the Kubernetes cluster. Such packages are referred to as Charts and are similar to RPM and DEB packages for Linux: they provide a convenient way for developers to distribute applications and for end users to install those applications.

Among the hundred-plus charts available online, officially stable Helm Charts include:

  • MySQL and MariaDB: popular database servers used by Wikipedia, Facebook, and Google.
  • MongoDB: a cross-platform, document-oriented NoSQL database.
  • WordPress: a publishing platform to build blogs and websites.

Why Helm Benefits Kubernetes

The following table walks through a few problems that Kubernetes Helm alleviates.

Challenge: Impaired developer productivity
Description: Developer productivity can suffer due to the time spent deploying test environments which help developers test code and replicate customer issues.
Kubernetes Helm benefit: Developers can spend their time focusing on developing applications instead of deploying dev-test environments. Helm Charts such as MySQL, MongoDB, and PostgreSQL allow developers to get a working database quickly for their application. In addition, developers can author their own chart, which automates deployment of their dev-test environment.

Challenge: Deployment complexity
Description: The learning curve can be steep for those new to Kubernetes-orchestrated container apps, so the lead time to deploy production-grade apps on Kubernetes can be high. Deployers can also use incorrect inputs for configuration files or lack the expertise to roll out these apps from YAML templates.
Kubernetes Helm benefit: Helm Charts provide “push button” deployment and deletion of apps, thereby making adoption and development of Kubernetes apps easier for organizations with little or no experience with containers or microservices. Apps deployed from Charts, as well as VM-based and other container apps, can then be leveraged together to meet a business need, such as CI/CD or blogging platforms. Helm Charts are recipes for Kubernetes apps that reduce deployment complexity in the following ways:
  1. Helm Charts allow software vendors and developers to preconfigure their applications with sensible defaults.
  2. Helm Charts allow users/deployers to change parameters (e.g. resource limits for CPU and memory) of the application/chart using a consistent interface.
  3. A software vendor or developer may have already configured the chart to be HA out of the box—or the user can enable this by setting the correct configuration parameter (e.g. number of replicas).
Developers leveraging Helm Charts can incorporate production-ready packages while building applications in a Kubernetes environment. In doing so, they can eliminate deployment errors due to incorrect configuration file entries or mangled deployment recipes.

Challenge: Production readiness
Description: Kubernetes applications can consist of multiple components, including pods, namespaces, RBAC policies, and deployments. Deploying and maintaining these apps can be tedious and error prone.
Kubernetes Helm benefit: Helm Charts reduce the complexity of maintaining an App Catalog in a Kubernetes environment. Operations teams do not have to maintain service tickets during Kubernetes-orchestrated app deployments, or curate Kubernetes app catalogs that are part of a self-service portal. (see screenshot below)

Platform9 Managed Kubernetes App Catalog leveraging Helm Charts

App Deployment Compared: With and Without Helm Charts

Without Helm Charts

The procedure outlined in this example will deploy WordPress and MySQL on Kubernetes without the use of Helm Charts. In order to follow through with these instructions, it is necessary to perform these steps manually:

  1. Download the necessary configuration files
  2. Create a Persistent Volume
  3. Create a Secret for the MySQL password
  4. Deploy MySQL using the copy-pasted “mysql-deployment.yaml” file
  5. Deploy WordPress using the copy-pasted “wordpress-deployment.yaml” file

With Helm Charts

In contrast, deploying the WordPress Chart using Helm can be done in a few seconds. On the Helm client, a deployer would execute the following command:

$ helm install --name my-release -f values.yaml stable/wordpress

The comparison above was meant to illustrate the value of Helm Charts in Kubernetes through a simple example. When we deploy multiple Helm Charts-based apps and have them serve a business need, the productivity gains can be more substantial.


Through the use of Charts, Helm provides the ability to leverage Kubernetes “packages” through the click of a button or a single CLI command while building and deploying applications. By making app deployment easy and standardized in a Kubernetes environment, Helm improves developer productivity, reduces operational complexity, and speeds up the adoption of cloud-native apps. Platform9 Managed Kubernetes is one such solution that is integrated with Helm and Helm Charts, and can be evaluated through a free trial.

Original Link

How to Use Kubernetes to Quickly Deploy Neo4j Clusters

As part of our work on the Neo4j Developer Relations team, we are interested in integrating Neo4j with other technologies and frameworks, ensuring that developers can always use Neo4j with their favorite technologies.

One of the technologies that we’ve seen gain a lot of traction over the last year or so is Kubernetes, an open-source system for automating deployment, scaling and management of containerized applications.

Neo4j and Kubernetes

Kubernetes was originally designed by Google and donated to the Cloud Native Computing Foundation. At the time of writing, there have been over 1,300 contributors to the project.

Neo4j on Kubernetes

Neo4j 3.1 introduced Causal Clustering — a brand-new architecture using the state-of-the-art Raft protocol — that enables support for ultra-large clusters and a wider range of cluster topologies for data center and cloud. Causal Clustering is safer, more intelligent, more scalable, and built for the future.

A Neo4j Causal Cluster

A Neo4j Causal Cluster

A Neo4j causal cluster is composed of servers playing two different roles: Core and Read Replica.

Core Servers

Core Servers' main responsibility is to safeguard data. The Core Servers do so by replicating all transactions using the Raft protocol.

In Kubernetes, we will deploy and scale Core Servers using StatefulSets. We use a StatefulSet because we want a stable and unique network identifier for each of our core servers.

Read Replicas

Read Replicas' main responsibility is to scale out graph workloads (Cypher queries, procedures, and so on). Read Replicas act like caches for the data that the Core Servers safeguard, but they are not simple key-value caches. In fact, Read Replicas are fully-fledged Neo4j databases capable of fulfilling arbitrary (read-only) graph queries and procedures.


In Kubernetes, we will deploy and scale read replicas using deployments.

We’ve created a set of Kubernetes templates in the kubernetes-neo4j repository, so if you just want to get up and running, head over there and try them out.

If you haven’t got a Kubernetes cluster running, you can create a single node cluster locally using minikube.

$ minikube start --memory 8192

Once that’s done, we can deploy a Neo4j cluster by executing the following command:

$ kubectl apply -f cores
service "neo4j" configured
statefulset "neo4j-core" created

We can check that Neo4j is up and running by checking the logs of our pods until we see the following line:

$ kubectl logs -l "app=neo4j"
2017-09-13 09:41:39.562+0000 INFO Remote interface available at

We can query the topology of the Neo4j cluster by running the following command:

$ kubectl exec neo4j-core-0 -- bin/cypher-shell --format verbose \ "CALL dbms.cluster.overview() YIELD id, role RETURN id, role"
| id | role |
| "719fa587-68e4-4194-bc61-8a35476a0af5" | "LEADER" |
| "bb057924-f304-4f6d-b726-b6368c8ac0f1" | "FOLLOWER" |
| "f84e7e0d-de6c-480e-8981-dad114de08cf" | "FOLLOWER" |

Note that security is disabled on these servers for demo purposes. If we’re using this in production, we wouldn’t want to leave servers unprotected.

Now let’s add some read replicas. We can do so by running the following command:

$ kubectl apply -f read-replicas
deployment "neo4j-replica" created

Now, let’s see what the topology looks like:

$ kubectl exec neo4j-core-0 -- bin/cypher-shell --format verbose \ "CALL dbms.cluster.overview() YIELD id, role RETURN id, role"
| id | role |
| "719fa587-68e4-4194-bc61-8a35476a0af5" | "LEADER" |
| "bb057924-f304-4f6d-b726-b6368c8ac0f1" | "FOLLOWER" |
| "f84e7e0d-de6c-480e-8981-dad114de08cf" | "FOLLOWER" |
| "8952d105-97a5-416b-9f61-b56ba44f3c02" | "READ_REPLICA" |

We can scale cores or read replicas, but we'll look at how to do that in the next section.

When we first created the Kubernetes templates, I wrote a blog post about it. In the comments, Yandry Pozo suggested that we should create a Helm package for Neo4j.

And 11 months later…

Neo4j on Helm

Helm is a tool that streamlines installing and managing Kubernetes applications. You can think of it as an App Store for Kubernetes.

Helm has two parts: a client (helm) and a server (tiller).


Tiller runs inside of your Kubernetes cluster and manages releases (installations) of your charts. Helm runs on your laptop, CI/CD, or wherever you want it to run.

In early September, the Neo4j Helm package was merged into the charts incubator, which means that if you’re running Helm on your Kubernetes cluster, you can easily deploy a Neo4j cluster.

Once we’ve downloaded the Helm client, we can install it on our Kubernetes cluster by running the following command:

$ helm init && kubectl rollout status -w deployment/tiller-deploy --namespace=kube-system

The first command installs Helm on the Kubernetes cluster and the second blocks until it’s been deployed.

We can check that it’s installed by running the following command:

$ kubectl get deployments -l 'app=helm' --all-namespaces
kube-system tiller-deploy 1 1 1 1 1m

We’re now ready to install Neo4j!

First, we need to add the incubator project to the Helm repository, which we can do by running the following command:

$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
"incubator" has been added to your repositories

Let’s check that the Neo4j chart is there:

$ helm search incubator/neo4j
incubator/neo4j 0.1.0 Neo4j is the world's leading graph database

Looks good. Now, we can deploy our Neo4j cluster.

$ helm install incubator/neo4j --name neo-helm --wait --set authEnabled=false

This will deploy a cluster with three core servers and no read replicas. Again, note that we have auth disabled for demo purposes.

If we want to add read replicas, we can scale the deployment using the following command:

$ kubectl scale deployment neo-helm-neo4j-replica --replicas=3
deployment "neo-helm-neo4j-replica" scaled

We can check that this worked by running the same procedure that we used above:

$ kubectl exec neo-helm-neo4j-core-0 -- bin/cypher-shell --format verbose \ "CALL dbms.cluster.overview() YIELD id, role RETURN id, role"
| id | role |
| "32e6b76d-4f52-4aaa-ad3b-11bc4a3a5db6" | "LEADER" |
| "1070d088-cc5f-411d-9e64-f5669198f5b2" | "FOLLOWER" |
| "e2b0ef4c-6caf-4621-ab30-ba659e0f79a1" | "FOLLOWER" |
| "f79dd7e7-18e7-4d82-939a-1bf09f8c0f42" | "READ_REPLICA" |
| "b8f4620c-4232-498e-b39f-8d57a512fa0e" | "READ_REPLICA" |
| "74c9cb59-f400-4621-ac54-994333f0278f" | "READ_REPLICA" |

Finally, let’s put some data in our cluster by running the following command:

$ kubectl exec neo-helm-neo4j-core-0 -- bin/cypher-shell \
  "UNWIND range(0, 1000) AS id CREATE (:Person {id: id}) RETURN COUNT(*)"
COUNT(*)

And we can check that it reached the other cluster members, as well:

$ kubectl exec neo-helm-neo4j-core-2 -- bin/cypher-shell \ "MATCH (:Person) RETURN COUNT(*)"
$ kubectl exec neo-helm-neo4j-replica-3056392186-q0cr9 -- bin/cypher-shell \ "MATCH (:Person) RETURN COUNT(*)"

All good!


Please go give it a try and follow the steps above.

We would love to hear what you think about Neo4j and Kubernetes working together.

  • How does it work for you?
  • Did you run into any issues?
  • Do you have suggestions for improvements?

Original Link

Terraform vs. Helm for Kubernetes

I have been an avid user of Terraform and use it to do many things in my infrastructure, be it provisioning machines or setting up my whole stack. When I started working with Helm, my first impression was “Wow! This is like Terraform for Kubernetes resources!” What re-kindled these thoughts again was when, a few weeks ago, Hashicorp announced support for Kubernetes provider via Terraform. So one can use Terraform to provision their infrastructure as well as to manage Kubernetes resources. So I decided to take both for a test drive and see what works better in one vs. the other. Before we get to the meat, a quick recap of similarities and differences. For brevity in this blog post, when I mention Terraform, I am referring to the Terraform Kubernetes provider.

There are some key similarities:

  • Both allow you to describe and maintain your Kubernetes objects as code. Helm uses the standard manifests along with Go templates, whereas Terraform uses the JSON/HCL file format.
  • Both allow usage of variables and overriding those variables at various levels, such as file and command line; Terraform additionally supports environment variables.
  • Both support modularity (Helm has sub-charts while Terraform has modules).
  • Both provide a curated list of packages (Helm has stable and incubator charts, while Terraform has recently started the Terraform Module Registry, though there are no Terraform modules in the registry that work on Kubernetes as of this post).
  • Both allow installation from multiple sources, such as local directories and git repositories.
  • Both allow dry runs of actions before actually running them (Helm has a --dry-run flag, while Terraform has the plan subcommand).

With this premise in mind, I set out to try and understand the differences between the two. I took a simple use case with following objectives:

  • Install a Kubernetes cluster (Possible with Terraform only)
  • Install GuestBook
  • Upgrade GuestBook
  • Roll back the upgrade

Setup: Provisioning the Kubernetes Cluster

In this step, we will create a Kubernetes cluster via Terraform using these steps: 

  • Clone this git repo
  • The kubectl, Terraform, ssh, and Helm binaries should be available in the shell you are working with.
  • Create a file called `terraform.tfvars` with the following content:
do_token = "YOUR_DigitalOcean_Access_Token"
# You will need a token for kubeadm, which can be generated using the following command:
# python -c 'import random; print "%0x.%0x" % (random.SystemRandom().getrandbits(3*8), random.SystemRandom().getrandbits(8*8))'
kubeadm_token = "TOKEN_FROM_ABOVE_PYTHON_COMMAND"
# private_key_file = "/c/users/harshal/id_rsa"
private_key_file = "PATH_TO_YOUR_PRIVATE_KEY_FILE"
# public_key_file = "/c/users/harshal/"
public_key_file = "PATH_TO_YOUR_PUBLIC_KEY_FILE"

Now we will run a set of commands to provision the cluster:

  • terraform get so Terraform picks up all modules

  • terraform init so Terraform will pull all required plugins

  • terraform plan to validate that everything will run as expected.

  • terraform apply

This will create a 1-master, 3-worker Kubernetes cluster and copy a file called `admin.conf` to `${PWD}`, which can be used to interact with the cluster via kubectl. Run `export KUBECONFIG=${PWD}/admin.conf` to use this file for this session. You can also copy this file to `~/.kube/config` if required. Ensure your cluster is ready by running `kubectl get nodes`. The output should show all nodes in Ready status.

NAME      STATUS    AGE       VERSION
master    Ready     4m        v1.8.2
node1     Ready     2m        v1.8.2
node2     Ready     2m        v1.8.2
node3     Ready     2m        v1.8.2

Terraform Kubernetes Provider

Install GuestBook

Once you run terraform apply, verify that all pods and services are created by running kubectl get all. You should now be able to access GuestBook on node port 31080. You will notice that we have implemented GuestBook using Replication Controllers and not Deployments. That is because the Kubernetes provider in Terraform does not support beta resources; more discussion on this can be found here. Under the hood, we are using simple declaration files, mainly those in the gb-module directory, which should be self-explanatory.
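For reference, the front-end Replication Controller declaration in the gb-module directory looks roughly like the sketch below. Resource names, labels, and the port are illustrative, not the repo's exact code, and the template block's layout has changed across versions of the Terraform Kubernetes provider, so treat this as a shape rather than a definitive implementation:

```hcl
# Illustrative sketch of an RC via the Terraform Kubernetes provider.
resource "kubernetes_replication_controller" "frontend" {
  metadata {
    name      = "frontend"
    namespace = "default"
  }

  spec {
    # Driven by the fe_replicas variable used in the upgrade/rollback steps.
    replicas = "${var.fe_replicas}"

    selector {
      app  = "guestbook"
      tier = "frontend"
    }

    # In the provider versions of this era, the pod spec sits directly
    # inside template; newer versions nest it under template.spec.
    template {
      container {
        name  = "frontend"
        image = "${var.fe_image}"

        port {
          container_port = 80
        }
      }
    }
  }
}
```

Because replicas and image are plain variables, the scale-down/scale-up dance in the next section is just two terraform apply runs with different -var values.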

Update GuestBook

Since the application is deployed via Replication Controllers, changing the image is not enough. We would need to scale down old pods and scale up new pods. So we will scale down the RC to 0 pods and then scale it up again with the new image. Run: 

terraform apply -var 'fe_replicas=0' && terraform apply -var 'fe_image=harshals/gb-frontend:1.0' -var 'fe_replicas=3'

Verify the updated application at node port 31080.

Roll Back the Application

Again, without Deployments, rolling back an RC is a little more tedious. We scale down the RC to 0 and then bring back the old image. Run:

terraform apply -var 'fe_replicas=0' && terraform apply

This will bring the pods back to their default version and replica count.


Install GuestBook With Helm

Now we will install GuestBook on the same cluster in a different namespace using Helm. Ensure you are pointing to the correct cluster by running `export KUBECONFIG=${PWD}/admin.conf`.

Since we are running Kubernetes 1.8.2 with RBAC, run the following commands to give tiller the required privileges and initialize Helm:

kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade


Run the following command to install GuestBook on namespace “helm-gb”:

helm install --name helm-gb-chart --namespace helm-gb ./helm_chart_guestbook

Verify that all pods and services are created by running helm status helm-gb-chart.

Upgrade GuestBook

In order to perform a similar upgrade via Helm, run the following command:

helm upgrade helm-gb-chart ./helm_chart_guestbook --set frontend.image=harshals/gb-frontend:1.0

Since the application is using Deployments, the upgrade is a lot easier. To view the upgrade taking place and old pods being replaced with new ones, run:

helm status helm-gb-chart


The revision history of the chart can be viewed via helm history helm-gb-chart.

Run the following command to perform the rollback:

helm rollback helm-gb-chart 1

Run helm history helm-gb-chart again to confirm the rollback.


Be sure to clean up the cluster by running terraform destroy.

Pros and Cons

We have already seen the similarities between Helm and Terraform pertaining to the management of Kubernetes resources. Now let's look at what works well, and what doesn't, with each of them.

Terraform Pros

  • Use of the same tool and code base for infrastructure as well as cluster management, including the Kubernetes resources. So a team already comfortable with Terraform can easily extend it to be used with Kubernetes.
  • Terraform does not install any component inside the Kubernetes cluster, whereas Helm installs tiller. This can be seen as a positive, but tiller does some real-time management of running pods, which we will discuss in a bit.

Terraform Cons

  • No support for beta resources. While many implementations are already working with beta resources such as Deployment, Daemonset, and StatefulSet, not having these available via Terraform reduces the incentive for working with it.
  • When there is a dependency between two providers (the do-k8s-cluster module creates admin.conf, and the Kubernetes provider of the gb-app module refers to it), a single Terraform action will not work. The order of execution has to be maintained and run in the following format: `terraform apply && terraform apply -target=gb-app`. This becomes tedious and hard to manage when multiple modules have provider-based dependencies. This is still an open issue within Terraform, and more details can be found here.
  • Terraform’s Kubernetes provider is still fairly new.
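The cross-module dependency described in the second point above looks roughly like the following sketch. Module sources and the kubeconfig path follow the repo layout described earlier, but the snippet is illustrative rather than the repo's exact code:

```hcl
# Root configuration (sketch). The gb-app module's Kubernetes provider
# reads the kubeconfig that the do-k8s-cluster module writes out, so it
# cannot even be planned until that file exists -- hence the two-step
# `terraform apply && terraform apply -target=gb-app` workaround.
module "do-k8s-cluster" {
  source = "./do-k8s-cluster"
  # ... DigitalOcean token, kubeadm token, key files ...
}

module "gb-app" {
  source = "./gb-module"
  # Internally configures something like:
  #   provider "kubernetes" { config_path = "admin.conf" }
}
```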

Helm Pros

  • Since Helm makes API calls to tiller, all Kubernetes resources are supported. Moreover, Helm templates have advanced constructs such as flow control and pipelines, resulting in much more flexible deployment templates.
  • Upgrades and rollbacks are very well implemented and easy to handle in Helm. Also, running tiller inside the cluster manages runtime resources effectively.
  • The Helm charts repository has a lot of useful charts for various applications.
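To illustrate the flow control and pipelines mentioned above, a template for the GuestBook front end might look like the following sketch. The value keys (frontend.replicas, frontend.image) mirror the --set flags used earlier, but the file is illustrative, not the actual chart:

```yaml
# templates/frontend-deployment.yaml -- illustrative sketch
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-frontend
spec:
  # Pipeline: fall back to 3 replicas if the value is unset.
  replicas: {{ .Values.frontend.replicas | default 3 }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name | quote }}
        tier: frontend
    spec:
      containers:
        - name: frontend
          # Overridden at upgrade time via --set frontend.image=...
          image: {{ .Values.frontend.image }}
```

Because this renders into a Deployment rather than a Replication Controller, rolling upgrades and rollbacks come for free from Kubernetes itself.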

Helm Cons

  • Helm becomes an additional tool to be maintained, apart from existing tools for infrastructure creation and configuration management.


In terms of sheer capabilities, Helm is far more mature as of today and makes it really simple for the end user to adopt and use. The great variety of charts also gives you a head start, so you don't have to re-invent the wheel. Helm's tiller component provides a lot of capabilities at runtime that aren't present in Terraform, due to the inherent nature of the way Terraform is used.

On the other hand, Terraform can provision machines and clusters, and seamlessly manage resources, making it a single tool to learn and manage all of your infrastructure. That being said, for managing the applications and resources inside a Kubernetes cluster, you have to do a lot of work, and the lack of support for beta objects makes it impractical for many workloads.

I would personally use Terraform for provisioning cloud resources and Kubernetes clusters, and Helm for deploying applications. Time will tell whether Terraform gets better at provisioning applications and resources inside Kubernetes clusters.

Original Link

Deploying Apps to Kubernetes on the IBM Cloud With Helm

Helm is the package manager for Kubernetes. With Helm, you can very easily deploy applications, tools, and databases like MongoDB, PostgreSQL, WordPress, and Apache Spark into your own Kubernetes clusters. Below is a brief description of how to use Helm for the IBM Cloud Container service.

“Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish, so start using Helm and stop the copy-and-paste madness. The latest version of Helm is maintained by the CNCF.”

You can easily install applications by invoking commands like ‘helm install stable/mongodb’. You can also configure applications before installing them via YAML configuration files.
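For example, you can put your overrides in a values file and pass it at install time. The keys below are common ones for the stable/mongodb chart, but verify them against the chart's own values.yaml before relying on them:

```yaml
# custom-values.yaml -- sample overrides; key names may differ by chart version
persistence:
  enabled: false          # skip the PersistentVolumeClaim (e.g. on free clusters)
usePassword: true
mongodbRootPassword: change-me
```

You would then install with `helm install -f custom-values.yaml stable/mongodb`, and any key not set in the file keeps the chart's default.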

The Kubernetes community provides a curated catalog of stable Helm Charts. Additionally, IBM provides charts for Db2, MQ, and more.

Below is a quick example for how to deploy MongoDB to Kubernetes on the IBM Cloud.

First, you need to configure the Bluemix CLI to work against your Kubernetes cluster, and you need to install Helm on your development machine.

bx login -a
bx target --cf
bx cs init
bx cs cluster-config mycluster
# set the environment variable printed by the previous command: export KUBECONFIG=...
bx cr login
helm init
helm repo add stable

Next, you can install Kubernetes applications with the following command:

helm install --name my-tag stable/mongodb

If you want to delete everything later, run ‘helm delete my-tag’.

To find out the IP address and port, run these commands:

bx cs workers mycluster
kubectl get svc
kubectl get svc my-service

If you have a paid account, this is all you have to do.

The free account does not support persistent volumes. As a workaround (not for production) you can use disk space on worker nodes. Run ‘kubectl create -f config.yaml’ with the following content in config.yaml for MongoDB.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-simple-mongodb
  namespace: default
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"

After this, you can see everything working on the Kubernetes dashboard (‘kubectl proxy’).


Original Link