
How to Build Hybrid Cloud Confidence

Software complexity has grown dramatically over the past decade, and enterprises are looking to hybrid cloud technologies to help power their applications and critical DevOps pipelines. But with so many moving pieces, how can you gain confidence in your hybrid cloud investment?

The hybrid cloud is not a new concept. Way back in 2010, AppDynamics founder Jyoti Bansal had an interesting take on hybrid cloud. The issues Jyoti discussed more than eight years ago are just as challenging today, particularly with architectures becoming more distributed and complex. Today’s enterprises must run myriad open source and commercial products. And new projects — some game-changers — keep sprouting up for companies to adopt. Vertical technologies like container orchestrators are going through rapid evolution as well. As they garner momentum, new software platforms are emerging to take advantage of these capabilities, requiring enterprises to double down on container management strategies.


Creating a Docker Overlay Network


Summary

When we get started using Docker, the typical configuration is to create a standalone application on our desktop.

In practice, though, it's usually not practical to run all of your applications on a single machine, and when it isn't, you'll need an approach for distributing them across many machines. This is where Docker Swarm comes in.

Docker Swarm provides capabilities for clustering, scalability, discovery, and security, to name a few. In this article, we’ll create a basic Swarm configuration and perform some experiments to illustrate discovery and connectivity.

In this demo, we’ll create a Swarm overlay cluster that will consist of a Swarm manager and a worker. For convenience, it will be running in AWS.

Architecture

Our target architecture will consist of a couple of Docker containers running on different EC2 hosts launched from AWS AMIs. The purpose of these examples is to demonstrate how a Docker swarm can be used to discover services running on different host machines and let them communicate with one another.

[Diagram: a Docker swarm manager connected to several swarm workers]

In our hypothetical network above, we depict the interconnections of a Docker swarm manager and a couple of swarm workers. In the examples which follow we’ll use a single manager and a single worker to keep complexity and costs low. Keep in mind that your real configurations will likely consist of many swarm workers.

Here's an example of what a potential use case might look like: an AWS load balancer configured to distribute load to a Docker swarm running on two or more EC2 instances.

[Diagram: an AWS load balancer distributing traffic to a Docker swarm across EC2 instances]

We’ll show in the examples below how you can create a Docker swarm overlay network that will allow DNS discovery of members and allow members to communicate with one another.

Prerequisites

We assume you’re somewhat familiar with Docker and have some familiarity setting up EC2 instances in AWS.

If you’re not confident with AWS or would like a little refresher, please review the following articles:

Some AWS services will incur charges, so be sure to stop and/or terminate any services you aren’t using. Additionally, consider setting up billing alerts to warn you of charges exceeding a threshold that may cause you concern.

Configuration

Begin by creating two (2) EC2 instances (free tier should be fine), and install Docker on each EC2 instance. Refer to the Docker Supported platforms section for Docker installation guidance and instructions for your instance.
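For reference, here is a minimal install sketch, assuming Amazon Linux 2 as the AMI (other distributions use their own package managers, so adjust accordingly):

sudo amazon-linux-extras install docker -y    # install the Docker engine
sudo service docker start                     # start the Docker daemon
sudo usermod -aG docker ec2-user              # let the default user run docker without sudo (re-login required)
docker --version                              # confirm the installation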

Here are the AWS ports to open to support Docker Swarm and our port connection test:

Open ports in AWS Mule SG

Type              Protocol   Port Range   Source            Description
Custom TCP Rule   TCP        2377         10.193.142.0/24   Docker swarm management
Custom TCP Rule   TCP        7946         10.193.142.0/24   Container network discovery
Custom UDP Rule   UDP        4789         10.193.142.0/24   Container ingress network
Custom TCP Rule   TCP        8083         10.193.142.0/24   Demo port for machine-to-machine communications
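If you prefer the AWS CLI to the console, the same rules can be added with authorize-security-group-ingress. A hedged sketch follows; the security group ID is a placeholder for your own:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2377 --cidr 10.193.142.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7946 --cidr 10.193.142.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 4789 --cidr 10.193.142.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8083 --cidr 10.193.142.0/24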

For our examples, we'll use the following IP addresses to represent Node 1 and Node 2:

  • Node 1: 10.193.142.248
  • Node 2: 10.193.142.246

Before getting started, let’s take a look at the existing Docker networks.

Docker Networks

docker network ls

The output of the network list should look at least like the listing below if you’ve never added a network or initialized a swarm on this Docker daemon. Other networks may be shown as well.

Results of Docker Network Listing:

NETWORK ID          NAME                DRIVER              SCOPE
fa977e47b9f3        bridge              bridge              local
705fc078c278        host                host                local
bd4caf6c1751        none                null                local

From Node 1, let’s begin by initializing the swarm.

Create the Swarm Master Node

docker swarm init --advertise-addr=10.193.142.248

You should get a response that looks like the one below. We’ll use the token provided to join our other node to the swarm.

Results of swarm init

Swarm initialized: current node (v9c2un5lqf7iapnv96uobag00) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5bbh9ksinfmajdqnsuef7y5ypbwj5d9jazt47urenz3ksuw9lk-227dtheygwbxt8dau8ul791a7 10.193.142.248:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

It takes a minute or two for the Root CA Certificate to synchronize through the swarm, so if you get an error, give it a few minutes and try again.

If you happen to misplace the token, you can use the join-token subcommand to list the tokens for managers and workers. For example, on Node 1, run the following:

Manager Token for Node 1

docker swarm join-token manager

Next, let’s join the swarm from Node 2.

Node 2 Joins Swarm

docker swarm join --token SWMTKN-1-5bbh9ksinfmajdqnsuef7y5ypbwj5d9jazt47urenz3ksuw9lk-227dtheygwbxt8dau8ul791a7 10.193.142.248:2377
This node joined a swarm as a worker.

From Node 1, the swarm master, we can now look at the connected nodes.

On Master, List All Nodes

docker node ls

Results of Listing Nodes

ID                          HOSTNAME            STATUS   AVAILABILITY   ENGINE VERSION
2quenyegseco1w0e5n1qe58r3   ip-10-193-142-248   Ready    Active         18.03.1-ce
wrjk02g909c6fnuxlepmksuz4   ip-10-193-142-246   Ready    Active         18.03.1-ce

Also, notice that an ingress network has been created; this provides an entry point for our swarm network.

Results of Docker Network Listing

NETWORK ID          NAME                DRIVER              SCOPE
fa977e47b9f3        bridge              bridge              local
705fc078c278        host                host                local
bd4caf6c1751        none                null                local
qrppfipdu098        ingress             overlay             swarm

Let’s go ahead and create our Overlay network for standalone containers.

Overlay Network Creation on Node 1

docker network create --driver=overlay --attachable my-overlay-net
docker network ls

Results of Docker Network Listing

NETWORK ID          NAME                DRIVER              SCOPE
fa977e47b9f3        bridge              bridge              local
705fc078c278        host                host                local
bd4caf6c1751        none                null                local
qrppfipdu098        ingress             overlay             swarm
vn12jyorp1ey        my-overlay-net      overlay             swarm

Note the addition of our new overlay network to the swarm. Now we join the overlay network from Node 1.

Run Our Container, Join the Overlay Net

docker run -it --name alpine1 --network my-overlay-net alpine

Next, join the overlay network from Node 2. We'll open port 8083 to test connectivity into our running container.

Run Our Container, Join the Overlay Net

docker run -it --name alpine2 -p 8083:8083 --network my-overlay-net alpine

Verify Our Overlay Network Connectivity

With our containers running, we can test that we can discover hosts using the DNS configured by the swarm. From Node 2, let's ping the Node 1 container.

Node 2 Pings Node 1, Listens on Port 8083

ip addr              # show our IP address
ping -c 2 alpine1    # ping the alpine1 container on Node 1
nc -l -p 8083        # create a listener on port 8083

From Node 1, let's ping the Node 2 container and connect to its open listener on port 8083.

Node 1 Pings Node 2, Connect to Node 2 Listener on Port 8083

ip addr              # show our IP address
ping -c 2 alpine2    # ping the alpine2 container on Node 2
nc alpine2 8083      # connect to the alpine2 listener on port 8083
Hello Alpine2
^C

There you have it: you created a TCP connection from Node 1 to Node 2 and sent a message. Similarly, your services can connect to one another and exchange data when running on the Docker overlay network.

With these fundamental building blocks in place, you’re ready to apply these principles to real-world designs.
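As a next step beyond standalone containers, the same overlay network can also back a replicated swarm service. Here is a minimal sketch, run from the manager node; the service name and image are only examples:

docker service create --name web --replicas 2 --network my-overlay-net nginx
docker service ls    # confirm the replicas are running across the swarm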

Cleanup

With our testing complete we can tear down the swarm configuration.

Remove Node 2 Swarm

docker container stop alpine2
docker container rm alpine2
docker swarm leave

Remove Node 1 Swarm

docker container stop alpine1
docker container rm alpine1
docker swarm leave --force

This concludes our brief examples with creating Docker Overlay Networks. With these fundamental building blocks in place, you now have the essential pieces necessary for building larger, more complex Docker container interactions.

Be sure to remove any AWS assets you may have used in these examples so you don’t incur any ongoing costs.

I hope you enjoyed reading this article as much as I enjoyed writing it. I'm looking forward to your feedback!


Implementing Cloud-Native Enterprise Applications with Open-Source Software

This article is featured in the new DZone Guide to Containers: Development and Management. Get your free copy for more insightful articles, industry statistics, and more! 

Linux container technologies such as kernel namespaces, cgroups, chroot, AppArmor, and SELinux policies have been in development since 1979. In 2013, an organization called dotCloud built a complete ecosystem for making Linux containers extremely usable by introducing a better interface, a REST API, a CLI, and a layered container image format — and called it Docker. This exploded interest in Linux containers and began to revolutionize the way software is designed and deployed to achieve optimal infrastructure resource usage, scalability, and maintenance. Google, which had been contributing to cgroups, LMCTFY, and other related Linux kernel features, initiated the Kubernetes project in 2014. With more than a decade of experience running containers at scale, Google was well-positioned to introduce this open-source container cluster manager. This marked the next major milestone for container technologies and led to the inception of newer architectural patterns, distributed service management frameworks, serverless technologies, observability tools, and, most importantly, the Cloud Native Computing Foundation (CNCF).
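To make those kernel primitives concrete, here is a small hedged sketch (assuming a Linux host with Docker installed) showing that a container is just ordinary processes isolated by namespaces and constrained by cgroups:

docker run -d --name demo --memory 256m nginx                          # start a container with a cgroup memory limit
docker inspect --format '{{.State.Pid}}' demo                          # host PID of the container's main process
sudo ls /proc/$(docker inspect --format '{{.State.Pid}}' demo)/ns      # the namespaces isolating that process
docker rm -f demo                                                      # clean up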

Today, enterprises are rapidly adopting these technologies for implementing production systems using containers at different scales. CNCF is now taking the lead on standardizing the cloud-native technology stack by categorizing the spectrum, defining specifications, improving interoperability, allowing technology leaders to collaborate, and building an open-source, vendor-neutral ecosystem that is portable to public, private, and hybrid clouds.

What Is Cloud-native?

The ideas behind “cloud-native” are not new, but the term gives a name to the concepts used for building and running applications on any cloud platform without having to change the application code. This approach may involve adopting a microservices architecture, containerizing application components, and dynamically orchestrating containers using a cloud-agnostic container cluster manager — including tools for managing services and observing the deployments.

A Reference Architecture for Implementing Cloud-native Enterprise Systems


Figure 1: A reference architecture for cloud-native enterprise systems.

The above diagram illustrates a reference architecture for implementing cloud-native enterprise systems using container-based technologies. According to the current state of the ecosystem, microservices, serverless functions, integration services, and managed APIs are ready to be deployed on containers handling production workloads. Those components can be deployed on private, public,  and hybrid cloud environments using a cloud-agnostic container orchestrator. Nevertheless, stateful, complex distributed systems such as database management systems, analytics platforms, message brokers, and business process servers may need more maturity at the container cluster manager and at the application level for natively supporting completely automated deployments.

Container Orchestration


Figure 2: A reference architecture for a container cluster manager.

Today, Kubernetes is considered to be the most compelling open-source platform for orchestrating containers. It is now at Version 1.10 and currently being used in production to manage hundreds (if not thousands) of container hosts running millions of containers. At its core, it provides features for container grouping, self-healing, service discovery, load balancing, autoscaling, running daemons, managing stateful components, managing configurations, credentials, and persistent volumes. Moreover, it provides extension points to implement custom resources and controllers for advanced orchestration requirements needed by complex systems such as big data analytics, databases, and message brokers. Kubernetes can be installed on any virtualization platform without requiring any special tools, and can be spawned on AWS, Google Cloud, Azure, IBM Cloud, and Oracle Cloud as managed services by only paying for the virtual machines required for running the workloads.
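To make a few of those primitives concrete, here is a minimal hedged sketch using kubectl; the deployment name and image are purely illustrative:

kubectl create deployment hello --image=nginx                            # container grouping via a Deployment
kubectl scale deployment hello --replicas=3                              # declarative scaling; failed pods are replaced automatically
kubectl expose deployment hello --port=80                                # service discovery and load balancing via a Service
kubectl autoscale deployment hello --min=3 --max=10 --cpu-percent=80     # horizontal autoscaling on CPU usage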

Alternatively, organizations can also use RedHat OpenShift, Mesosphere DC/OS, Hashicorp Nomad, and Docker Swarm for container orchestration. OpenShift is a Kubernetes distribution which provides additional application lifecycle management and security features. It is available as a CentOS-based open-source distribution and a RHEL-based enterprise distribution. DC/OS has been implemented using Apache Mesos, Marathon, and Metronome by Mesosphere, and it's specifically optimized for running big data analytics systems such as Apache Spark, Cassandra, Kafka, HDFS, etc. It also has an open-source distribution and an enterprise edition; some key features such as user management and credential management are missing in the open-source version. Docker Swarm is another container cluster manager, implemented by Docker and bundled into the Docker runtime. By design, it integrates well with Docker and provides a simpler deployment model compared to other systems. Nevertheless, Swarm has not been adopted as widely in the industry as the other cluster managers.

Microservices For Better Agility, Speed, And Cost


Figure 3: A reference implementation of microservices architecture.

The microservices architectural style proposes that an application be implemented as a collection of independently manageable, lightweight services — in contrast to its opposite, monolithic architecture, in which an application is implemented as a single unit. The microservices approach gives each service a single focus and a loosely coupled, lightweight, highly scalable modular architecture, achieving better resource usage, optimized deployment models, lower maintenance costs, and faster delivery times. Such services can be implemented in any programming language that supports REST- or RPC-based services.

Spring Boot, Dropwizard, and Spark are the most widely used open-source microservices frameworks for Java. Out of these, Spring Boot provides the advantages of Spring's dependency injection, data access, batch processing, security, and integration capabilities. Services that are mission-critical and require the lowest latencies can be implemented in Golang using Echo, Iris, or Go kit. Otherwise, if the developers' preference leans toward JavaScript, Express, Feathers, and LoopBack would be strong options; LoopBack stands out for exposing CRUD APIs with OAuth2 security in a few lines of code. Besides the above, Flask, Sanic, and Tornado would be attractive alternatives for Python developers.

Optimal Governance for Microservices with Service Mesh Architecture


Figure 4: Usage of a service mesh in microservices architecture.

Once a large system is decomposed into hundreds or thousands of smaller services according to microservices architecture (MSA), managing inter-service communication, service identity, authorization, monitoring, logging, and telemetry collection can become challenging. Last year, IBM, Google, and Lyft joined together to address this problem with the Istio project, combining IBM's Amalgam8 project, Google's Service Control implementation, and Lyft's Envoy proxy. Istio might be today's most comprehensive service mesh platform: it provides traffic management, security, policy enforcement, and telemetry extraction at deployment time without requiring changes to service code. Istio does this by injecting an Envoy proxy into the service pods and dynamically intercepting the communication between services, controlling traffic from a central management layer. Service security can be managed with Istio using mutual TLS and role-based access control (RBAC). Monitoring is provided with Prometheus, Grafana, Heapster, and native GCP and AWS monitoring tools, and distributed tracing is provided with Zipkin and Jaeger. Due to the popularity of Istio, NGINX implemented another service mesh based on Istio called nginMesh, using NGINX as the sidecar proxy.
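As a hedged illustration of the injection model on Kubernetes (the namespace and the app=hello label are assumptions for the example), automatic sidecar injection is typically enabled per namespace:

kubectl label namespace default istio-injection=enabled    # new pods in this namespace get the Envoy sidecar injected
kubectl delete pod -l app=hello                             # recreate existing pods so they come back with the sidecar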

Linkerd is another popular open-source service mesh platform, implemented using Finagle and Netty by Buoyant and later donated to CNCF. Unlike the proxy sidecars in Istio, Linkerd uses proxy daemons on each container host to intercept inter-service communication. This model requires services to route requests to the proxies explicitly using additional configuration. Buoyant has since improved on this architecture and produced Conduit, which targets Kubernetes and incorporates a Kubernetes object injection model similar to Istio's.

Using Serverless Functions for Event-Driven Executions


Figure 5: Usage of serverless functions in microservices architecture.

One of the key aspects of MSA is its ability to reduce infrastructure resource usage by allocating resources at a granular level according to actual service requirements. Nevertheless, at any given time, it still needs to run at least one container per service. The serverless architecture attempts to optimize this further by decomposing the deployable unit down to functions and running functions only when needed. Serverless functions became popular when AWS introduced the AWS Lambda platform. Today, almost all public cloud vendors provide a similar offering, such as Google Cloud Functions, Azure Functions, and IBM Cloud Functions. Most of these platforms support programming languages such as Node.js, Java, and Python — except for Google Cloud Functions, which only supports Node.js. On the above public cloud offerings, users are billed only for the function invocations they make, based on the amount of infrastructure resources and time required to execute each one.


Today, Apache OpenWhisk is one of the most widely used serverless frameworks for implementing on-premise serverless systems. It was initially developed by IBM and later donated to ASF for wider community adoption. OpenWhisk was designed using a highly extensible architecture to enable adding new languages and event triggers without much effort. Moreover, it supports creating a chain of functions for implementing a sequence of business operations. One of the key design decisions OpenWhisk has made for optimizing resource usage is to create containers on demand and preserve them for a given period of time.
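For a flavor of the developer workflow, here is a minimal hedged sketch using the OpenWhisk wsk CLI; the action name and source file are illustrative:

wsk action create hello hello.js                        # register a JavaScript function as an action
wsk action invoke hello --result --param name World     # invoke it synchronously and print only the result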

Fission is another popular serverless platform specifically designed for Kubernetes. In contrast to OpenWhisk, Fission uses a configurable pool of containers for reducing the cold start time of functions and provides function composition capabilities. In terms of deployment, Fission can be integrated with Istio for incorporating service mesh features and function autoscaling based on Kubernetes Horizontal Pod Autoscalers. Kubeless is a similar platform developed by Bitnami for hosting serverless functions on Kubernetes. It uses a custom Kubernetes resource for deploying code, and as a result, functions can be managed using the standard Kubernetes CLI.


Figure 6: Usage of integration services in microservices architecture.

Integrations can be implemented as microservices using standard programming constructs, but if the system grows over time, the sheer number of integrations requires considerable effort and repetitive work. Ballerina is a new programming language purposely built by WSO2 to fill this gap in the container-native ecosystem. It provides integration constructs and connectors for implementing distributed system integrations with distributed transactions, reliable messaging, stream processing, and workflows, and it offers native support for Kubernetes, Prometheus, and Jaeger.

Distributed Observability Tools for Immeasurable Insights

Observability mainly divides into three categories: monitoring, logging, and tracing. Monitoring involves observing the health of the applications, including socket status, resource usage, request counts, latencies, etc., and generating alerts for the operations teams to take actions on actual system failures (excluding false positives). Prometheus is one of the most widely used tools available today for monitoring distributed systems. It was initially developed as an open-source project by ex-Googlers working at SoundCloud and later donated to CNCF. Prometheus provides features for active scraping, storing, querying, graphing, and alerting based on time series data.
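As a small hedged example, once a Prometheus server is scraping targets (assuming the default port 9090), its HTTP API can be queried directly:

curl 'http://localhost:9090/api/v1/query?query=up'    # instant query: which scrape targets are currently up
curl 'http://localhost:9090/api/v1/targets'           # list the configured scrape targets and their state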

Centralized logging is the second crucial aspect of distributed systems for investigating issues in production environments. Fluentd is one of the main open-source projects of this segment. It provides a unified logging system for connecting various sources of log data to various destination systems. Fluentd was initially developed at Treasure Data and later donated to CNCF. It can be integrated with other open-source monitoring tools, such as Elasticsearch and Kibana, to implement a complete solution for monitoring service logs. Moreover, it can be used for collecting data from a wide variety of systems (including lightweight IoT devices) and building data analytics systems.

Distributed tracing is the third key aspect; it helps provide better insight into latency bottlenecks, root causes of errors, resource utilization issues, etc., for applications built as a composition of services. Jaeger, Zipkin, and AppDash are three popular open-source projects inspired by Google's distributed tracing platform, Dapper. Of the three, Jaeger and Zipkin are more popular, and Jaeger has better support for OpenTracing-compatible clients.

Conclusion

Modern enterprises are now adopting microservices architecture for implementing highly scalable, cloud-agnostic applications that achieve better agility, speed, and lower cost. At a high level, designing such systems may require technologies for container orchestration, implementing microservices, serverless functions, integration services, APIs, service management, and observability. Today, CNCF is taking the lead in providing a vendor-neutral ecosystem for implementing such cloud-native applications using open-source technologies that empower state-of-the-art patterns and practices. Over the last few years, container orchestration features required for hosting stateless applications have matured and are now used in production by many organizations. The mechanics required to run complex stateful applications on containers, such as distributed databases and big data analytics systems, are now being supported. Over time, almost all software applications may run on container platforms incorporating the above technologies. Therefore, organizations should plan for the future by considering the reference architecture explained in this article.



Why Is Swagger JSON Better Than Swagger Java Client?

  • It’s the old way of creating web-based REST API documents through the Swagger Java library.

  • It’s easy for Java developers to code. 

  • All API description of endpoints will be added in the Java annotations parameters.

  • Swagger API dependency has to be added to the Maven configuration file POM.xml.

  • It creates a performance overhead because of the extra processing time for generating the Swagger GUI files (CSS, HTML, JS, etc.), and parsing the annotation logic on the controller classes adds further overhead. It also makes the build heavier to deploy on microservices, where build size should be smaller.

  • The code looks dirty because extra code has to be added to the Spring MVC controller classes through Spring annotations. If the description of the API contract is long, it makes the code unreadable and hard to maintain.

  • Any change in an API contract requires a Java build and redeployment, even if it's only a simple text change, like the API definition text.

  • The biggest challenge is sharing the API with client/QA/BA teams before the actual development starts and making frequent amendments. Service consumers may change their requirements frequently, and it is very difficult to make those changes in code and regenerate the Swagger GUI HTML pages by redeploying and sharing the updated Swagger dashboard on the actual dev/QA environments.

  • You can copy and paste the swagger_api_doc.json file content into https://editor.swagger.io/. It will help you modify the content and generate an HTML page; the Swagger GUI provides a web-based interface similar to Postman. (A hedged sketch of fetching this document from a running service follows below.)
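    As a hedged example, if the service generates its document with Springfox, the Swagger JSON can usually be fetched from the running application and then pasted into the online editor; the host and port here are assumptions:

    curl http://localhost:8080/v2/api-docs > swagger_api_doc.json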


    Up and Running with Alibaba Cloud Container Registry

    Let’s say you are a container microservices developer. You have a lot of container images, each with multiple versions, and all you are looking for is a fast, reliable, secure, and private container registry. You also want to instantly upload and retrieve images, and deploy them as a part of your uninterrupted integration and continuous delivery of services. Well, look no more! This article is for you.

    This article introduces you to the Alibaba Cloud Container Registry service and its abundance of features. You can use it to build images in the cloud and deploy them in your Alibaba Cloud Docker cluster or premises. After reading this article, you should be able to deploy your own Alibaba Cloud Container Registry.

    What is Alibaba Cloud Container Registry?

    Alibaba Cloud Container Registry (ACR) is a scalable server application that builds and stores container images and enables you to distribute Docker images. With ACR, you have full control over your stored images. ACR has a number of features, including integration with GitHub, Bitbucket, and self-built GitLab. It can also automatically build new images from source code after compiling and testing.

    In this tutorial, we will build and deploy containerized images using Alibaba Cloud Container Registry.

    Step 1: Activating Alibaba Cloud Container Registry

    You should have an Alibaba Cloud account set up. If you don’t have one, you can sign up for an account and try over 40 products for free. Read this tutorial to learn more.

    The first thing you need to do is to activate the Alibaba Cloud Container Registry. Go to the product page and click on Get it Free.

    It will take you to the Container Registry Console where you can configure and deploy the service.

    Step 2: Configuring Alibaba Cloud Container Registry

    Create a Name Space

    A namespace is a collection of repositories, and a repository is a collection of images. I recommend creating one namespace for each application and one repository for each service image.

    Picture1

    After creating a namespace, you can set it up as public read or private in the settings.

    Create and Upload a Local Repository

    A repository (repo) is a collection of images. I suggest you collect all versions of the image of one service in one repository. Click Create Repo and fill out the information on the page. Select Local Repository. After a short while, a new repository will be created with its own repository URL. You can see it on the Image List page.

    You can now upload your locally built image to this repository.

    Picture2

    Step 3: Connecting to Container Registry with Docker Client

    In order to connect to any container registry from the Docker client, you first need to set a Docker login password in the ACR console. You will use this password on your Docker client to log in to the registry.

    Picture3

    Next, on the Image List page, click Admin in front of the repository you want to connect to. Here you can find all the necessary information and commands to allow the Docker client to access the repository. You can see the image name, image type, and the internet and intranet addresses of the repository. You can use the internet address to access the repository from anywhere in the world. If you are using the repository with your Alibaba Cloud container cluster, use the intranet address instead, because it will be much faster.

    Copy the login, push, and pull commands. You will need them later.

    Picture4

    Start up the Docker client on your local machine. You can refer to docker.io to install a Docker client on your computer. On a Mac, run the Docker.app application to start the Docker client.

    Log in as a user on the Docker client.

    docker login --username=random_name@163.com registry-intl.ap-southeast-1.aliyuncs.com
    

    Note: Replace random_name with the actual username.

    You will see a login successful message after you enter the password and hit Enter. At this point, you are authenticated and connected to the Alibaba Cloud Container Registry.

    Picture5

    Step 4: Building an Image Locally and Pushing to ACR

    Let's write a Dockerfile to build an image. The following is a sample Dockerfile; you can choose to write your own Dockerfile:

    ######################
    # This is the first image for the static site.
    #####################
    FROM nginx
    #A name can be given to a new build stage by adding AS name to the FROM instruction.
    #ARG VERSION=0.0.0
    LABEL NAME = static-Nginx-image START_TIME = 2018.03.10 FOR="Alibaba Community" AUTHOR = "Fouad"
    LABEL DESCRIPTION = "This image is built for static site on DOCKER"
    LABEL VERSION = 0.0.0
    #RUN mkdir -p /var/www/
    ADD /public /usr/share/nginx/html/
    EXPOSE 80
    RUN service nginx restart

    Run the Docker build command to build the image. In order to later push the image to the repository, you need to tag the new image with the registry:

    docker build -t registry-intl-internal.ap-southeast-1.aliyuncs.com/fouad-space/ati-image .
    

    Picture6

    Once the build is complete, the image will already be tagged with the repository name. You can see the new image by using the command:

    docker image ls
    

    Picture7

    Push the image to ACR repository with the command:

    docker push registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image:latest
    

    Picture8

    To verify that the image was pushed successfully, check the Container Registry console. Click Admin in front of the repository name and then click Image Version.

    Picture9

    Pull the image and create a container. Run the docker pull command:

    docker pull registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image:latest
    

    Picture10

    Since I have already pulled the image to my local computer, the message says the image is up to date.

    Create a new container using this image:

    docker run -ti -p 80:80 registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image bash
    

    Picture11

    Step 5: Building an Image Repo with GitHub

    With Alibaba Cloud Container Registry, you can build images in the cloud as well as push them directly to the registry. Besides this, Container Registry supports automatically triggering a build when the code changes.

    If Automatically create an image when the code changes is selected in Build Settings, the image can be automatically built after you submit the code, without requiring you to manually trigger the build. This saves manual work and keeps the images up-to-date.

    Create a GitHub repo and upload your Docker file to the repo.

    Picture12

    Then, return to the Container Registry console to create a repo. Select GitHub repo path and complete the repository creation steps.

    Picture13

    Once the repository is created, go to Image List and click on Admin on the repo name, click Build, and finally click Build Now.

    You can see the build progress in the menu and the complete logs of the build process.

    Picture14

    You can also see all the build logs. Isn’t it neat?

    Picture15

    Once the build is complete, your image is ready to be deployed. You can pull it to the local Docker engine or deploy this image on Alibaba Cloud Container Service.

    Step 6: Creating a Webhook Trigger

    Webhook is a type of trigger. If you configure this, it will push a notification when an image is built and therefore set up a continuous integration pipeline.

    How does it work? Well, suppose you have set a Container Service trigger for the webhook. When an image is built or rebuilt, the applications in Container Service are automatically triggered to pull the latest image and redeploy.

    To create a webhook, you first need to go to container service and get the application web URL.

    Picture16

    Now use this URL to configure a hook. Every time the image in the container registry is updated, this application will be redeployed with the new image. Be very careful, though: an incorrect setup can bring down the whole application. Rollback is possible in Container Service, however, so it is not a disaster.

    Picture17

    Summary

    In this article, you learned the following:

    • What the Alibaba Cloud Container Registry service is and how you can implement it.
    • How to create a namespace and repository to host Docker images.
    • How to build a Docker image locally and push it to ACR.
    • How to pull a Docker image from ACR and instantiate a new container with it.
    • How to build an image in Container Registry from GitHub source code.
    • How to automatically trigger a pull of the latest image and redeploy the service.


    Are Developers Ready for Cloud Platforms?

    Introduction

    IT infrastructures are aggressively moving to cloud environments. Most organizations have set a new vision for moving to cloud platforms. These aggressive changes are pushed from top-level executives (CTO, CEO, and CDO), while lower-level architects talk about SaaS, PaaS, and microservices for cloud-native applications. Either way, developers will get their hands dirty and struggle to see inside the cloud magic box to find out what will and will not work.

    Cloud applications have unique characteristics, such as support for distributed architectures with high scalability and the flexibility to move across multiple cloud environments (a vendor-agnostic approach). From development through production support, entirely different tools and techniques need to be followed to take full advantage of a cloud platform. Training developers in the right tools to develop, debug, and test cloud-based applications is a key challenge. This article talks about new approaches and tools for cloud environments.

    Serverless Computing

    Suppose a customer wants to develop new services on a highly scalable infrastructure to support IoT and big data platforms. Function-based (serverless) programming is one of the best options to consider. The top cloud providers support serverless functions (AWS Lambda, Azure Functions, Google Cloud Functions, IBM Cloud Functions). OpenFaaS is another interesting open-source project for experimenting with serverless functions.

    Choosing a Programming Language

    Startup time, memory efficiency, binary size, and concurrency are key factors while developing a microservice architecture on a cloud platform.

    Golang – A cloud startup trying to decide what language to use for exploring cloud architecture should consider Go. Go is a good choice, with features such as concurrency and being a lightweight, statically typed, compiled language. One UK bank (Monzo) has built its entire banking architecture as microservices in Go.

    Java – Most products are developed in Java, and it has large developer communities. Spring Boot and Java modules (from JDK 9) are good options for cloud-native architecture. This is a good starting point for migrating legacy systems to a cloud platform.

    .NET Core – As we know, Microsoft did not support the open-source community for a long time. This was one of the main reasons .NET was not adopted by many companies, even though Microsoft has always provided solid development tools, easy syntax, and good tutorials. Microsoft lately recognized that open-source options bring more innovation and more business to the Azure cloud. As a result, .NET Core has attracted the open-source community and is one of the best options for the Azure cloud platform.

    R Math – As many of you have noticed, data science fever has hit the computing world across industries, but if you look closely, no new language has been invented for solving data science puzzles (statistics and mathematics). Since the cloud provides massive processing power at low cost, the industry is trying to solve AI puzzles using established techniques and tools. R is an implementation of the S programming language; S was created in 1976, and R's libraries implement statistical and mathematical functions.

    Python – Python supports multiple programming paradigms and is strongly typed. It is easy to learn, has powerful analytics libraries, and is strongly supported by the open-source community. These are the reasons data scientists are attracted to Python.

    Choosing Storage

    Massively scaling up a frontend service and communicating with an RDBMS through a connection pool will not satisfy every use case. Cloud-centric databases need to be considered when building a strong storage platform.

    Amazon DynamoDB – It provides single-digit-millisecond latency at any scale. Data is stored in NoSQL format, and it supports document and key-value store models as well as building graph databases.

    Azure Cosmos DB – It is a globally distributed database with horizontal scaling. Data is stored in NoSQL format, with single-digit-millisecond latency assured at the 99th percentile. It supports document, graph, key-value, table, and column-family data models, and API support extends to multiple languages; for more details, please refer to the documentation.

    MongoDB – MongoDB is one of the early providers of NoSQL databases. It is a very good open-source and cost-effective option for customers.

    IBM Cloudera DB – Cassandra is the underlying database for Cloudera. It supports Java-based APIs for communicating with the NoSQL database.

    Oracle NoSQL DB – Finally, Oracle has also joined in with a NoSQL DB, with support for load balancing and scaling nodes horizontally.

    Service Mesh

    Microservices architecture has brought new challenges for handling failure, routing, and service discovery. A service mesh needs to be considered when building cloud-centric services at large scale.

    What is Service Mesh?

    Service mesh was described by buoyant.io as: “…a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud-native application. In practice, the service mesh is typically implemented as an array of lightweight network proxies that are deployed alongside application code, without the application needing to be aware.” In simple terms, it acts as a proxy layer for communicating with microservices.

    Linkerd – It communicates between services and provides an abstraction layer for accessing microservices. The key features are service discovery, load balancing, circuit breaking, dynamic request routing, and distributed tracing.

    Envoy – Originally built at Lyft for internal use, it has been open-sourced as a service mesh proxy. It was not designed specifically for the Kubernetes platform; Istio is trying to address those gaps.

    Istio – It creates a network of deployed services with load balancing and service-to-service authentication. Service monitoring is one of the key features it supports. In the future, we may get professional support from big vendors such as Red Hat.

    Messaging Layer

    IoT is another growth area across all industries. I'm sure many of you have heard the phrase "data is the new oil." Autonomous vehicles, mobile devices, and many other devices will pump massive amounts of data to cloud platforms going forward. Event sourcing is another area, capturing complete online user activity. For example, if you log in, browse some phones, add one to your cart, remove it, and add another brand, this data is captured as an event stream and stored for future use. Data streaming tools are trying to address these problems.

    Kafka – As many of you know from message broker concepts, a Kafka topic is a stream of records. The Kafka producer API and consumer API support interacting with topics, and a Kafka cluster has built-in features for running many brokers and servers.
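    As a small hedged sketch, assuming a recent Kafka release whose console tools accept --bootstrap-server, a topic can be created and exercised from the shell; the topic name and broker address are examples:

    kafka-topics.sh --bootstrap-server localhost:9092 --create --topic user-events --partitions 3 --replication-factor 1
    kafka-console-producer.sh --bootstrap-server localhost:9092 --topic user-events       # type records, one per line
    kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic user-events --from-beginning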

    Kinesis – An Amazon Kinesis stream has one or more shards. A shard is a uniquely identified group of records in the stream. Amazon claims terabytes per hour can be captured from IoT devices. Kinesis can be integrated, via a Kinesis consumer, with other Amazon products such as S3, Redshift, etc.

    Containers and Infrastructure as Code

    Containerization bundles everything needed to run software in a cloud environment. Every bundle has code, environment variables, libraries, and more. These bundles can swim in any cloud environment, giving the flexibility to move to a different cloud environment at large scale. I just want to talk about two products, Docker and Kubernetes.

    Docker – It provides an open standard for packaging and distributing containerized applications. The Docker engine allows you to build and run containers. Docker images are stored in Docker Hub, much as Maven JARs are stored in a repository.

    Kubernetes – It provides an underlying platform for running multiple containers seamlessly. It supports the orchestrating, distributing, and scaling of containers, and Docker images run in a Kubernetes environment.

    Conclusion

    These tools and technologies come from my own experience. This is just the beginning for cloud platforms, and choices may change based on specific contexts and use cases. Large enterprise companies need to give more importance to developer tools and technologies while building a cloud platform. Separate roadmaps need to be created for application development, storage, security, logging and debugging, monitoring, and testing during a cloud migration, depending on the company landscape, rather than just a plain roadmap for the cloud. This will give developers a clear picture of how to increase productivity and achieve their goals.


    Introduction to Spring Cloud: Config (Part 1)

    Spring Cloud provides tools for developers to quickly build some of the common patterns in distributed systems (e.g. configuration management, service discovery, circuit breakers, intelligent routing, micro-proxy, control bus, one-time tokens, global locks, leadership election, distributed sessions, and cluster state).

    It helps manage the complexity involved in building a distributed system.

    In this tutorial series, we’ll be using some of these patterns.

    Microservices

    Microservices are an architectural style that decomposes an application into a collection of loosely coupled services.

    It improves modularity, thus making the application easier to develop, test, and deploy.

    It also makes the development process more efficient by parallelizing small teams to work on different services.

    There are also various difficulties regarding communication between services, managing configurations, etc. in a microservice architecture.

    One should go through the Twelve-Factor App Manifesto to solve many of the problems arising with a Microservice architecture.

    Spring Cloud Config

    Spring Cloud Config provides server- and client-side support for externalized configuration in a distributed system.

    It has two components, the Config Server and the Config Client.

    The Config Server is a central place to manage external properties for applications across all environments. We could also version the configuration files using Git. It exposes REST APIs for clients to connect and get the required configuration. We can also leverage Spring Profiles to manage different configuration files for different profiles (environments).

    For example, we may decide to use an embedded H2 database in our local dev profile, but use PostgreSQL in our prod profile.

    The Config Client binds to the Config Server and initializes Spring Environment with remote property sources.

    Dependencies

    We’ll use Gradle to build our projects. I recommend using Spring Initializr for bootstrapping your projects.

    Config Server

    We’ll use:

    • Spring Boot 2
    • Spring Cloud Config Server

    buildscript {
        ext {
            springBootVersion = '2.0.1.RELEASE'
        }
        ...
    }

    ext {
        springCloudVersion = 'Finchley.M9'
    }

    dependencies {
        compile('org.springframework.cloud:spring-cloud-config-server')
        ...
    }
    

    Config Client

    We’ll use:

    • Spring Boot 2
    • Spring Boot Actuator
    • Spring Boot Webflux
    • Spring Cloud Starter Config

    buildscript {
        ext {
            springBootVersion = '2.0.1.RELEASE'
        }
        ...
    }

    ext {
        springCloudVersion = 'Finchley.M9'
    }

    dependencies {
        compile('org.springframework.boot:spring-boot-starter-actuator')
        compile('org.springframework.boot:spring-boot-starter-webflux')
        compile('org.springframework.cloud:spring-cloud-starter-config')
        ...
    }
    

    Auto-Configuration

    We’ll leave Spring Boot to automatically configure our application based on the dependencies added and the properties specified.

    Config Server

    @SpringBootApplication
    @EnableConfigServer
    public class ConfigServerApplication {

        public static void main(String[] args) {
            SpringApplication.run(ConfigServerApplication.class, args);
        }
    }
    

    We’ll also have to specify the Git repository where the configurations are stored.

    server.port=8888
    spring.cloud.config.server.git.uri=https://github.com/mohitsinha/spring-cloud-configuration-repo
    

    spring.cloud.config.server.git.uri specifies the Git repository where the configurations are stored.

    You can also pass the user credentials to access the Repository by passing the username and password.

    spring.cloud.config.server.git.username
    spring.cloud.config.server.git.password
    

    Config Client

    @SpringBootApplication
    public class LibraryServiceApplication {

        public static void main(String[] args) {
            SpringApplication.run(LibraryServiceApplication.class, args);
        }
    }

    spring.application.name=library-service
    spring.cloud.config.uri=http://localhost:8888
    

    spring.application.name is used to fetch the correct configuration file from the Git repository. It’s a very important property when used with Spring Cloud projects. We’ll see this later in this tutorial series.

    The bootstrap properties are added with higher precedence, hence they cannot be overridden by local configuration. You can read more about it here.

    management.endpoints.web.exposure.include=refresh
    

    management.endpoints.web.exposure.include=refresh exposes the refresh actuator endpoint. We’ll look at it in a while.

    REST API

    Let’s look at some of the REST APIs that are automatically created to manage and monitor these services.

    Config Server

    Let's look at the configuration values for the application library-service.

    curl http://localhost:8888/library-service/default

    The output obtained will look like this:

    { "name": "library-service", "profiles": [ "default" ], "label": null, "version": "4df9520f00d65722bf79bfe5ece03c5a18c5c1f1", "state": null, "propertySources": [ { "name": "https://github.com/mohitsinha/spring-cloud-configuration-repo/library-service.properties", "source": { "library.name": "Spring Developers Library" } } ]
    }
    

    It gives details about the Git repository for the configuration files, the configuration values, etc. Here, we can see the value for the property library.name.

    Config Client

    We’ll add a web endpoint that will use the property library.name defined in the configuration in the Git repository.

    @RestController
    @RefreshScope
    class LibraryController {

        @Value("${library.name}")
        private String libraryName;

        @GetMapping("/details")
        public Mono<String> details() {
            return Mono.just(libraryName);
        }
    }
    

    A Bean marked with the annotation RefreshScope will be recreated when a configuration change occurs and a RefreshScopeRefreshedEvent is triggered.

    Whenever a configuration change occurs in the Git repository, we can trigger a RefreshScopeRefreshedEvent by hitting the Spring Boot Actuator Refresh endpoint. The refresh endpoint will have to be enabled.

    management.endpoints.web.exposure.include=refresh

    The cURL command for the Actuator Refresh endpoint:

    curl -X POST http://localhost:8080/actuator/refresh \
      -H 'content-type: application/json' \
      -d '{}'
    

    This will update the configuration values on the Config Client.

    We can now check the endpoint and verify if the new configuration value is being reflected or not.

    curl http://localhost:8080/details

    What if there are multiple instances of the Client running, and we want to refresh the configuration values in all of them?

    This can be achieved by using Spring Cloud Bus. It links the nodes of a distributed system with a lightweight message broker. You can read more about it here.
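    As a hedged sketch, with Spring Cloud Bus on the classpath and the bus-refresh endpoint exposed via management.endpoints.web.exposure.include (just as refresh was above), a single call broadcasts the refresh event to every connected instance:

    curl -X POST http://localhost:8080/actuator/bus-refresh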

    Conclusion

    I have tried explaining, with a simple example, how to manage externalized configurations using Spring Cloud Config. In the next tutorial, we’ll look at Service Discovery.

    You can find the complete example for the Config Server and Library Service on GitHub.


    Service Mesh: The Best Way to Scale Enterprise Apps

    Microservices are great for DevOps, but the service-to-service communication these architectures depend on are complex to run and manage at production scale. Enter service mesh: the best way for enterprises to scale, secure and monitor apps. A service mesh is a dedicated infrastructure layer enabling service-to-service communication to be quick, secure, and reliable. If you’re building cloud native applications, you need a service mesh.

    After talking to development and operations teams it became clear that microservices are great for development velocity, but the complexity and risk in these architectures lies in the service-to-service communication that microservices depend on. We have taken an application first approach to provide a communication fabric for microservices, called a service mesh. With our supported service mesh DevOps teams have the flexibility and autonomy they desire while providing the policy, visibility and insights into their microservice environment that operations teams demand for production-grade applications.

    With this in mind, Aspen Mesh is building an enterprise-grade service mesh because we believe a robust microservice communication fabric is the best possible path to scaling containerized apps, whether in the data center or in the cloud (or both). But we also understand the needs and complexity of enterprise production environments. A service mesh needs to do more than just scale apps; it also needs to monitor and secure them. To that end, we're building Aspen Mesh on the Istio project, and providing a supported service mesh infrastructure that allows DevOps teams the flexibility and autonomy they desire while providing the policy, visibility, and insights into microservices that operations teams demand for production-grade applications.

    Advantages of a Service Mesh

    Think about your plans for microservices. Maybe you plan to have 10, 50, 100 or 1000’s of services running in your Kubernetes cluster. How do you get all of those services in your new microservice and container environments in an efficient, uniform way?

    Do you know who is talking to who and if they are allowed to? Is that communication secure? How do you debug something when it goes down? How do you add tracing or logging without touching all of your applications? Do you know what the performance or quality impacts of releasing a new version of one of those services is on the upstream and downstream services?

    A service mesh helps answer those questions. As a transparent infrastructure layer that is inserted between your microservice and the network a service mesh gives you a single point in the communication path of your applications to insert services and gather telemetry. You can do this without requiring changes to your applications.

    How Do I Get Started Using an Enterprise Grade Service Mesh?

    The concept of service mesh is brand new. In fact, until 2018 was declared “The Year of the Service Mesh” at KubeCon in December 2017, most people had never heard of a service mesh. But, we have been working on this concept in different ways for a while now and are able to offer early access to Aspen Mesh for interested customers.

    We are looking for teams on their container journey who are looking to solve real problems with their applications. We need partners who are excited to work with us and understand the value of a strong relationship.

    We are currently in early access mode for Aspen Mesh and welcome customers who are interested in working with us in that process. We anticipate full product availability in late 2018.

    Join our early access program today.


    A Development Workflow for Kubernetes Services

    A basic development workflow for Kubernetes services lets a developer write some code, commit it, and get it running on Kubernetes. It’s also important that your development environment be as similar as possible to production, since having two different environments will inevitably introduce bugs. In this tutorial, we’ll walk through a basic development workflow that is built around Kubernetes, Docker, and Envoy/Ambassador.

    Your Cloud Infrastructure

    This tutorial relies on two components in the cloud: Kubernetes and Ambassador. If you haven't already, go ahead and set them up.

    A Development Environment for Kubernetes Services

    You need a development environment for Kubernetes services. We recommend the following approach:

    • A containerized build/runtime environment, where your service is always run and built. Containerizing your environment helps ensure environmental parity across different development and production environments. It also simplifies the onboarding process for new developers.
    • Developing your microservice locally, outside of the cluster. You want a fast code/build/test cycle. If you develop remotely, the additional step of deploying to a Kubernetes cluster introduces significant latency.
    • Deploying your service into Kubernetes once you need to share your service with others (e.g., canary testing, internal development, etc.).

    You’ll need the following tools installed on your laptop:

    • git, for source control
    • Docker, to build and run your containers
    • kubectl, to manage your deployment
    • Forge, for deploying your service into Kubernetes
    • Telepresence, for locally developing your service

    Go ahead and install them now, if you haven’t already.

    Deploy a Service to Kubernetes

    In a traditional application, the release/operations team manages the deployment of application updates to production. In a microservices architecture, the team is responsible for deploying service updates to production.

    We’re going to deploy and publish a microservice, from source, into Kubernetes.

    1. We’ve created a simple Python microservice that you can use as a template for your service. This template includes:
    • a Dockerfile that specifies how your development environment and runtime environment are configured and built.
    • a service.yaml file that customizes deployments for different scenarios (e.g., production, canary, development).
    • a Kubernetes manifest (k8s/deployment.yaml) that defines how the service is run in Kubernetes. It also contains the annotations necessary to configure Ambassador for the given service.
    git clone https://github.com/datawire/hello-world-python
    

    2. We’re going to use Forge to automate and template-ize the deployment process. Run the Forge configuration process:

    forge setup
    

    3. The process of getting a service running on a Kubernetes cluster involves a number of steps: building a Docker image, pushing the image to a repository, instantiating a Kubernetes manifest to point to the image, and applying the manifest to the cluster. Forge automates this entire process of deployment:

    cd hello-world-python
    forge deploy
    

    4. Now, we’re going to test the service. Get the external IP address of Ambassador:

    kubectl get services ambassador
    NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    ambassador 10.11.250.208 35.190.189.139 80:31622/TCP 4d
    

    5. Access the service via Ambassador:

    curl 35.190.189.139/hello/
    Hello World (Python)! (up 0:03:13)
    

    Live Coding

    When developing, you want a fast feedback cycle. You’d like to make a code change, and immediately be able to build and test your code. The deployment process we just went through adds latency into the process, since building and deploying a container with your latest changes takes time. Yet, running a service in Kubernetes lets that service access other cloud resources (e.g., other services, databases, etc.).

    Telepresence lets you develop your service locally, while creating a bi-directional proxy to a remote Kubernetes cluster.

    1. You’d like for your development environment to be identical to your runtime environment. We’re going to do that by using the exact same Dockerfile we use for production to build a development image. Make sure you’re in the hello-world-python directory, and type:

    docker build . -t hello-world-dev
    

    2. Now, we can swap the existing hello-world service on Kubernetes for a version of the same service, running in a local container.

    telepresence --swap-deployment hello-world-stable --docker-run \
      --rm -it -v $(pwd):/service hello-world-dev:latest
    

    (Note that Forge has automatically appended a stable suffix to the deployment name to indicate that the service has been deployed with the stable profile specified in the service.yaml.)

    3. Telepresence invokes docker run to start the container. It also mounts the local filesystem containing the Python source tree into the container. Change the “Hello World” message in app.py to a different value:

    def root():
        return "Hello World via Telepresence! (up %s)\n" % elapsed()
    

    4. Now, if we test our service via Ambassador, we’ll see that we’re now routing to the modified version of our service.

    curl 35.190.189.139/hello/
    Hello World via Telepresence! (up 0:04:13)
    

    Want to Learn More?

    This article originally appeared in Datawire's Code Faster Guides. Check out the other tutorials in this series:

    Or try out the open source projects mentioned in this tutorial for yourself:

    If you have any questions, reach out to us on Gitter.
