CI/CD With Kubernetes and Helm

In this blog, I discuss the implementation of a CI/CD pipeline for microservices that run as containers and are managed by Kubernetes and Helm charts.

Note: A basic understanding of Docker, Kubernetes, Helm, and Jenkins is required. I will discuss the approach but will not go deep into implementation details. Please refer to the official documentation for a deeper understanding of these technologies.


How to Build True Pipelines With Jenkins and Maven

The essence of creating a pipeline is breaking up a single build process into smaller steps, each with its own responsibility. In this way, faster and more specific feedback can be returned. Let’s define a true pipeline as a pipeline that is strictly associated with a single revision within a version control system. This makes sense: ideally, we want the build server to provide full and accurate feedback for every single revision.

As new revisions can be committed at any time, it is natural that multiple pipelines actually get executed next to each other. If needed, it is even possible to allow concurrent executions of the same build step for different pipelines. However, some measures need to be taken in order to guarantee that all steps executed within one pipeline are actually based on the same revision.
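A minimal sketch of that idea in shell (the revision value and step commands are placeholders; in a real Jenkins job the revision would come from the SCM checkout, e.g. `git rev-parse HEAD`): the first step resolves the revision once and persists it, and every downstream step reads the pinned value instead of re-polling HEAD.

```shell
#!/bin/sh
set -e
# Sketch of a "true pipeline": bind every step to one VCS revision.
# REV defaults to a placeholder so the sketch stays self-contained.

WORKDIR="${WORKDIR:-./pipeline-run}"
mkdir -p "$WORKDIR"

# Step 1 (build): resolve the revision once and persist it for later steps.
REV="${REV:-abc1234}"
echo "revision=$REV" > "$WORKDIR/pipeline.properties"
echo "[build] compiled revision $REV"

# Steps 2..n: downstream steps re-read the pinned revision, so even if new
# commits land while the pipeline runs, feedback stays tied to one revision.
PINNED=$(sed -n 's/^revision=//p' "$WORKDIR/pipeline.properties")
echo "[integration-test] testing revision $PINNED"
echo "[deploy] promoting revision $PINNED"
```

In Jenkins this is typically done by passing the build number or revision as a parameter between chained jobs, rather than letting each job check out the latest commit.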


Onboarding a Repository Manager

So, you’ve decided to add a binary repository manager to your SDLC (Software Development Life Cycle) and CI process. Here are some key points and high-level aspects to consider before getting started.

Top five areas to think about prior to and during implementation:

  1. Infrastructure and Topology

  2. Content and Tools

  3. Utilization in the CI/CD Process

  4. Leveraging Added Value in your Dev Processes

  5. Management and Maintenance Procedures

1. Infrastructure and Topology

Here are the first two aspects for your foundation:

  1. System requirements of the machine that will run the new application. This includes resources, network, and storage.

  2. Physical location of the application in your organization.

Some questions you will probably want to ask yourself:

  • How many servers do you need?
  • Where will they be located?
  • How will they communicate with each other and the outside world? Are they within an isolated environment (Air-Gap)? Are they physically distributed (multi-site topology)?

2. Content and Tools

To get started, you’ll need to connect your dev stations and existing resources, including package managers and CI servers (such as Jenkins CI), to work with the application.

If you don’t have a CI integration, or your organizational procedures require you to configure package managers and build tools on individual workstations, some applications provide easy-to-use setup features, such as build-tool configuration snippets/templates. In most cases, you’ll need to consider scripting or other ways to automate the onboarding of workstations to work with the new tool. It is important to ensure a seamless integration with as little interference for your end users as possible.
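As a sketch of such onboarding automation (the `repo.example.com` URL and virtual-repo paths are placeholders, not any real product’s layout), a script might simply write the relevant package-manager config files:

```shell
#!/bin/sh
set -e
# Sketch: point common package managers at a repository manager.
# REPO_URL and the repo paths below are illustrative placeholders;
# adjust them for whatever repository manager you actually run.

REPO_URL="${REPO_URL:-https://repo.example.com}"
CONF_DIR="${CONF_DIR:-./repo-onboarding}"   # in real use: the user's $HOME
mkdir -p "$CONF_DIR/.pip"

# npm: route all installs through the manager's npm virtual repo
printf 'registry=%s/npm/npm-virtual/\n' "$REPO_URL" > "$CONF_DIR/.npmrc"

# pip: same idea for Python packages
printf '[global]\nindex-url = %s/pypi/pypi-virtual/simple\n' "$REPO_URL" \
  > "$CONF_DIR/.pip/pip.conf"

echo "workstation configured against $REPO_URL"
```

Running one script per workstation (or pushing it via your config-management tool) keeps the onboarding consistent and invisible to the end user.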

Many tools can integrate into your build ecosystem and give you visibility into your artifacts, your dependencies, and information about your build environment.

3. Utilization in the CI/CD Process

Introducing a binary repository manager into your CI/CD pipeline will most likely take the longest time to plan and implement. You’ll want to do this with as little interference with your end users and systems as possible. Here are some tips to keep in mind:

  • Choose your most agile team (i.e. the team that will be most open to changes).

  • Choose a project maintained by this team, preferably a new one, so that you will not need to modify an existing build/job but can instead begin integrating the repository manager from a clean slate.

  • Keep in mind that in most cases teams are not isolated: each dev team both creates content for and consumes content from other dev teams. This means you need to make sure that introducing a repository manager (or any system, for that matter) will not affect this bidirectional content “transfer.”

Here’s an example of a common process:

  1. Set up your new binary repository alongside your current system.

  2. Configure your builds to start pushing content, including builds and artifacts, to the binary repository while still maintaining your old system; i.e., retrieve from and deploy to your legacy system, but in parallel also deploy to the new binary repository manager.

  3. Start deploying and retrieving content directly to/from your binary repository. But at the same time continue to deploy your content to the legacy system, for the consumption of other teams/projects.

  4. Once you are ready and confident with your binary repository integration, start migrating your projects one by one or in bulk.

Onboarding Binary Repository

Note: The above process assumes that the newly introduced binary repository manager is universal and will act as your single source of truth for your binaries. Also, these are very high-level abstractions and will probably need to be adjusted for your specific use case.
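The dual-deploy phase (steps 2 and 3 above) can be sketched as follows; both repository URLs are placeholders, and `deploy_to` stands in for your real deploy command (e.g. an extra `mvn deploy` target):

```shell
#!/bin/sh
set -e
# Sketch of the dual-deploy phase: every build is published to the new
# repository manager while the legacy system keeps being fed, so other
# teams' consumption is unaffected during the migration.
# Both URLs are illustrative; deploy_to stands in for the real command.

LEGACY_REPO="https://legacy.example.com/releases"
NEW_REPO="https://repo.example.com/libs-release-local"

deploy_to() {
  # a real implementation would upload the artifact here
  echo "deploying $2 to $1"
}

dual_deploy() {
  deploy_to "$LEGACY_REPO" "$1"   # legacy first: still the source of truth
  deploy_to "$NEW_REPO" "$1"      # new manager builds history pre-cut-over
}

dual_deploy "${1:-my-app-1.0.0.jar}"
```

Once consumers have switched over, the legacy line is simply deleted, which is what makes this migration low-risk.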

4. Leveraging the Added Value

Part of the inherent added value of introducing a binary repository manager into your environment is an improved build process. For example, utilizing your build info and metadata will give you traceable builds and easily configured retention procedures.

Traceable builds prevent stages like compliance, security, and release management from delaying the build process. This allows you to separate the development process from post-development stages such as QA, compliance, and security, while still maintaining a bidirectional connection between all of these steps.

5. Management and Maintenance

In general, a binary repository manager will reduce the ongoing management of your entire build processes/pipeline.

Five maintenance domains you’ll need to plan for:

  1. Retention policies for your artifacts and build information (BOM manifests or other build-related meta-information).

  2. Storage limitations.

  3. Database management (in case your binary repository manager uses an external DB).

  4. Ongoing system monitoring.

  5. Upgrades and scheduled maintenance operations.

The five aspects we covered in this post should be considered when introducing a binary repository manager. They can also be applied to any other DevOps automation tool in your dev environment. This summary of suggestions is based on real use cases, personal interactions, and discussions I’ve had with many users throughout my years as a Solutions Engineer at JFrog.


DevOps Pipeline Managing PCF App and Resources

There are many tools available for doing DevOps with PCF. Automating the deployment of artifacts to PCF is very easy, and many articles have been published about it. So what different aspects is this article going to cover?

In my current project, I have observed that developers keep deploying applications to PCF with little control, so resources pile up and leave the DevOps team that manages the platform with a huge bill. After analyzing the issue, I found that teams build applications and deploy them to a PCF test environment, where they sit idle 80% of the time without being used. This is a huge waste of a test environment, as IaaS charges are based on memory/storage consumption.

To address this waste, I have come up with a DevOps process that not only deploys the application to PCF but also automates the provisioning and de-provisioning of the dependencies around it. This ensures that all resources are used optimally rather than sitting idle: create the resources when you need them and delete them as soon as your work is finished. The solution below will first create the Org/Space in PCF, then create the dependent backing services, deploy the application, test it with automation, clean up the resources after testing completes, and finally delete the Org/Space itself. No environment will sit idle and add to your bill.

For this pipeline, I have used Bamboo, but this can be implemented with any other pipeline, like Jenkins, GoCD, etc.


Prerequisites:

  1. Bamboo pipeline setup

  2. A Spring Boot application

  3. PCF CLI and Maven plugins for Bamboo

  4. Basic understanding of Bamboo pipeline

Stage 1 — Create Build and Analysis

This first stage will check out the code and integrate it with SonarQube. The SonarQube dashboard will show the application analysis results based on the widgets available.


  1. First, check out the code from Git.

  2. The next two steps are for copying the build and SonarQube scripts and retrieving the Git user and branch details for running the build.

  3. The third step is for the Maven build. I have disabled the Gradle script as my application is using the Maven pom for the build.

  4. The last step is to run the Sonar scan.

Stage 2 — Secure Code Scanning

This stage is pretty standard, and you can use one of the many tools available, like Checkmarx, Coverity, SourceClear, etc. These tools do static code scans from a security point of view and generate log reports.


Stage 3 — Deploy Artifact to Repository (Nexus)

This stage pushes the build artifact (jar or war file) to a repository manager like Nexus or JFrog Artifactory.


Stage 4 — Create the PCF Environment and Deploy the App

This is the most important part of this article. This stage creates the PCF environment and then deploys the app.

  1. Copy the manifest file from the source code and make any changes through scripts, as required.

  2. Log into PCF using the PCF CLI plugin or a bash script.

  3. Create the Org/Space for the application and backing services, then target the new org/space.

  4. Create the service instances for each backing service required using the cf CLI command create-service.

  5. Push the application downloaded from the repository to PCF. The manifest file takes care of the service binding before starting the app.

  6. Log out of PCF.
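The sequence above can be sketched as a shell script. The `cf` verbs (`login`, `create-org`, `create-space`, `target`, `create-service`, `push`, `logout`) are real PCF CLI commands, but the API endpoint, org/space names, service offerings/plans, and app name are all illustrative, and a dry-run stub replaces the CLI so the sequence is visible without a live foundation:

```shell
#!/bin/sh
set -e
# Dry-run stub: delete this function to run against a real foundation.
cf() { echo "cf $*"; }

API="https://api.sys.example.com"   # placeholder PCF API endpoint
ORG="ci-temp-org"                   # throwaway org created per pipeline run
SPACE="ci-temp-space"

provision_and_deploy() {
  cf login -a "$API" -u "${CF_USER:-ci-bot}" -p "${CF_PASS:-secret}"
  cf create-org "$ORG"
  cf create-space "$SPACE" -o "$ORG"
  cf target -o "$ORG" -s "$SPACE"

  # one create-service per backing service the app needs (names/plans vary)
  cf create-service p-mysql 100mb app-db
  cf create-service p-rabbitmq standard app-mq

  # push the artifact fetched from the repository; the manifest binds services
  cf push my-app -p target/my-app.jar -f manifest.yml

  cf logout
}

provision_and_deploy
```

In Bamboo, this whole script can be one task, or each `cf` call can become its own PCF CLI plugin task.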


All the above steps can be implemented in two ways.

  1. Write a shell script and keep it in the source code repository. This script can be imported into a Bamboo task and executed. 

  2. Bamboo has a PCF CLI plugin, so a task can be created for each command to log in, create services, deploy the app, and so on.

I have used a mix of both approaches to showcase them (the disabled tasks are for the second approach).

Now that we have configured and provisioned everything the application needs to run, it will be just as easy to de-provision it all when the job is completed.

Stage 5 — Run Automated Tests

This stage is also key. Unless testing is automated, you need the app up and running for manual testing, which leaves it sitting idle most of the time. So, automate as much of the testing as possible to reduce idle time.


Stage 6 — Delete the Resources and App

Once testing is completed, all the resources and the app can be de-provisioned.

Again, this can be done either with a script or with a separate task for each command.
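A cleanup sketch might look like this (org, space, app, and service names are illustrative, and a dry-run `cf` stub again replaces the real CLI). Note the order: the app is deleted first, which releases its service bindings, and deleting the org at the end removes the space and anything left inside it:

```shell
#!/bin/sh
set -e
# Dry-run stub: delete this function to run against a real foundation.
cf() { echo "cf $*"; }

ORG="ci-temp-org"
SPACE="ci-temp-space"

teardown() {
  cf target -o "$ORG" -s "$SPACE"
  cf delete my-app -f               # -f skips the interactive confirmation
  cf delete-service app-db -f
  cf delete-service app-mq -f
  # deleting the org removes the space and anything left inside it
  cf delete-org "$ORG" -f
  cf logout
}

teardown
```

Keeping this as its own pipeline (rather than a trailing task) also covers the UAT use case discussed below, where cleanup runs on its own schedule.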


If you look at PCF before and after running this pipeline, you won’t see any new space/app/services left behind, yet you still fulfilled your purpose of using PCF to deploy and test the app.

Miscellaneous Use Case

The above strategy works very well for a dev environment, where developers keep playing with a lot of resources. For other environments, this might not be the case, and we may need to follow a different strategy. Let me explain that as well.

Let’s take the example of a UAT environment, where developers push the app and users do the manual testing (now, don’t argue with me that this should also be automated; there is always one thing or another that users want to see and test for themselves before approving it to go to production). In that scenario, you need to keep the app up and running for a certain period, so you need a pipeline that runs just the resource-deletion step above to clean up. You can keep that pipeline aside to do this job in an automated way rather than doing it manually.


That’s all for this article. I hope you find it useful to minimize your bills.

Please do share your ideas on how to minimize the resource waste and the bill on the PCF platform. Share your views through comments.


Creating a Positive Developer Experience for Container-Based Applications: From Soup to Nuts

This article is featured in the new DZone Guide to Containers: Development and Management. Get your free copy for more insightful articles, industry statistics, and more!

Nearly every engineering team working on a web-based application realizes that they would benefit from a deployment and runtime platform, ideally some kind of self-service platform (much like a PaaS), but as the joke goes, the only requirement is that “it has to be built by them.” Open-source and commercial PaaSes are now coming of age, particularly with the emergence of Kubernetes as the de facto abstraction layer over low-level compute and network fabric. Commercially packaged versions of the Cloud Foundry Foundation’s Cloud Foundry distribution (such as Pivotal’s), alongside Red Hat’s OpenShift, are growing in popularity across the industry. However, not every team wants to work with such fully featured PaaSes, and you can now also find a variety of pluggable components that remove much of the pain of assembling your own bespoke platform. One topic that often gets overlooked when assembling your own platform, though, is the associated developer workflow and developer experience (DevEx) that should drive the selection of tooling. This article explores this topic in more detail.

Infrastructure, Platform, and Workflow

In a previous TheNewStack article, Kubernetes and PaaS: The Force of Developer Experience and Workflow, I introduced how the Datawire team often talks with customers about three high-level foundational concepts of modern software delivery — infrastructure, platform, and workflow — and how this impacts both technical platform decisions and the delivery of value to stakeholders.


If Kubernetes Is the Platform, What’s the Workflow?

One of the many advantages of deploying systems using Kubernetes is that it provides just enough platform features to abstract away most of the infrastructure, at least from the development team’s perspective. Ideally, you will have a dedicated platform team to manage the infrastructure and Kubernetes cluster, or you may use a fully managed offering like Google Container Engine (GKE), Azure Container Service (AKS), or the soon-to-be-released Amazon Elastic Container Service (EKS). Developers will still benefit from learning about the underlying infrastructure and how the platform interfaces with it (cultivating “mechanical sympathy”), but fundamentally, their interaction with the platform should largely be driven by self-service dashboards, tooling, and SDKs.

Self-service is not only about reducing the friction between an idea and delivered (deployed and observed) value. It’s also about allowing different parts of the organization to pick and choose their workflow and tooling, and ultimately make an informed trade-off between velocity and stability (or increase both). There are several key areas of the software delivery life cycle where this applies:

  • Structuring code and automating (container) build and deployment.

  • Local development, potentially against a remote cluster, due to local resource constraints or the need to interface with remote services.

  • Post-production testing and validation, such as shadowing traffic and canary testing (and the associated creation of metrics to support hypothesis testing).

Several tools and frameworks are emerging within this space, and they each offer various opinions and present trade-offs that you must consider.

Emerging Developer Workflow Frameworks

There is a lot of activity in the space of Kubernetes developer workflow tooling. Shahidh K. Muhammed recently wrote an excellent Medium post, Draft vs. Gitkube vs. Helm vs. Ksonnet vs. Metaparticle vs. Skaffold, which offered a comparison of tools that help developers build and deploy their apps on Kubernetes (although he did miss Forge!). Matt Farina has also written a very useful blog post for engineers looking to understand application artifacts, package management, and deployment options within this space: Kubernetes: Where Helm and Related Tools Sit.

Learning from these sources is essential, but it is also often worth looking a little bit deeper into your development process itself, and then selecting appropriate tooling. Some questions to explore include:


  • Do you have an opinion on code repository structure?

    • Using a monorepo can bring many benefits and challenges. Coordination of integration and testing across services is generally easier, and so is service dependency management. For example, developer workflow tooling such as Forge can automatically re-deploy dependent services when a code change is made to another related service. However, one of the challenges associated with using a monorepo is developing the workflow discipline to avoid code “merge hell” across service boundaries.

    • The multi-repo VCS option also has pros and cons. There can be clearer ownership, and it is often easier to initialize and orchestrate services for local running and debugging. However, ensuring code-level standardization (and understandability) across repos can be challenging, as can managing integration and coordinating deployment to environments. Consumer-driven contract tooling such as Pact and Spring Cloud Contract provide options for testing integration at the interface level, and frameworks like Helm (and Draft for a slicker developer experience) and Ksonnet can be used to manage service dependencies across your system.

  • Do you want to implement “guide rails” for your development teams?

    • Larger teams and enterprises often want to provide comprehensive guide rails for development teams; these constrain the workflow and toolset being used. Doing this has many advantages, such as reduced friction when moving engineers across projects and easier creation of integrated debug tooling and auditing. The key trade-off is the limited flexibility when establishing workflows for exceptional circumstances, such as when a project requires a custom build and deployment or different test tooling. Red Hat’s OpenShift and Pivotal Cloud Foundry offer PaaSes that are popular within many enterprise organizations.

    • Startups and small/medium enterprises (SMEs) may instead value team independence, where each team chooses the most appropriate workflow and developer tooling for them. My colleague, Rafael Schloming, has spoken about the associated benefits and challenges at QCon San Francisco: Patterns for Microservice Developer Workflows and Deployment. Teams embracing this approach often operate a Kubernetes cluster via a cloud vendor, such as Google’s GKE or Azure’s AKS, and utilize a combination of vendor services and open-source tooling.

    • A hybrid approach, such as that espoused by Netflix, is to provide a centralized platform team and approved/managed tooling, but allow any service team the freedom to implement their own workflow and associated tooling, which they will also have the responsibility of managing. My summary of Yunong Xiao’s QCon New York talk provides more insight into the ideas: The “Paved Road” PaaS for Microservices at Netflix. This hybrid approach is the style we favor at Datawire, and we are building open-source tooling to support it.



This article has posed several questions that you and your team must ask when adopting Kubernetes as your platform of choice. Kubernetes and container technology offer a fantastic opportunity, but to take full advantage of it, you will most likely need to change your workflow. Every software development organization needs a platform and an associated workflow and developer experience, but the key question is: how much of this do you want to build yourself? Any technical leader will benefit from understanding the value proposition of PaaS and the trade-offs and benefits of assembling key components of a platform yourself.



Pipeline Analytics and Insights

In a recent episode of Continuous Discussions (#c9d9), we were joined by expert panelists who discussed how to best collect and leverage the data that is generated from the software delivery pipeline.

The panel included: Juni Mukherjee, author of Continuous Delivery Pipeline – Where Does It Choke? and The Power Of Continuous Delivery In DevOps; Manuel Pais, DevOps and CD consultant; Mirco Hering, a passionate Agile and DevOps change agent trying to make software delivery a more humane place to be; Paula Thrasher, director of digital services at CSRA; Torsten Volk, managing research director for hybrid cloud at EMA; and Electric Cloud’s Sam Fell and Anders Wallgren.

Continue reading for some of their top takeaways!

It’s important to discuss metrics before jumping in and starting to measure, suggests Thrasher: “When I’m talking to teams about what to measure I have some standard skits, and I like to hear from the team about what problem they’re trying to solve before jumping in with, ‘You’ve got to measure all these things.’ You can go absolutely bonkers with numbers and create chaos without actually solving for things.”

Focus metrics around bottlenecks as well, advises Pais: “What I recommend is besides those core metrics that give you an overview of the state of the delivery of your software operations, also look at the value stream. Look at the bottlenecks and come up then with a metric, which acts as a proxy to evaluate if you are getting better, are you reducing that bottleneck, and working to remove it.”

Put metrics in place that will help you find defects earlier, says Wallgren: “Earlier is usually better for finding defects. The root cause part is pretty important because otherwise, you’re just playing whack-a-mole with the release side.”

The most important metric, according to Mukherjee? “‘Check-in to go live’ is how long it takes for a check-in to go live in the hands of a customer. That is pretty much the basis of everything. Check-in to go live is a subset of feature lead time, which is how long it takes to get a feature out. And then feature lead time’s a subset of concept to cash, which means how long it takes for a concept to actually make money. But again, going back, the heart of it is really check-in to go live.”

Everyone in the organization should be clued into what is being measured, per Volk: “Keep each other honest. Have the presales guys, the operations guys and the sales guys in the meetings where we discuss the gates. That is absolutely critical no matter what the metrics are.”

It’s important to evaluate the technical metrics, too, says Thrasher: “There’s cycle time metrics that can tell you something about how the team’s performing in the work, but there’s also some value in the tech metrics of time to test, test efficiency, cyclomatic complexity, etc. All of those things can be really valuable in teasing out a core problem that you’re trying to solve for.”

Visibility and understanding of the metrics that matter most to the business is key, says Pais: “Regardless of the dashboard, the most important thing is to have a shared physical view of the core metrics that everyone gets to look at it and discuss. Understanding the business metrics and having a visible dashboard sparks conversations and important discussions.”

While dashboards can provide good visibility into data, it’s important to interpret it accurately and keep the metrics that matter current, suggests Hering: “Because you have dashboards you basically have an executive view where you put the metrics that currently matter. On a day-to-day basis, you’re looking at different things, and things can change. You’re looking at what is the current bottleneck. But that previous bottleneck might well become the bottleneck again in a month’s time.”

Oftentimes the C-suite doesn’t immediately see the value in tracking technical metrics, so bake business metrics into your pipeline as well, advises Mukherjee: “I like to trend the business KPIs along with tech metrics. So, if somebody’s bothered about a number of downloads of a game, put it on the same canvas. Make this data available for everybody.”

With disparate teams working with disparate systems, it’s critical to keep metrics easily visible and explained, per Volk: “If there is no culture of DevOps, then everybody wants to guard themselves and wants to hide as much as they can. They won’t necessarily integrate their systems with their overall system because all the dirty laundry comes out. So, no exceptions allowed. If the metrics go to the dashboard there’s no explanation needed from somebody.”

Best practices on test metrics, from Hering: “Everyone is talking about automated tests and code coverage. As this increases, what happens to our defects that we find in integration testing or in production? We can actually see at what percentage, an increase doesn’t make an impact anymore. This allows you to make better economic decisions.”

It can be time-consuming to get all your systems to talk to one another, but ElectricFlow can help, per Wallgren: “One of the things we do in ElectricFlow with DevOps Insight is collect all of the data, not just what we see in the pipeline but talking to Jira, Git, etc., so that you can correlate and then hopefully find some causation.”

Watch the full episode.


What Can Microservices Bring to DevOps?

The prompt for this blog post was taken from DZone’s Bounty Board. Check it out to see how you can write articles to win prizes!


Historically, some organizations have shied away from periodic investments in architecture decoupling and modernization, and consequently are now stuck with monolithic product architectures that make releases unpredictable.

A monolith is like a “big ball of mud” that rolls down the hill and collects dirt on its way down. There are several disadvantages to monoliths:

  1. If and when this unwieldy, densely-packed mass reaches Production, the probability that something will blow up is higher than it would be if it were of a more manageable size.
  2. Given the sheer density of the monolith, accountability is spread across a large team, and sometimes across multiple teams. Shared accountability usually means no one is accountable.
  3. To make matters worse, this large mass has to be built, packaged, and tested every time there is a change, even if the change is minute and constitutes only a wee percentage of the entire monolith.

If a monolith faces a Production issue that requires an urgent fix, it may not give the teams any more comfort to know that the fixed version of that same monolith has started to slowly roll down that same perilous hill. If the first fix does not work, we would go back to square one, and 1-peat, 2-peat, 3-peat and repeat till the product performs as expected. Sounds pretty downhill, right? It is.

Enter Microservices and DevOps

There isn’t a one-liner that summarizes the intent of all the diverse tech companies across the globe. However, there is one universal truth.

Organizations all over the map try to release quality products frequently and predictably to enhance customer delight.

To that end, we will review three ways in which microservices benefit DevOps and how their coexistence improves the speed, quality, and predictability of product releases.

i. Microservices Architecture and DevOps Empower Decentralized Teams to Control Their Own Destiny

Centralization allows a single group of people to make decisions on the tech stack, tools, and standards that the rest of the organization has to adhere to. While this sounds like a streamlined approach to reduce duplication and overhead, it could lead to suffocation and low morale. To foster innovation, we must empower small teams to innovate at their own pace and to control their own destiny.

Microservices architecture is all about doing one thing well, and this is a paradigm shift from designing monoliths that are conglomerates of many “services” lumped into one. Strangulation of monoliths gives birth to smaller microservices and facilitates breaking down the large bulky team that developed the monolith into multiple smaller (and more nimble) teams.

Since microservices are organized around independent business capabilities, they could be developed with the help of different programming languages. The key concept here is to institute well-defined sharp interfaces, which determine interactions between these modular polyglot services. This frees up the smaller teams to choose their own standards and success metrics, and also honor organizational KPIs (key performance indicators). The interfaces determine how the services (and hence the teams) interact, thereby letting architecture decide organization, and not the other way round. This is also the essence of Conway’s Law:

“Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.”

Additionally, decentralization plays right into the core principles of DevOps and narrows the gaps between:

  1. UI, middle-tier and backend specialists, who tend to operate in their own silos, and
  2. Business, Product Management, Dev, QA, Release, Security, Operations, and what have you, who tend to operate in their own departments.

The bottom line is that both microservices architecture and DevOps favor the product model over the project model, whereby 5-7 member teams design, build, test, release, monitor, and maintain their applications in Dev/Test, Stage, and Production.

ii. Compartmentalized Services Can Be Released as Independently Deployable Artifacts That Are Not Tied at Their Hips for Success

Most organizations bet on designing and implementing resilient Continuous Delivery pipelines that help them:

  • Experiment with new features in a safe, secure, and auditable manner, and
  • Recover from failures quickly without disturbing their customers.

Continuous Delivery pipelines are fully automated solutions that promote independently deployable versioned artifacts from lower environments like Dev/Test and Stage to higher environments like Production. Pipelines have little to no tolerance for handoffs between silos since handoffs are non-value adding “waste”, as can be determined by a VSM (Value Stream Map).

If the system is assembled from multiple subsystems and can be released only as a whole then, cliché as it may sound, “A chain is only as strong as its weakest link” rings a resounding bell. On the contrary, microservices architecture is essentially a suite of services with clean boundaries. Typically, each microservice translates to an independently deployable (versioned) artifact that generates a linear pipeline anatomy. Each modular service is organized around a business capability that could (and should) be released independently of its neighbors to improve developer productivity and team velocity. These compartmentalized (or componentized) services are not tied at their hips for success, and they enable faster teams to move ahead of slower ones.

Although monolithic architectural patterns can be successful, the modularity offered by microservices enables releases to happen rapidly in incremental batches. DevOps favors small batch sizes too, and allows small teams to own the services and ship them as well.

Yet again, we see microservices and DevOps playing in harmony to help organizations scale.

iii. Microservices and DevOps Improve Test Cycle Time, and Hence Time2Market

Organizations like to outmaneuver the competition, or at the very least stay close on its heels. They try to build sustainable business models whereby they can reduce the shelf time of new ideas without burning out the team. It is possible to achieve this with cumbersome monoliths, but it is far less probable than with granular microservices. Here’s why.

A monolith, or a “big ball of mud” often leads to a “big ball of tests” that:

  • Were designed and implemented over a significant period of time, during which team members may have churned multiple times.
  • May not allow parallel execution of individual tests, since test cases, test data, and test config were not designed for independent and idempotent execution.
  • Inflate with every new release due to the addition of new test cases. Archival of dilapidated tests is often slow (or absent), even for features that are no longer active in Production; there can be fear and uncertainty over the test archival process, since there may not be a single person who fully understands the system architecture.

All tests, relevant or otherwise, execute with each change to the monolith to ensure that the densely packed mass has not inadvertently regressed. This exponentially inflates the organization’s test cycle time, which is the test execution time to decide whether to proceed with the release or abort. Test cycle time is directly proportional to FeatureLeadTime, which is the time required to release a feature to Production. That, in turn, blows up Time2Market for new business features, which makes the organization susceptible to disruption.

Microservices, being more granular, are released to Production via independently deployable versioned artifacts that are validated separately. Microservices interact with each other, to provide specific customer use cases, and thus require smart integration tests. During integration tests, some of these neighboring services are represented by their test doubles with sharply defined contracts. The test doubles and contracts should be treated like first-class citizens and should be owned by small teams who own, ship, and maintain the real services and the interfaces.
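The idea of test doubles with sharply defined contracts can be sketched in plain Java. Everything below (the `InventoryClient` interface, the fake, the never-negative stock rule) is a hypothetical illustration, not part of any real service: the point is that the contract is written once and can be verified against both the real client and its double, so the team that owns the service also owns the double and the interface.

```java
import java.util.HashMap;
import java.util.Map;

public class ContractSketch {

    /** The contract a neighboring "inventory" service exposes to its consumers. */
    interface InventoryClient {
        /** Units in stock for a SKU; by contract, never negative. */
        int unitsInStock(String sku);
    }

    /** Test double standing in for the real inventory microservice. */
    static class FakeInventoryClient implements InventoryClient {
        private final Map<String, Integer> stock = new HashMap<>();

        FakeInventoryClient withStock(String sku, int units) {
            stock.put(sku, units);
            return this;
        }

        @Override
        public int unitsInStock(String sku) {
            // unknown SKUs report zero stock, honoring the never-negative contract
            return stock.getOrDefault(sku, 0);
        }
    }

    /** Contract check that every implementation (real or fake) must pass. */
    static boolean honorsContract(InventoryClient client) {
        return client.unitsInStock("unknown-sku") >= 0;
    }

    public static void main(String[] args) {
        InventoryClient fake = new FakeInventoryClient().withStock("sku-1", 3);
        System.out.println(honorsContract(fake)); // true
        System.out.println(fake.unitsInStock("sku-1")); // 3
    }
}
```

In a real setup, the same `honorsContract` check would also run against the real client in integration tests, keeping the double from drifting away from the service it represents.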

The bottom line is that we should avoid a highly integrated validation environment where all subsystems are assembled into a system or several mini-systems, and then validated and released as a whole. DevOps advocates small and incremental batch sizes, and the microservices architecture helps us do just that – develop, test, and release services in a granular fashion. It avoids the composition (or assembly) pattern and executes selective test suites for isolated changes instead of taking an all-or-none approach.

In Summary

The key takeaways are that microservices and DevOps:

  1. Complement each other
  2. Fuel experimentation, and
  3. Accelerate adoption.

It would have been a shame if they were in conflict in some way, and if organizations were forced to choose one over the other. Moreover, microservices architecture and DevOps feed into simplification of the software delivery methodologies, like Continuous Delivery and Deployment, and help generate delivery pipelines that scale.

The prompt for this blog post was taken from DZone’s Bounty Board. Check it out to see how you can write articles to win prizes!

Original Link

Exploring DevOps: Easing the Transition

These articles can help you start the transition to DevOps by helping you plan and teaching you essential DevOps skills like CI/CD, pipeline workflows, and Jenkins. Plus, check out our free DZone resources for a deeper look at DevOps concepts, and job opportunities for those in the DevOps and testing field.

5 Trending DevOps Articles on DZone

  1. Twelve-Factor Multi-Cloud DevOps Pipeline, by Abhay Diwan. Learn the twelve steps of a build in a CI/CD DevOps pipeline that employs multiple cloud environments, from source code to monitoring.

  2. End-to-End Tutorial for Continuous Integration and Delivery by Dockerizing a Jenkins Pipeline, by Hüseyin Akdoğan. Learn how to implement container technologies with your Jenkins CI/CD workflows to make them easier to manage in this tutorial.

  3. Functional Testing for Container-Based Applications, by Chris Riley. An application’s infrastructure changes the way it is tested. Learn about how containers can be used to benefit testing, especially functional testing.

  4. QA Automation Pipeline – Learn How to Build Your Own, by Yuri Bushnev. Continuous delivery is driving the shift towards automation in software delivery; see how to set up an automated pipeline for your QA processes.

  5. DevOps: The Next Evolution – GitOps, by Danielle Safar. Is it time for DevOps to evolve again? In this post, we take a look at a potential evolution: GitOps. Come find out what it’s all about.

You can get in on this action by contributing your own knowledge to DZone! Check out our new Bounty Board, where you can claim writing prompts to win prizes! 

DevOps Around the Web

  1. Chef Extends OpsWorks Capabilities in AWS, Helen Beal, December 6, 2017. See how Chef can help you manage your application lifecycle with continuous automation.

  2. GitLab Tells Us About Auto DevOps, Richard Harris, November 15, 2017. GitLab can help you improve your applications’ security with better use of automation in your workflow.

  3. DevOps Chat: Chef Habitat Project Continues to Mature, Alan Shimel, December 7, 2017. See what’s in the future of the ambitious Chef Habitat project for enabling a cohesive DevOps process.

Dive Deeper Into DevOps

  1. DZone’s Guide to Automated Testing: Improving Application Speed and Quality: a free ebook download.

  2. Getting Started With Kubernetes: DZone’s updated Refcard on the open-source orchestration system for managing containerized applications across multiple hosts.

Who’s Hiring?

Here you can find a few opportunities from our Jobs community. See if any match your skills and apply online today!

Senior Software Engineer
Location: Remote
Experience: Master’s degree or equivalent in Computer Science, IT, or a closely related field and 2 years of experience as a Software Engineer, Programmer Analyst, or in a related position.

Software Engineer – Test
Location: Hamburg, Germany
Experience: B.S. in Computer Science, related degree, or equivalent experience. 3+ years experience in coding, DevOps, systems engineering, or test automation.

Original Link

What Is CI/CD?

The adoption of CI/CD has changed how developers and testers ship software. This is the first in a series of blog posts about this transition; the series will provide insights into the different tools and process changes that can help developers be more successful with CI/CD.


First it was Waterfall, next it was Agile, and now it's DevOps. This is how modern developers approach building great products. With the rise of DevOps have come the new methods of Continuous Integration, Continuous Delivery (CI/CD), and Continuous Deployment. Conventional software development and delivery methods are rapidly becoming obsolete. Historically, in the agile age, most companies would deploy/ship software in monthly, quarterly, bi-annual, or even annual releases (remember those days?). Now, however, in the DevOps era, weekly, daily, or even multiple releases a day are the norm. This is especially true as SaaS takes over the world, since you can easily update applications on the fly without forcing customers to download new components. Oftentimes, they won't even realize things are changing.

Development teams have adapted to the shortened delivery cycles by embracing automation across their software delivery pipelines. Most teams have automated processes to check in code and deploy to new environments. This has been coupled with a focus on automating the testing process along the way as well, but we'll cover that in a future article. Today, we'll cover CI/CD/CD: what these practices are and how modern software companies are using tools to automate the process of shipping new code.

Continuous integration focuses on blending the work products of individual developers together into a repository. Often, this is done several times each day, and the primary purpose is to enable early detection of integration bugs, which should eventually result in tighter cohesion and more development collaboration. The aim of continuous delivery is to minimize the friction points that are inherent in the deployment or release processes. Typically, the implementation involves automating each of the steps for build deployments such that a safe code release can be done—ideally—at any moment in time. Continuous deployment is a higher degree of automation, in which a build is deployed automatically whenever a change passes the full suite of automated tests.

Each of these stages is part of a delivery pipeline. In their book, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Humble and Farley explain that “Every change to your software goes through a complex process on its way to being released. That process involves building the software, followed by the progress of these builds through multiple stages of testing and deployment. This, in turn, requires collaboration between many individuals, and perhaps several teams. The deployment pipeline models this process, and its incarnation in a continuous integration and release management tool is what allows you to see and control the progress of each change as it moves from version control through various sets of tests and deployments to release to users.”

A basic deployment pipeline.

Continuous Integration (CI)

With continuous integration, developers frequently integrate their code into a main branch of a common repository. Rather than building features in isolation and submitting each of them at the end of the cycle, a developer will strive to contribute software work products to the repository several times on any given day.

The big idea here is to reduce integration costs by having developers do it sooner and more frequently. In practice, a developer will often discover boundary conflicts between new and existing code at the time of integration. If it’s done early and often, the expectation is that such conflict resolutions will be easier and less costly to perform.

Of course, there are tradeoffs. This process change does not provide any additional quality assurances by itself. Indeed, many organizations find that such integration becomes more costly when they rely on manual procedures to ensure that new code doesn’t introduce new bugs and doesn’t break existing code. To reduce friction during integration tasks, continuous integration relies on test suites and automated test execution. It’s important, however, to realize that automated testing is quite different from continuous testing, as we’ll see near the end of this article.

The goal of CI is to refine integration into a simple, easily-repeatable everyday development task that will serve to reduce overall build costs and reveal defects early in the cycle. Success in CI will depend on changes to the culture of the development team so that there is incentive for readiness, frequent and iterative builds, and eagerness to deal with bugs when they are found much earlier.

Continuous Delivery (CD)

Continuous delivery is actually an extension of CI, in which the software delivery process is automated further to enable easy and confident deployments into production—at any time. A mature continuous delivery process exhibits a codebase that is always deployable—on the spot. With CD, software release becomes a routine event without emotion or urgency. Teams proceed with daily development tasks in the confidence that they can build a production-grade release—any old time they please—without elaborate orchestration or special late-game testing.

CD depends centrally on a deployment pipeline by which the team automates the testing and deployment processes. This pipeline is an automated system that executes a progressive set of test suites against the build. CD is highly automatable—and in some cloud-computing environments—easily configurable.

In each segment in the pipeline, the build may fail a critical test and alert the team. Otherwise, it continues on to the next test suite, and successive test passes will result in automatic promotion to the next segment in the pipeline. The last segment in the pipeline will deploy the build to a production-equivalent environment. This is a comprehensive activity, since the build, the deployment, and the environment are all exercised and tested together. The result is a build that is deployable and verifiable in an actual production environment.
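As an illustrative sketch (not any real CI tool's API), the promote-or-fail behavior described above can be modeled as an ordered map of segment names to test suites: the first failing segment halts promotion and would alert the team, while a build that passes every segment is deployed at the end.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

/** Toy model of a deployment pipeline: each segment runs a test suite;
 *  failure stops promotion, success promotes the build onward. */
public class PipelineSketch {

    public static String run(Map<String, Supplier<Boolean>> segments) {
        for (Map.Entry<String, Supplier<Boolean>> segment : segments.entrySet()) {
            if (!segment.getValue().get()) {
                // a critical test failed: halt and alert the team
                return "FAILED at " + segment.getKey();
            }
            // suite passed: build is automatically promoted to the next segment
        }
        // last segment passed: deploy to a production-equivalent environment
        return "DEPLOYED";
    }

    public static void main(String[] args) {
        Map<String, Supplier<Boolean>> segments = new LinkedHashMap<>();
        segments.put("unit", () -> true);
        segments.put("integration", () -> true);
        segments.put("acceptance", () -> false); // simulate a failing acceptance suite
        System.out.println(run(segments)); // FAILED at acceptance
    }
}
```

Real pipelines add parallelism, retries, and manual approval gates, but the core control flow is this simple: no handoffs, just automated promotion or a fast, visible failure.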

A solid exhibit of a modern CI/CD pipeline is available on AWS. Amazon is one of the cloud-computing providers that offers an impressive CI/CD pipeline environment and provides a walk-through procedure in which you can choose from among its many development resources and link them together in a pipeline that is readily configurable and easily monitored.

Many consider continuous delivery to be attractive primarily because it automates all of the steps from submitting code into the repository through to releasing fully-tested, properly-functional builds that are ready for production. This is elaborate automation of the build and testing processes, but decisions about how and what should be released remain a manual process. Continuous deployment can improve and automate those activities.

Continuous Deployment (CD)

Continuous deployment extends continuous delivery so that the software build will automatically deploy if it passes all tests. In such a process, there is no need for a person to decide when and what goes into production. The last step in a CI/CD system will automatically deploy whatever build components/packages successfully exit the delivery pipeline. Such automatic deployments can be configured to quickly distribute components, features, and fixes to customers, and to provide clarity on precisely what is presently in production.

Organizations that employ continuous deployment will likely benefit from very quick user feedback on new deployments. Features are quickly delivered to users, and any defects that become evident can be handled promptly. Quick user feedback on unhelpful or misunderstood features will help the team refocus and avoid devoting more effort to a functional area that is unlikely to produce a good return on that investment.

With the movement to DevOps, there’s also been a rise of new automation tools to help with the CI/CD pipeline. These tools typically integrate with various developer tools, including code repository systems like GitHub and bug tracking systems like Jira. In addition, as SaaS has become a more popular delivery model, many of these tools run in the cloud, where modern developers are already running their apps, on platforms like GCP and AWS.

The most popular automation tool is Jenkins (formerly Hudson), an open source project supported by hundreds of contributors as well as a commercial company, CloudBees. CloudBees even hired the founder of Jenkins and offers several different Jenkins training programs and product add-ons. Beyond open source, there are also several more modern, commercial software products available, including CircleCI, Codeship, and Shippable. These products have different advantages and disadvantages relative to one another. To really understand them, I’d encourage trying each of them within your developer workflow to see how they behave in your environment: how do they work with your tools, your cloud platform, your container system, and so on?

At mabl, we’re building on Google Cloud Platform, so we were looking for a product that was compatible with, and preferably integrated with, GCP.  We took a look at CircleCI, Codeship, and Shippable and below is a short table highlighting some details for each:

We ultimately landed on Codeship and couldn’t be happier with our decision and the support we get from the Codeship team.  

What’s Next?

Once you’ve deployed a modern CI/CD pipeline, you’ll likely realize that the existing tools and processes in your developer workflow also need to be modernized. One area that you’ll very quickly realize needs a lot of attention is testing. If your deployment frequency is daily, or even several times a day, your current testing practices, which probably take hours or run overnight, won’t keep up. This is a problem that mabl is solving using machine learning.

Original Link

Trust Your Pipeline: Automatically Testing an End-to-End Java Application

Disclaimer: This post is drawn from a presentation given by me and Bruno Souza at the JavaOne Conference 2017 in San Francisco.

Although extremely important, we are not talking here about unit testing and integration testing, assuming you already know what they are and already apply them, as a developer concerned about the quality of your code.

Test Automation Pyramid and Agile Testing Quadrants

Creating automated tests is the easiest and quickest way to exercise your application and ensure that, if a bug appears, you find it before your client does.

Currently, we cannot talk about testing without talking about test automation. Test automation gives quick feedback to the team and keeps regression tests running continuously.

The approach we can use to gain speed in automating and executing tests, and in getting rapid feedback from them, is the Test Pyramid, a guide for applying automation at (at least) three levels: unit, services, and UI (user interface).

The services layer is divided into three parts: component tests, integration tests, and API tests.

The unit testing layer is the most important layer of our application, because we create tests for the code we are developing and guarantee it works as expected, even after future maintenance (the use of TDD — Test Driven Development — is recommended). In this layer, we can apply code coverage analysis and static analysis practices to intensify rapid feedback on any defect that may appear.
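As a minimal sketch of the unit layer, here is a plain-Java test for a hypothetical applyDiscount method, using explicit assertions in place of a JUnit runner (the method and its rules are invented for illustration):

```java
/** Unit-level sketch: one small piece of code exercised in isolation,
 *  giving the fastest feedback in the pyramid. */
public class DiscountUnitSketch {

    // hypothetical unit under test: applies a percentage discount to a price
    static double applyDiscount(double price, double percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range");
        }
        return price * (1 - percent / 100.0);
    }

    public static void main(String[] args) {
        // happy path: 10% off 100.0 is 90.0 (compared with a tolerance)
        if (Math.abs(applyDiscount(100.0, 10.0) - 90.0) > 1e-9) throw new AssertionError();
        // boundary: 0% discount leaves the price unchanged
        if (Math.abs(applyDiscount(50.0, 0.0) - 50.0) > 1e-9) throw new AssertionError();
        System.out.println("unit tests passed");
    }
}
```

In a real project these checks would live in a JUnit class and run on every build, which is exactly what makes this layer cheap to execute continuously.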

The service layer (component, integration, and API) is extremely important nowadays, with a large focus on API testing. Here we apply mocks, stubs, and fakes to speed up the execution of microservice tests. We also need separate test environment servers, which should be as close as possible to a production environment.

The UI (user interface) layer is also important, especially from a mobile testing perspective: when a customer encounters an error in an app, they usually uninstall it. The most important techniques here are automated UI testing and Visual Regression Testing [3]. On the web side, we need to execute the same test in different browsers (IE, Chrome, Firefox, Safari). We need the same for mobile test automation: iOS and Android devices, to ensure the compatibility of our app on both platforms.

The Agile Testing Quadrants, created by Brian Marick and widely disseminated by Lisa Crispin and Janet Gregory in their book “Agile Testing: A Practical Guide for Testers and Agile Teams,” describe practices that can be applied during test-focused development activity. The quadrants are a guide: it is not necessary to perform all the practices in them; you can choose one or more depending on your context.

Continuous Delivery and Testing Pipeline

There is no way to talk about DevOps without talking about Continuous Delivery (CD); without it, we could not even talk about DevOps culture. One of the foundations of Continuous Delivery is Continuous Testing, in which we must test at every stage of our development pipeline, with an initial recommendation to apply unit tests and automated acceptance tests.

Continuous Delivery enables joint roles between Development, QA, and Operations. Examples of test-focused collaboration between these roles are:

  • Development + QA: Build, Deploy and Test automation at various levels

  • QA + Operations: Test Automation and Continuous Feedback through test executions, as well as sanity test executions

  • Operations + Development: Automated provisioning of machines/containers required for testing at any level.

Now we continue with the pipeline focused on tests, which can be applied in whole or in parts, being:

  • Unit: Unit Tests

  • Integration: Integration Tests

    • We can create mocks/fakes/stubs to remove the dependencies and accelerate the test executions.

  • Service: Test on service layer (SOAP, REST)

    • Smoke: a small execution subset to guarantee that all APIs are working (at least returning a status other than HTTP 404 or HTTP 500).

    • Contract: a collection of agreements (tests) between a client (consumer) and an API.

    • Functional: tests that want to guarantee the operation against different business rules (happy path, sad path, and alternative flows).

    • Acceptance: evaluate the system’s compliance with the business requirements and assess whether it is acceptable for delivery.

  • Acceptance: acceptance tests (those that focus on end-user usage, commonly called E2E).

    • Smoke: main test suite that will guarantee your business running.

  • Functional: tests that will ensure operation against different business rules (happy path, exception flows, and alternative flows).

    • Smoke: main test suites of happy path, sad path and alternative flows.
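The smoke idea above (every endpoint must answer with a status other than HTTP 404/500) can be sketched with only the JDK: a throwaway com.sun.net.httpserver.HttpServer stands in for the API, and the /person endpoint name is illustrative, not from any real service.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Sketch of an API smoke suite: each endpoint must answer with
 *  a status other than 404/500, or the suite fails fast. */
public class SmokeSketch {

    public static boolean smoke(HttpClient client, String base, String... endpoints) throws Exception {
        for (String endpoint : endpoints) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(base + endpoint)).build();
            int status = client.send(request, HttpResponse.BodyHandlers.discarding()).statusCode();
            if (status == 404 || status == 500) {
                return false; // endpoint is broken: fail the smoke suite
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        // throwaway server standing in for the real API (port 0 = pick a free port)
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/person", exchange -> {
            exchange.sendResponseHeaders(200, -1); // 200 with empty body
            exchange.close();
        });
        server.start();

        String base = "http://localhost:" + server.getAddress().getPort();
        System.out.println(smoke(HttpClient.newHttpClient(), base, "/person")); // true
        server.stop(0);
    }
}
```

A real smoke suite would list the handful of endpoints that define "the API is up" and run first in the pipeline, so deeper functional suites never execute against a dead service.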

From Integration to Functional testing (the end of the pipeline), we also have to worry about non-functional tests: performance, load, security, etc. It is extremely necessary to create an automated test architecture that supports continuous, automated execution with the least maintenance possible, including:

  • Screenshots to document the execution of each test, or to provide evidence when an error occurs.

  • Logs to analyze any error that occurred.

  • Reports to provide feedback about the test execution.

  • Data management for sensitive data used in test scripts.

  • Parameterization of commonly changed data like URLs, endpoints, etc.
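One lightweight way to implement the parameterization point is to read commonly changed values from JVM system properties with a sensible default; the property name test.baseUrl and the default URL below are assumptions for illustration only.

```java
/** Sketch: keep frequently changed values like base URLs out of test code
 *  by reading them from system properties with a default. */
public class TestConfig {

    /** Override on the command line with -Dtest.baseUrl=https://staging.example.com */
    public static String baseUrl() {
        return System.getProperty("test.baseUrl", "http://localhost:8888");
    }

    public static void main(String[] args) {
        System.out.println(baseUrl());
    }
}
```

The same pattern covers endpoints, credentials aliases, and timeouts, so the pipeline can run identical test code against Dev/Test, Stage, or a mock server just by changing JVM flags.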

Toolbox for Automated API, Web, and Mobile Tests

In order to automate an API, a web page, and a mobile front-end, there are open source tools that will help you to quickly and easily build and run tests.

Rest Assured

A tool for creating automated tests for an API (REST, with JSON and XML payloads). Rest Assured uses an easy, readable DSL based on Gherkin (Given-When-Then).

In the example below, the API is exercised through a local endpoint (simulating a production environment); a mock endpoint can also be created with Java Spark. Creating a mock API that returns fixed data can give even greater speed in executing and validating the different aspects that secure the tests for microservices, especially contract tests.

public void getPersonById() {
    // create a person and extract the generated id
    int personID =
        given().
            contentType(ContentType.JSON).
            body(new Person("Elias Nogueira", "RS", "Automate tests")).
        when().
            post("person").
        then().
            extract().
            path("id");

    // fetch the person by id and verify the returned payload
    when().
        get("person/{id}", personID).
    then().
        contentType("application/json").and().
        body("name", equalTo("Elias Nogueira")).and().
        body("address", equalTo("RS")).and().
        body("hobbies", equalTo("Automate tests")).and().
        statusCode(200);
}

Selenium WebDriver

Selenium is the best-known tool for automation of a web page. It also has an easy DSL and is based on four steps for automation:

  • Navigation: actions like access a web page, forward, back, and refresh.

  • Interrogation: ways to find web elements like by id, name, cssSelectors, and other locators.

  • Manipulation: a way to interact with an element like click, fill (sendKeys), clear, and get text.

  • Synchronization: ways to wait for some asynchronous actions, like an element that appears after some seconds.

It is a W3C standard and performs actions in web browsers, simulating a real user. For this to be possible, it is necessary to use the browser drivers.

public void addPersonSuccessfully() {
    System.setProperty("webdriver.chrome.driver", "/Users/eliasnogueira/Selenium/chromedriver");
    WebDriver driver = new ChromeDriver();
    WebDriverWait wait = new WebDriverWait(driver, 10);

    driver.get("http://localhost:8888/javaone");

    // open the "add person" form
    By addButton = By.id("add");
    wait.until(ExpectedConditions.presenceOfElementLocated(addButton));
    driver.findElement(addButton).click();

    // fill in and submit the form
    wait.until(ExpectedConditions.presenceOfElementLocated(By.id("back")));
    driver.findElement(By.id("name")).sendKeys("Daenerys Targaryen");
    driver.findElement(By.id("address")).sendKeys("Dragonstone");
    driver.findElement(By.cssSelector("input[ng-model='post.hobbies']")).sendKeys("Break Chains");
    driver.findElement(By.cssSelector(".w3-btn.w3-teal")).click();

    // verify the new person shows up on the page
    wait.until(ExpectedConditions.presenceOfElementLocated(By.id("address")));
    String dataOnPage = driver.getPageSource();
    assertTrue(dataOnPage.contains("Daenerys Targaryen"));
    assertTrue(dataOnPage.contains("Dragonstone"));
    assertTrue(dataOnPage.contains("Break Chains"));

    driver.quit();
}


Appium

Appium is an open source tool with the same Selenium DSL, but for automating native or hybrid mobile apps on iOS or Android.

It supports execution on emulators, real device or test lab (cloud), and, in conjunction with Selenium Grid, gives the possibility of creating an internal device grid.

public void addPerson_Successfully() throws MalformedURLException {
    File app = new File("src/main/resources/app/workshop.apk");

    DesiredCapabilities capabilities = new DesiredCapabilities();
    capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, MobilePlatform.ANDROID);
    capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "Android Emulator");
    capabilities.setCapability(MobileCapabilityType.APP, app.getAbsolutePath());
    capabilities.setCapability(AndroidMobileCapabilityType.APP_PACKAGE, "com.eliasnogueira.workshop");
    capabilities.setCapability(AndroidMobileCapabilityType.APP_ACTIVITY, "activities.ListActivity");

    // default local Appium server endpoint
    AndroidDriver<MobileElement> driver =
        new AndroidDriver<>(new URL("http://127.0.0.1:4723/wd/hub"), capabilities);
    WebDriverWait wait = new WebDriverWait(driver, 20);

    // add a new person through the app
    wait.until(ExpectedConditions.presenceOfElementLocated(By.id("com.eliasnogueira.workshop:id/fab")));
    driver.findElement(By.id("com.eliasnogueira.workshop:id/fab")).click();
    driver.findElement(By.id("com.eliasnogueira.workshop:id/txt_nome")).sendKeys("Jon Snow");
    driver.findElement(By.id("com.eliasnogueira.workshop:id/txt_endereco")).sendKeys("The wall");
    driver.findElement(By.id("com.eliasnogueira.workshop:id/txt_hobbies")).sendKeys("Know nothing");
    driver.findElement(By.id("com.eliasnogueira.workshop:id/button")).click();

    // search for the person and verify it was added
    wait.until(ExpectedConditions.presenceOfElementLocated(By.id("android:id/search_button")));
    driver.findElement(By.id("android:id/search_button")).click();
    driver.findElement(By.id("android:id/search_src_text")).sendKeys("Jon Snow");
    String texto = driver.findElement(By.id("android:id/text1")).getText();
    assertEquals("Jon Snow", texto);

    driver.quit();
}

Applied Pipeline GitHub Repo

The code for all projects can be found in this repository. There are test suites that handle the internal sequencing of the pipeline at each test level. For example, for API tests there are three sequential suites: smoke, functional against the mock, and functional against production.

Original Link