CI/CD With Kubernetes and Helm

In this blog, I will discuss the implementation of a CI/CD pipeline for microservices that run as containers and are managed by Kubernetes and Helm charts.

Note: Basic understanding of Docker, Kubernetes, Helm, and Jenkins is required. I will discuss the approach but will not go deep into its implementation. Please refer to the original documentation for a deeper understanding of these technologies.
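
As a rough illustration of the approach (not the author's actual pipeline), a declarative Jenkinsfile for such a setup might build and push a container image and then run helm upgrade against the cluster; the registry, image name, chart path, and credentials ID below are made up for the example:

pipeline {
    agent any
    environment {
        IMAGE_TAG = "${env.BUILD_NUMBER}"   // tag images with the build number
    }
    stages {
        stage('Build and Push Image') {
            steps {
                sh "docker build -t registry.example.com/demo-service:${env.IMAGE_TAG} ."
                sh "docker push registry.example.com/demo-service:${env.IMAGE_TAG}"
            }
        }
        stage('Deploy with Helm') {
            steps {
                // kubeconfig stored as a file credential in Jenkins
                withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')]) {
                    sh "helm upgrade --install demo-service ./charts/demo-service --set image.tag=${env.IMAGE_TAG}"
                }
            }
        }
    }
}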

Original Link

Intro to Jenkins Pipelines and Publishing Over SSH

In many projects, things that seem very small turn out to be decisive factors in whether to continue on the current path or find a better one. From simple text editors to tools used over long periods, we all have our own preferences for each tool at hand. Reconciling these preferences sometimes becomes a task in itself, and while this happens in any kind of group work, there are also other factors that shape the path forward.

This time, we came across an issue that made us think about how to proceed. Our project is being developed as an integration of many standalone microservices. This led us to use different resource files for our remote development and production environments. After considering different options, we finally decided to deploy these files over SSH (from our build server to where the application server is). Since we are using Jenkins for CI/CD, we had to use an SSH plugin.
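
Assuming the Publish Over SSH plugin (the article does not name the exact plugin it settled on), a pipeline step that copies resource files to the application server might look roughly like this; the server name, file patterns, and restart command are placeholders:

node {
    stage('Deploy resources over SSH') {
        // Minimal sketch using the Publish Over SSH plugin's pipeline step.
        // 'app-server' must match a server configured under
        // Manage Jenkins -> Configure System -> Publish over SSH.
        sshPublisher(publishers: [
            sshPublisherDesc(
                configName: 'app-server',
                transfers: [
                    sshTransfer(
                        sourceFiles: 'config/production/*.yml',
                        remoteDirectory: '/opt/app/config',
                        execCommand: 'systemctl restart my-app'
                    )
                ],
                verbose: true
            )
        ])
    }
}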

Original Link

Using Jenkins-X UpdateBot

Jenkins-X UpdateBot is a tool for automating the update of dependency versions within project source code. Say you’re building two projects, A and B, such that B uses A as a dependency. The release process for A could use UpdateBot to update the source for project B to use a new version of A. With UpdateBot this would result in a pull request so that the change could be tested and reviewed or automatically merged.

Within pipelines on the Jenkins-X platform, UpdateBot is automatically present and invoked by updatebot commands in Jenkinsfiles. But UpdateBot can also be used outside of Jenkins-X and running it alone can help to understand what it can do and test out version replacements. So let’s try it out with a simple tester project.

Original Link

The DevOps Road Map — A Guide for Programmers

DevOps is really hot at the moment, and many of the friends, colleagues, and senior developers I know are working hard to become DevOps engineers and position themselves as DevOps champions in their organizations.

While I truly acknowledge the benefits of DevOps, which is directly linked to improved software development and deployment, from my limited experience I can say that it’s not an easy job. It’s very difficult to choose the right path in the middle of so many tools and practices.

Original Link

How to Deploy a Jenkins Cluster on AWS in a Fully Automated CI/CD Platform

A few months ago, I gave a talk at Nexus User Conference 2018 on how to build a fully automated CI/CD platform on AWS using Terraform, Packer, and Ansible.

The session illustrated how concepts like infrastructure as code, immutable infrastructure, serverless, cluster discovery, etc. can be used to build a highly available and cost-effective pipeline.

Original Link

The Payara Platform’s Journey to Jakarta EE 8 Compatibility

At Java One last year, Oracle announced that they had made the monumental decision to open source Java EE and move it to the Eclipse Foundation. As Oracle Code One (the successor conference to Java One) comes around, I thought it would be good to reflect on where we are and how far we still have to go.

For the Payara team, this has been a very interesting year. Payara has become a strategic member of the Eclipse Foundation in order to drive both MicroProfile and Jakarta EE forward. I personally have become a director of the Eclipse Foundation, and many of our team members have become committers and project leads on many of the Jakarta EE projects. As a team, we have had to rapidly become familiar with the Eclipse way of working in a multi-vendor collaborative organization, with committees, paperwork, global conference calls at odd hours, and tracking and responding to many different mailing lists. These are the necessary evils that come with organizational collaboration, and at times, it can be frustrating and seem a little fruitless. On the flip side, I have also found all the organizations involved open and welcoming, and everyone involved is acting in good faith in the best interests of developers and end users who want a free, open, and standards-based platform on which to develop business applications. We have all been feeling our way in this process, but now, things are gathering pace more rapidly.

Original Link

Adding a GitHub Webhook in Your Jenkins Pipeline

Have you ever tried adding a GitHub webhook in Jenkins? In this blog, I will be demonstrating the easiest way to add a webhook to your pipeline.

First, what is a webhook? The concept is simple: a webhook is an HTTP callback, an HTTP POST that fires when something happens, a simple event notification delivered over HTTP.
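
On the Jenkins side, a declarative pipeline can react to such a POST if the GitHub plugin is installed (it receives the payload on the /github-webhook/ endpoint). A minimal sketch, with a made-up repository URL:

pipeline {
    agent any
    triggers {
        githubPush()   // fires when GitHub delivers a push-event webhook
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/example/demo.git', branch: 'master'
            }
        }
        stage('Build') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}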

Original Link

Structure of a Jenkins Pipeline

In this blog, I will be discussing the basic structure of a Jenkins Pipeline, what a pipeline consists of, and the components used in the pipeline.

You can see the basic overview of a pipeline here, on GitHub.
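
For reference, here is a minimal declarative pipeline illustrating the building blocks discussed in the post: an agent, a sequence of stages made of steps, and a post section. The stage names and echo steps are just placeholders.

pipeline {
    agent any                          // where the pipeline runs
    stages {
        stage('Build') {
            steps {
                echo 'Compiling...'    // build steps go here
            }
        }
        stage('Test') {
            steps {
                echo 'Running tests...'
            }
        }
    }
    post {
        always {
            echo 'Pipeline finished.'  // runs regardless of the result
        }
    }
}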

Original Link

Conferences and Tutorials — Fall 2018 DevOps Season

Hey, DevOps fans! This month, I want to get you ready for fall. That means learning about the best upcoming conferences and hot topics that will help you in your career. Let’s start with the latest hit articles in the DZone DevOps Zone.


Hot DevOps Articles on DZone

  1. Ansible: An Effective IT Automation Tool, Anmol Nagpal. Learn about Ansible, a tool for automating application deployments, configuration management, and more in a DevOps environment.

    Original Link

Jenkins in a Nutshell

In many projects, the product development workflow has three main concerns: building, testing, and deployment. Each change to the code means something could accidentally go wrong, so, in order to prevent this from happening, developers adopt many strategies to reduce incidents and bugs. Jenkins and other continuous integration (CI) tools are used together with version control software (such as Git) to test and quickly evaluate the updated code.

In this article, we will talk about Jenkins, applicable scenarios, and alternatives to automated testing, deployment, and delivering solutions.

Original Link

Hot Shot 011 – Jenkins on AWS (Part 1) [Podcast]

Hello there once again, welcome to another hot shot. My name is Peter Pilgrim.

Welcome to another episode: this is Hot Shot 11, Jenkins on AWS, part one. I am a DevOps specialist, a platform engineer, and a Java Champion.

Original Link

Running Jenkins Server With Configuration-as-Code

Some days ago, I came across a newly created Jenkins plugin called Configuration as Code (JcasC). This plugin allows you to define Jenkins configuration in a very popular format: YAML notation. It is interesting that such a plugin has not been created before, but better late than never. Of course, we could have used some other Jenkins plugins, like the Job DSL Plugin, but it is based on the Groovy language.

If you have any experience with Jenkins, you probably know how many plugins and other configuration settings it requires to have in order to work in your organization as the main CI server. With the JcasC plugin, you can store such configuration in human-readable declarative YAML files.

Original Link

Tips on Jenkins — a Decade Later — for Continuous Delivery

My first encounter with Jenkins (actually, Hudson — Jenkins was forked from Hudson many years ago) was almost a decade ago when I had to use it for Continuous Integration (CI). A few days ago, I had to work with it again. This time, it was for the task of Continuous Delivery (CD). Jenkins has undergone many changes over the years and seems to have evolved into quite a powerful tool with a lot of plugins.

My task was quite simple. I had to download a package in ZIP format from a repository and install it on a specific node. For this, I did the following:
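
The author's actual steps are not reproduced here, but a generic sketch of such a job in a declarative pipeline might look like the following; the repository URL, node label, and install path are hypothetical placeholders, not taken from the article:

pipeline {
    agent { label 'target-node' }   // run on the specific node (hypothetical label)
    stages {
        stage('Download package') {
            steps {
                sh 'curl -fSL -o package.zip https://repo.example.com/releases/package.zip'
            }
        }
        stage('Install') {
            steps {
                sh 'unzip -o package.zip -d /opt/my-app'
            }
        }
    }
}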

Original Link

Why DevOps Engineers Suck at the Mobile Ecosystem

The latest TechRepublic post revealed that, as per the Stack Overflow Developer Survey 2018, DevOps Specialist is the highest paid programming job in the IT industry. There is no shortcut to becoming a DevOps Specialist; it comes from hard work, continuous learning, and years of experience. As the DevOps engineer role requires knowledge of a huge range of tools and technologies, it also requires soft skills to deal with different teams. The typical DevOps Engineer’s toolbox includes Jenkins, Docker, Ansible, Chef, Puppet, AWS, Azure, Terraform, Kubernetes, and many more. It turns out most DevOps engineers are very good at implementing these tools in organizations to deploy web applications. However, when it comes to mobile applications, this toolkit doesn’t always work, especially for Apple apps. In this post, we will see what skills traditional DevOps engineers lack to become full stack DevOps engineers and why mobile DevOps is becoming key.

How Mobile DevOps Is Different

Managing infrastructure for web and mobile are completely separate activities. There are some unique challenges when it comes to managing mobile infrastructure. These challenges are platform- and operating system-specific. Most of the web currently runs on Linux; building iOS apps, however, requires infrastructure to manage macOS servers. We will look at the major differences between web and mobile DevOps.

Original Link

Jenkins Configuration as Code: Migration

This blog post is the last article in a six-part series on Jenkins configuration as code.

Using Jenkins configuration as code, one can manage the Jenkins master configuration with simple, declarative YAML files and manage them as code. Looks promising, so how do you start?

Original Link

Jenkins Configuration as Code: Documentation

This blog post is the fifth article in a six-part series on Jenkins configuration as code.

Using Jenkins configuration as code, one can manage the Jenkins master configuration with simple, declarative YAML files and manage them as code. How do you write such YAML files?

Original Link

Jenkins Configuration as Code: Need for Speed!

This blog post is the fourth in the Jenkins configuration as code series.

Using Jenkins configuration as code, one can manage the Jenkins master configuration with simple, declarative YAML files, and manage them as code. But one can argue this was already feasible by managing Jenkins' XML configuration files as code and storing them in a Git repository.

Original Link

What to Expect at DevOps World | Jenkins World 2018

DevOps World | Jenkins World 2018 is around the corner on September 16-19 in San Francisco. In addition to being the largest gathering of Jenkins users and DevOps practitioners, there are a lot of reasons that make DevOps World | Jenkins World the essential DevOps event to attend each year. With keynotes and speaker sessions from industry-leading DevOps experts, an expo where you can check out the latest and greatest from our friends in the software industry, and the themed after-party, there are many reasons people travel from all over the country to attend DevOps World | Jenkins World.

Whether you are already preparing for the show or still considering attending and may need some extra convincing, here’s what you can expect this year:

Original Link

Jenkins Configuration as Code: Plugins

Using configuration as code, one can manage Jenkins master’s configuration with simple, declarative YAML files, and manage them as code. But managing Jenkins is not only about jenkins-core, it’s also about the many plugins one needs to select and install to provide useful features.

Supporting Plugins

A recurring question on the Jenkins Configuration as Code (JCasC) functionality is how to support plugins.

Original Link

Jenkins Configuration as Code: Sensitive Data

This blog post is the second of a six-part Configuration as Code series.

Using Configuration as Code, one can manage the configuration of a Jenkins master with simple, declarative YAML files, and manage them as code in SCM. But this doesn’t mean you have to commit passwords and other sensitive information in Git.

Original Link

Jenkins Configuration as Code: Look Ma, No Hands

Jenkins is highly flexible and is today the de facto standard for implementing continuous integration and continuous delivery (CI/CD). There is an active community that maintains plugins for almost any combination of tools and use cases. But flexibility has a cost: in addition to Jenkins core, many plugins require some system-level configuration to be set so they can do their job.

In some circumstances, a Jenkins administrator is a full-time position. One person is responsible for maintaining the infrastructure as well as pampering a huge Jenkins master with hundreds of installed plugins and thousands of hosted jobs. Keeping plugin versions up to date is a challenge, and failover is a nightmare.

Original Link

Q-and-A With CloudBees CEO Sacha Labourey and Jenkins Community Evangelist Tyler Croy Ahead of DevOps World | Jenkins World (Part 2)

Just recently, I shot over some questions to Sacha Labourey, CEO of CloudBees, ahead of the DevOps World | Jenkins World conferences to hear more about what we can expect to see at the shows. As I couldn’t stop myself from writing questions, this interview spilled into a two-parter (part 1 is here), with Sacha joined by Tyler Croy, Jenkins Community Evangelist, to speak more specifically to the product questions I had about Jenkins.

Hope you enjoy the interview!

Original Link

Q-and-A With CloudBees CEO Sacha Labourey on the Upcoming DevOps World | Jenkins World (Part 1)

I recently had the opportunity to send a few questions over to Sacha Labourey, CEO of CloudBees, ahead of the upcoming DevOps World | Jenkins World conferences to learn a little bit more about what’s been going on with CloudBees and what we can expect to see at the show.

Hope you enjoy the interview!

Original Link

Continuous Deployment Through Jenkins

Releasing software isn’t an art, but it is an engineering discipline. Continuous deployment can be thought of as an extension to continuous integration, which lets us catch defects earlier.

In this article on continuous deployment, we will go through the following topics:

Original Link

Using the Configuration-as-Code Plugin to Declaratively Configure a Jenkins Instance

Introduction

Note: If you already know what configuration management is, you can skip the introduction.

For many years now, there has been a steady trend of configuring everything we can using code. This helps avoid snowflake systems and makes it orders of magnitude easier to reprovision an environment from scratch if needed.

Developers have been configuring their Jenkins instances following this principle for a long time, generally using a neat combination of init.groovy.d scripts, file manipulations, Docker images, and many other technologies.

Original Link

Learn How to Setup a CI/CD Pipeline From Scratch

A CI/CD Pipeline implementation, or Continuous Integration/Continuous Deployment, is the backbone of the modern DevOps environment. It bridges the gap between development and operations teams by automating the building, testing, and deployment of applications. In this blog, we will learn what a CI/CD pipeline is and how it works.

Before moving onto the CI/CD pipeline, let’s start by understanding DevOps.

Original Link

Building a Continuous Delivery Pipeline Using Jenkins

Continuous Delivery is a process where code changes are automatically built, tested, and prepared for release to production. I hope you have enjoyed my previous blogs on Jenkins. Here, I will talk about the following topics:

  • What is Continuous Delivery?
  • Types of Software Testing
  • Difference Between Continuous Integration, Delivery, and Deployment
  • What is the need for Continuous Delivery?
  • Hands-on Using Jenkins and Tomcat

Let us quickly understand how Continuous Delivery works.
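
The hands-on Jenkins and Tomcat part usually boils down to building a WAR and pushing it to Tomcat's manager API. As a rough sketch only (not the article's exact steps), with a placeholder credentials ID, host, and context path:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'   // produces target/app.war
            }
        }
        stage('Deploy to Tomcat') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'tomcat-manager',
                        usernameVariable: 'TC_USER', passwordVariable: 'TC_PASS')]) {
                    // Upload the WAR through Tomcat's text manager API
                    sh '''curl -u "$TC_USER:$TC_PASS" -T target/app.war \
  "http://tomcat-host:8080/manager/text/deploy?path=/app&update=true"'''
                }
            }
        }
    }
}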

Original Link

Hot Shot 008 – Jenkins Pipelines [Podcast]

These are my verbatim notes for the PEAT UK podcast:

Hello there once again, and welcome to another hot shot. My name is Peter Pilgrim, platform engineer and DevOps specialist, and Java Champion.

Original Link

How to Install Jenkins on the Apache Tomcat Server

Jenkins is a powerful open source tool that enables you to automate tests and deployment. Apache Tomcat is a powerful Java servlet container for running web applications. If you are running your apps in Tomcat, or wish to do so, you might also want to run Jenkins in it. This blog post will explain how to do it.

If you are looking to install Jenkins in other ways, read how to install Jenkins on Windows, Ubuntu and with a WAR file.

Original Link

The CI/CD Infrastructure: My Recommended Tools

In my last article, I detailed CI/CD best practices for improving your code quality. Now, I want to explain how to set up a CI/CD pipeline: choosing tools, installation, and execution. In this blog post, I will recommend tools for the pipeline. Next time, I will cover installation and execution. Let’s get started.

CI/CD Infrastructure Tools

When setting up a CI/CD pipeline, the first thing we need to do is to create our infrastructure. This blog post will explain which types of tools you need to install and the tools I recommend for each type. Let’s get started.

Original Link

How to Use Basic Jenkins Pipelines

Building a Jenkins Pipeline

With the introduction of the Pipeline, Jenkins added an embedded Groovy engine, making Groovy the scripting language in the Pipeline’s DSL.

Here are the steps you need to take to set up a Jenkins Pipeline. First, you have to install the "Pipeline" plugin.
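
Once the plugin is installed, a small scripted pipeline (the Groovy DSL mentioned above) is enough to get started; the repository URL and build commands below are only examples:

node {
    stage('Checkout') {
        git url: 'https://github.com/example/demo.git', branch: 'master'
    }
    stage('Build') {
        sh 'mvn -B clean package'
    }
    stage('Archive') {
        // keep the build output so later stages or jobs can use it
        archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
    }
}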

Original Link

How to Build True Pipelines With Jenkins and Maven

The essence of creating a pipeline is breaking up a single build process into smaller steps, each having its own responsibility. In this way, faster and more specific feedback can be returned. Let’s define a true pipeline as a pipeline that is strictly associated with a single revision within a version control system. This makes sense: ideally, we want the build server to provide full and accurate feedback for every single revision.

As new revisions can be committed at any time, it is natural that multiple pipelines actually get executed next to each other. If needed, it is even possible to allow concurrent executions of the same build step for different pipelines. However, some measures need to be taken in order to guarantee that all steps executed within one pipeline are actually based on the same revision.
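
The original article describes this with chained Maven jobs; in a modern Jenkinsfile, one way to pin every step to a single revision is to resolve the commit once and reuse it, as in this illustrative sketch (it assumes a Pipeline-from-SCM job and a recent Jenkins where the checkout step returns the resolved SCM variables):

node {
    def revision
    stage('Checkout') {
        def scmVars = checkout scm          // resolve the revision exactly once
        revision = scmVars.GIT_COMMIT
        echo "Pipeline bound to revision ${revision}"
    }
    stage('Build') {
        sh "git checkout ${revision} && mvn -B clean install"
    }
    stage('Integration Tests') {
        sh "git checkout ${revision} && mvn -B verify"
    }
}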

Original Link

How to Install Jenkins With a WAR File

Open source Jenkins is a popular tool for test automation and for shifting left through continuous integration and continuous deployment. Following two blogs on how to install Jenkins on Windows (click here) and on Ubuntu (click here), a third way is to install the WAR (Web application ARchive) file version of Jenkins. This option can be used on any platform or OS that supports Java.

First, you will need to install JDK. Jenkins supports Java 8. If you don’t know how to install Java, look at this blog. Now that Java is installed, we can proceed with Jenkins.

Installing Jenkins

1. Click here to download the latest Jenkins WAR file.

2. Copy the jenkins.war file to the folder you want. I created a folder named Jenkins in the path C:\Program Files (x86) and copied the jenkins.war file to it.

3. Open the command prompt window and browse to the directory where the jenkins.war file is present, through the command cd C:\Program Files (x86)\Jenkins.

4. Run the command java -jar jenkins.war.

5. Wait until the process is complete.

6. Take a look at the line “INFO: Jenkins is fully up and running” that indicates that Jenkins is running. Write down the random password that is auto-generated. We will use this password for the initial login later.

7. Paste the URL http://localhost:8080 in your browser to browse to the Jenkins page.

8. To unlock Jenkins, paste the random password that you wrote down in the previous step or copy it from the user account folder, from the file C:\Users\<your_user_account_folder_name>\.jenkins\secrets\initialAdminPassword. Paste it in the Administrator password field and click on the “Continue” button.

9. Install the suggested plugins or customize plugins installation according to your needs. We will install the default suggested plugins.

10. Wait for the plugins to complete the installation.

11. Create an admin user and click “Save and Continue.”

12. Click “Save and Finish” to complete the process.

13. Finally! Click “Start using Jenkins” to start Jenkins.

14. Here is the default Jenkins page. You can create your first job!

Great! Everything is ready! Now, you can start creating your workflows. To learn how to use Jenkins, check out these resources:

Original Link

How to Install Jenkins on Windows

Jenkins is one of the most popular tools for continuous integration and continuous delivery on any platform. A Java application, Jenkins has many plugins for automating almost everything at the infrastructure level. The use of Jenkins has increased rapidly due to the rich set of functionality it provides in the form of plugins. In this tutorial, we will show a step-by-step guide on how to install Jenkins on a Windows platform.

Let’s get started.

First, you need to install the JDK. Jenkins currently only supports JDK 8. Once Java is installed, you can install Jenkins. Click here to download the latest Jenkins package for Windows (currently it is version 2.130). Unzip the file to a folder and click on the Jenkins exe file.

Click “Next” to start the installation.

Click the “Change…” button if you want to install Jenkins in another folder. In this example I will keep the default option and click on the “Next” button.

Click the “Install” button to start the installation process.

The installation is processing.

When done, click the “Finish” button to complete the installation process.

You will automatically be redirected to a local Jenkins page, or you can paste the URL http://localhost:8080 in a browser.

To unlock Jenkins, copy the password from the file at C:\Program Files (x86)\Jenkins\secrets\initialAdminPassword and paste it in the Administrator password field. Then, click the “Continue” button.

You can install either the suggested plugins or selected plugins you choose. To keep it simple, we will install the suggested plugins.

Wait until the plugins are completely installed.

The next thing we should do is create an admin user for Jenkins. Put in your details and click “Save and Continue.”

Click “Save and Finish” to complete the Jenkins installation.

Now, click “Start using Jenkins” to start Jenkins.

Finally, here is the default Jenkins page.

You can now start creating your continuous integration pipeline!

Original Link

Continuous Integration With Jenkins, Artifactory, and Spring Cloud Contract

Consumer Driven Contract (CDC) testing is a method that allows you to verify integration between applications within your system. The number of such interactions may be really large, especially if you maintain a microservices-based architecture. Assuming that every microservice is developed by a different team, or sometimes even a different vendor, it is important to automate the whole testing process. As usual, we can use the Jenkins server for running contract tests within our Continuous Integration (CI) process.

The sample scenario is visualized in the picture below. We have one application (person-service) that exposes an API leveraged by three different applications. Each application is implemented by a different development team. Consequently, every application is stored in a separate Git repository and has a dedicated pipeline in Jenkins for building, testing, and deploying.

The source code of the sample applications is available on GitHub in the repository sample-spring-cloud-contract-ci. I placed all the sample microservices in a single Git repository only to simplify the demo. We will still treat them as separate microservices, developed and built independently.

In this article, I used Spring Cloud Contract for CDC implementation. It is the first choice solution for JVM applications written in Spring Boot. Contracts can be defined using Groovy or YAML notation. After building on the producer side, Spring Cloud Contract generates a special JAR file with the stubs suffix, which contains all defined contracts and JSON mappings. Such a JAR file can be built on Jenkins and then published on Artifactory. The contract consumer also uses the same Artifactory server, so they can use the latest version of the stubs file. Because every application expects a different response from person-service, we have to define three different contracts between person-service and a target consumer.

Let’s analyze the sample scenario. Assuming we have performed some changes in the API exposed by person-service and modified contracts on the producer side, we would like to publish them on the shared server. First, we need to verify contracts against the producer (1), and in case of success, publish the artifact with stubs to Artifactory (2). All the pipelines defined for applications that use this contract are able to trigger the build on a new version of the JAR file with stubs (3). Then, the newest version of the contract is verified against the consumer (4). If contract testing fails, the pipeline is able to notify the responsible team about this failure.

1. Pre-Requirements

Before implementing and running any sample, we need to prepare our environment. We need to launch Jenkins and Artifactory servers on the local machine. The most suitable way for this is through Docker containers. Here are the commands required to run these containers.

$ docker run --name artifactory -d -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss:latest
$ docker run --name jenkins -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

I don’t know if you are familiar with tools like Artifactory and Jenkins, but after starting them, we need to configure some things. First, you need to initialize Maven repositories for Artifactory. You will be prompted for that just after the first launch. It also automatically adds one remote repository: JCenter Bintray, which is enough for our build. Jenkins also comes with a default set of plugins, which you can install just after the first launch (Install suggested plugins). For this demo, you will also have to install the plugin for integration with Artifactory. If you need more details about Jenkins and Artifactory configuration, you can refer to my older article, How to setup Continuous Delivery environment.

2. Building Contracts

We begin the contract definition with the producer-side application. The producer exposes only one method, GET /persons/{id}, which returns a Person object. Here are the fields contained in the Person class.

public class Person {

    private Integer id;
    private String firstName;
    private String lastName;
    @JsonFormat(pattern = "yyyy-MM-dd")
    private Date birthDate;
    private Gender gender;
    private Contact contact;
    private Address address;
    private String accountNo;
    // ...
}

The following picture illustrates which fields of the Person object are used by consumers. As you see, some of the fields are shared between consumers, while others are required only by a single consuming application.

Now we can take a look at the contract definition between person-service and bank-service.

import org.springframework.cloud.contract.spec.Contract

Contract.make {
    request {
        method 'GET'
        urlPath('/persons/1')
    }
    response {
        status OK()
        body([
            id: 1,
            firstName: 'Piotr',
            lastName: 'Minkowski',
            gender: $(regex('(MALE|FEMALE)')),
            contact: ([
                email: $(regex(email())),
                phoneNo: $(regex('[0-9]{9}$'))
            ])
        ])
        headers {
            contentType(applicationJson())
        }
    }
}

For comparison, here’s the definition of a contract between person-service and letter-service.

import org.springframework.cloud.contract.spec.Contract

Contract.make {
    request {
        method 'GET'
        urlPath('/persons/1')
    }
    response {
        status OK()
        body([
            id: 1,
            firstName: 'Piotr',
            lastName: 'Minkowski',
            address: ([
                city: $(regex(alphaNumeric())),
                country: $(regex(alphaNumeric())),
                postalCode: $(regex('[0-9]{2}-[0-9]{3}')),
                houseNo: $(regex(positiveInt())),
                street: $(regex(nonEmpty()))
            ])
        ])
        headers {
            contentType(applicationJson())
        }
    }
}

3. Implementing Tests on the Producer Side

We have three different contracts assigned to the single endpoint exposed by person-service. We need to publish them in such a way that they are easily available to consumers. In that case, Spring Cloud Contract comes with a handy solution. We may define contracts with a different response for the same request, then choose the appropriate definition on the consumer side. All those contract definitions will be published within the same JAR file. Because we have three consumers, we define three different contracts placed in the directories bank-consumer, contact-consumer, and letter-consumer.

All the contracts will use a single base test class. To achieve this, we need to provide the fully qualified name of that class to the Spring Cloud Contract Verifier plugin in pom.xml.

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <baseClassForTests>pl.piomin.services.person.BasePersonContractTest</baseClassForTests>
    </configuration>
</plugin>

Here’s the full definition of the base class for our contract tests. We will mock the repository bean with the answer matching the rules created inside the contract files.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.DEFINED_PORT)
public abstract class BasePersonContractTest {

    @Autowired
    WebApplicationContext context;
    @MockBean
    PersonRepository repository;

    @Before
    public void setup() {
        RestAssuredMockMvc.webAppContextSetup(this.context);
        PersonBuilder builder = new PersonBuilder()
            .withId(1)
            .withFirstName("Piotr")
            .withLastName("Minkowski")
            .withBirthDate(new Date())
            .withAccountNo("1234567890")
            .withGender(Gender.MALE)
            .withPhoneNo("500070935")
            .withCity("Warsaw")
            .withCountry("Poland")
            .withHouseNo(200)
            .withStreet("Al. Jerozolimskie")
            .withEmail("piotr.minkowski@gmail.com")
            .withPostalCode("02-660");
        when(repository.findById(1)).thenReturn(builder.build());
    }
}

The Spring Cloud Contract Maven plugin visible above is responsible for generating stubs from contract definitions. It is executed during the Maven build when running mvn clean install. The build is performed on Jenkins CI. The Jenkins pipeline is responsible for checking out the project from the remote Git repository, building binaries from the source code, running automated tests, and finally, publishing the JAR file containing stubs to a remote artifact repository (Artifactory). Here’s the Jenkins pipeline created for the contract producer side (person-service).

node {
    withMaven(maven:'M3') {
        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-cloud-contract-ci.git', credentialsId: 'piomin-github', branch: 'master'
        }
        stage('Publish') {
            def server = Artifactory.server 'artifactory'
            def rtMaven = Artifactory.newMavenBuild()
            rtMaven.tool = 'M3'
            rtMaven.resolver server: server, releaseRepo: 'libs-release', snapshotRepo: 'libs-snapshot'
            rtMaven.deployer server: server, releaseRepo: 'libs-release-local', snapshotRepo: 'libs-snapshot-local'
            rtMaven.deployer.artifactDeploymentPatterns.addInclude("*stubs*")
            def buildInfo = rtMaven.run pom: 'person-service/pom.xml', goals: 'clean install'
            rtMaven.deployer.deployArtifacts buildInfo
            server.publishBuildInfo buildInfo
        }
    }
}

We also need to include the dependency spring-cloud-starter-contract-verifier in the producer app to enable the Spring Cloud Contract Verifier.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <scope>test</scope>
</dependency>

4. Implementing Tests on the Consumer Side

To enable Spring Cloud Contract on the consumer side, we need to include the artifact spring-cloud-starter-contract-stub-runner in the project dependencies.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
    <scope>test</scope>
</dependency>

The only thing left is to build a JUnit test, which verifies our contract by calling it through the OpenFeign client. The configuration of that test is provided inside the annotation @AutoConfigureStubRunner. We select the latest version of the person-service stubs artifact by setting + in the version section of the ids parameter. Because we have multiple contracts defined in person-service, we need to choose the right one for the current service by setting the consumer-name parameter. All the contract definitions are downloaded from the Artifactory server, so we set the stubsMode parameter to REMOTE. The address of the Artifactory server has to be set using the repositoryRoot property.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = { "pl.piomin.services:person-service:+:stubs:8090"
}, consumerName = "letter-consumer", stubsPerConsumer = true, stubsMode = StubsMode.REMOTE, repositoryRoot = "http://192.168.99.100:8081/artifactory/libs-snapshot-local")
@DirtiesContext
public class PersonConsumerContractTest {

    @Autowired
    private PersonClient personClient;

    @Test
    public void verifyPerson() {
        Person p = personClient.findPersonById(1);
        Assert.assertNotNull(p);
        Assert.assertEquals(1, p.getId().intValue());
        Assert.assertNotNull(p.getFirstName());
        Assert.assertNotNull(p.getLastName());
        Assert.assertNotNull(p.getAddress());
        Assert.assertNotNull(p.getAddress().getCity());
        Assert.assertNotNull(p.getAddress().getCountry());
        Assert.assertNotNull(p.getAddress().getPostalCode());
        Assert.assertNotNull(p.getAddress().getStreet());
        Assert.assertNotEquals(0, p.getAddress().getHouseNo());
    }
}

Here’s the Feign client implementation responsible for calling the endpoint exposed by person-service.

@FeignClient("person-service")
public interface PersonClient {

    @GetMapping("/persons/{id}")
    Person findPersonById(@PathVariable("id") Integer id);

}

5. Setup of the Continuous Integration Process

We have already defined all the contracts required for our exercise. We have also built a pipeline responsible for building and publishing stubs with contracts on the producer side (person-service). It always publishes the newest version of stubs generated from the source code. Now, our goal is to launch pipelines defined for three consumer applications each time new stubs are published to the Artifactory server by the producer pipeline.

The best solution for this is to trigger a Jenkins build when you deploy an artifact. To achieve this, we use a Jenkins plugin called URLTrigger, which can be configured to watch for changes at a certain URL; in this case, a REST API endpoint exposed by Artifactory for the selected repository path.

After installing URLTrigger, we have to enable it for all consumer pipelines. You can configure it to watch for changes in the JSON returned by the Artifactory File List REST API, accessed via the URI http://192.168.99.100:8081/artifactory/api/storage/[PATH_TO_FOLDER_OR_REPO]/. The file maven-metadata.xml will change every time you deploy a new version of the application to Artifactory. We can monitor the change of the response’s content between the last two polls. The last field that has to be filled is Schedule. If you set it to * * * * *, it will poll for a change every minute.

Our three pipelines for consumer applications are ready. The first run finished with success.

If you have already built the person-service application and published stubs to Artifactory, you will see the following structure in the libs-snapshot-local repository. I have deployed three different versions of the API exposed by person-service. Each time I publish a new version of the contract, all the dependent pipelines are triggered to verify it.

The JAR file with contracts is published under the classifier stubs.

Spring Cloud Contract Stub Runner tries to find the latest version of contracts.

2018-07-04 11:46:53.273 INFO 4185 --- [ main] o.s.c.c.stubrunner.AetherStubDownloader : Desired version is [+] - will try to resolve the latest version
2018-07-04 11:46:54.752 INFO 4185 --- [ main] o.s.c.c.stubrunner.AetherStubDownloader : Resolved version is [1.3-SNAPSHOT]
2018-07-04 11:46:54.823 INFO 4185 --- [ main] o.s.c.c.stubrunner.AetherStubDownloader : Resolved artifact [pl.piomin.services:person-service:jar:stubs:1.3-SNAPSHOT] to /var/jenkins_home/.m2/repository/pl/piomin/services/person-service/1.3-SNAPSHOT/person-service-1.3-SNAPSHOT-stubs.jar

6. Testing a Change in the Contract

We have already prepared contracts and configured our CI environment. Now, let’s perform a change in the API exposed by person-service. We will just change the name of one field: accountNo to accountNumber.

This change requires a change in the contract definition created on the producer side. If you modify the field name there, person-service will build successfully and a new version of the contract will be published to Artifactory. Because all other pipelines listen for changes in the latest version of the JAR files with stubs, the build will be started automatically. The microservices letter-service and contact-service do not use the field accountNo, so their pipelines will not fail. Only the bank-service pipeline reports an error in the contract, as shown in the picture below.

Now, if you were notified about the failed verification of the newest contract version between person-service and bank-service, you can perform the required change on the consumer side.

Original Link

Find the Best Agile Testing Tools for Your Team

Once associated only with small application development projects and co-located teams of 8-10 members, Agile methodology is increasingly being adapted for large-scale enterprise development. Choosing the right Agile testing tool is vitally important for companies just making the transition to Agile since the right tool in the right hands can foster team collaboration, drive down costs, shorten release cycles, and provide real-time visibility into the status and quality of your software projects. It helps, too, if the tool(s) you choose plays well with others, that is, it can seamlessly integrate with other business critical tools in your development environment, such as those you’re using for requirements traceability, defect logging, manual and automatic testing, or metrics and reporting. This kind of flexibility and functionality is especially important in large, enterprise-wide projects that need to scale across different departments, locations, lines of business, platforms, and technologies.

Different Test Case Management Tools for Different Agile Testing Methodologies

Every organization is unique and, before committing to an Agile testing tool, you should choose an Agile software testing methodology that works best within your culture and the skill-sets of your development and testing teams. One of the most popular software testing methodologies, Scrum takes a highly iterative approach that focuses on defining key features and objectives prior to each iteration or sprint. It is designed to reduce risk while providing value quickly. Introducing Scrum is quite a change for a team not used to Agile software development: they have to start working in iterations, build cross-functional teams, appoint a product owner and a Scrum master, as well as introduce regular meetings for iteration planning, daily status updates, and sprint reviews.

Unlike the time-boxed approach that Scrum takes, Kanban is designed around a continuous queue of work, which goes through a number of stages of development until it’s done. Kanban teams often use index cards or sticky notes arranged on walls, such as the Kanban Board shown below, to visualize workflow in a left-to-right manner. When work is completed in a stage, it moves into the next-stage column to its right. When someone needs new work to do, they pull it from a left-hand column.

Are Spreadsheets Limiting Your Testing and Reporting?

Agile teams using either Scrum or Kanban methods often rely on a spreadsheet application like Microsoft Excel as a test case management, documentation and reporting tool. There are significant risks to using spreadsheets to store and process test cases, however, especially on multi-team projects where individual teams often adapt spreadsheets to their specific needs, which can cause problems when it comes to getting uniform reports. If two or more people are working at the same time on a spreadsheet file, there’s also the danger of corrupting the file or creating other security risks.

Achieve Agile Project Management with JIRA

If you’re looking for a tool that makes it easy for different teams to collaborate, JIRA Software is an Agile project management tool that supports any Agile methodology, be it Scrum, Kanban, or your own unique flavor. From Agile dashboards to reports, you can plan, track, and manage all your Agile software development projects from a single tool. JIRA allows users to track anything and everything — issues, bugs, user stories, tasks, deadlines, hours — so you can stay on top of each of your team’s activities. In addition to offering collaboration tools, a chat solution, and development tools, JIRA has a wide range of integrations to help you connect to almost any other tool you’re likely to need.

One of these integrations, Zephyr for JIRA provides a full featured and sophisticated test case management solution inside JIRA. With the same look and feel as JIRA, Zephyr for JIRA lets you do testing right inside JIRA, which makes it easier for Agile teams to track software quality and make better-informed go/no-go decisions about when to ship high-quality software. You can also hook your automation and continuous integration tools to Zephyr for JIRA with the ZAPI add-on (sold separately) to enable access to testing data programmatically via RESTful APIs. With well documented RESTful APIs you can create tests and execution cycles, update execution status, add attachments, and retrieve information about users, projects, releases, tests and execution cycles.

Comparing Automation Testing Tools: An Overview

Test automation is essential in today’s fast-moving software delivery environment. Test automation works by running a large number of tests repeatedly to make sure an application doesn’t break whenever new changes are introduced. For most Agile development teams, these automated tests are usually executed as part of a Continuous Integration (CI) build process, where developers check code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect errors and conflicts as soon as possible. CI tools such as Jenkins, Bamboo, and Selenium are also used to build, test and deploy applications automatically when requirements change in order to speed up the release process.

Jenkins

Jenkins is a CI/CD server that runs tests automatically every time a developer pushes new code into the source repository. Because CI detects bugs early on in development, bugs are typically smaller, less complex, and easier to resolve. Originally created to be a build automation tool for Java applications, Jenkins has since evolved into a multi-faceted platform with over a thousand plug-ins for other software tools. Because of the rich ecosystem of plug-ins, Jenkins can be used to build, deploy, and automate almost any software project, regardless of the computer language, database, or version control system used.

Bamboo

Bamboo is a CI/CD server from Atlassian. Like Jenkins and other CI/CD servers, Bamboo allows developers to automatically build, integrate, test and deploy source code. Bamboo is closely connected with other Atlassian tools such as JIRA for project management and Hipchat for team communication. Unlike Jenkins, which is a free and open source Agile automation tool, Bamboo is commercial software that is integrated (and supported) out of the box with other Atlassian products such as Bitbucket, JIRA, and Confluence.

Selenium

Selenium is a suite of different open-source software tools that enable automated testing of web applications across various browsers/platforms. Most often used to create robust, browser-based regression automation suites and tests, Selenium — like Jenkins — has a rich repository of open source tools that are useful for different kinds of automation problems. With support for programming languages like C#, Java, JavaScript, Python, Ruby, .Net, Perl, PHP, etc., Selenium can be used to write automation scripts that run against most modern web browsers.

Which Test Automation Framework Fits Your Needs?

In addition to CI/CD tools, many Agile teams also rely on test automation frameworks made of function libraries, test data sources, and other reusable modules that can be assembled like building blocks so teams can create automation tests specific to different business needs. So, for example, a team might use a specific test automation framework to automate GUI tests if their software end users expect a fast, rich and easy user interface experience. If the team is developing an app for an Internet of Things (IoT) device that primarily talks to other IoT devices, they would likely use a different test automation framework.

Agile teams can execute one-touch control of test automation from within the Zephyr Platform with Vortex, Zephyr’s advanced add-on that allows you to integrate with a growing suite of automated testing frameworks (including EggPlant, Cucumber, Selenium, UFT, Tricentis, and more) with minimal configuration. Besides being able to control the execution of thousands of automated test cases, Vortex makes it easy to automatically create test cases from test scripts and to apply insights from analytics on both automated and manual testing activities.

The Importance of Exploratory Testing on Agile Projects

Agile projects still need manual testers to engage in exploratory test sessions while the automation test suite runs. In addition to revising and fine-tuning the automated tests, exploratory testers are important on Agile projects since developers and other team members often get used to following a defined process and may stop thinking outside the box. Because of the desire for fast consensus among self-organizing Agile teams (including globally distributed ones), collaboration can devolve into groupthink. Exploratory testing combats this tendency by allowing a team member to play the devil’s advocate role and ask tough, “what if”-type testing questions. Because of the adaptable nature of exploratory testing, it can also be run in parallel with automated testing and doesn’t have to slow deployment down on Agile projects.

Enhance Your Exploratory Testing Techniques with Zephyr’s Vortex and Capture for JIRA

In addition to being able to create and reuse manual tests on Agile projects, Zephyr’s Vortex tool makes it easy to bring in and work with automation information from across your development stack, including from systems external to your organization. Vortex allows users, wherever they are in your organization, to integrate, execute, and report on test automation activities. By providing an intuitive screen that lets users access both manual and automated test cases at the same time, Vortex helps Agile teams better monitor their overall automation effort (that is, the number of manual versus automated tests) from one release to another.

Capture for JIRA helps testers on Agile projects create and record exploratory and collaborative testing sessions, which are useful for planning, executing and tracking manual or exploratory testing. Session-based test management is a type of structured exploratory testing that requires testers to identify test objectives and focus their testing efforts on fulfilling them. This type of exploratory testing is an extremely powerful way of optimizing test coverage without incurring the costs associated with writing and maintaining test cases. Like Zephyr for JIRA, Capture for JIRA has a deep integration with the JIRA platform, allowing users to capture screenshots within browsers, create annotations, and validate application functionality within JIRA.

Zephyr is the go-to testing solution for 18,000 Agile development and testing teams in over 100 countries, processing more than 40 million tests a day. Vortex and Capture for JIRA are two of the latest additions to Zephyr’s suite of advanced Agile and automation tools, which includes different Zephyr for JIRA, Zephyr Teams, and Zephyr Enterprise solutions. The Zephyr platform integrates with a wide range of automation tools, such as Selenium, Cucumber, EggPlant, QTP and more. It can also run on any CI/CD framework/server (such as Jenkins, Hudson, Bamboo), which allows Agile teams to bring automation and manual test results together seamlessly.

Original Link

Optimizing AWS Costs Significantly With a Free Jenkins Extension

The most common problem testing teams face is the unpredictability of the capacity a given project will need, which results in ever-growing costs for capacity that may not be used to its full extent, or not used at all. Depending on the size of the project, these overpayments for downtime vary from hundreds to thousands of dollars.

How to Solve the Problem

Amazon Spot Instances are significantly cheaper than On-Demand Instances (up to 90%). The only limitation users may face is that Amazon guarantees a 6-hour usage period, after which the instance may be interrupted; this depends on the demand for Spot Instances.

The first step is choosing the type of spot instance suitable for the project needs (note that on-demand instances are also available in conjunction with spot instances).

Spot Pricing

For one of our projects, we use m3.medium instances with a single executor. They have 1 vCPU and approximately 4 GB of RAM. They are suitable for the majority of QA/dev jobs. Moreover, this instance type has the biggest discount on the spot market: 90%.

Solution Examples

A scalable pipeline approach integrated with the AWS Spot market (no jobs = no costs), scaling to thousands of instances when needed.

How Does It Help to Cut Costs?

Scalable Jenkins on Demand

Jenkins is scalable on demand using Amazon Spot Fleet instances. The Spot Fleet attempts to launch the number of Spot Instances that are required to meet the target capacity that you specified in the Spot Fleet request.

Actual Numbers of Savings

You can save up to 85% — it depends on the type of instance you are using.

Is there a significant difference in cutting costs when using other types of spot instances? It highly depends on the tasks and the size of the project.

Note: we do not recommend using T2 instances for Jenkins slaves since they have limitations in performance.

Below is a diagram showing the difference in costs:

Find other AWS Tools for reporting and cost optimization at the link.

Original Link

Jenkins Is Showing the CI/CD Way!

JetBrains just released the 2018 edition of their State of Developer Ecosystem survey. It is pretty interesting, as it is based on a broad cross-section of more than 6,000 developers and, as such, is likely to be a very good proxy for the market.

The survey analyzes multiple sectors, from languages to databases. Interestingly enough, continuous integration and continuous delivery are not in the “DevOps” section, but in their “Team Tools” section.

So, what do we learn?

With 62%, Jenkins is the leading continuous integration tool, period! Even if you sum up the percentage of the next four followers (!), you don’t reach the level of Jenkins.

If you then split cloud vs. on-premise usage, the result is the same: Jenkins is the de facto continuous integration tool, well ahead of any competition. In the cloud usage category, adding Codeship (and, for some mysterious reason, a second CodeShip with a different spelling) puts the CloudBees ecosystem above 60%. In on-premise usage, the CloudBees ecosystem is even higher, at 66%.

As companies get serious about continuous integration and continuous delivery, it becomes obvious to all of them that they won’t get an overall increase in velocity and productivity if they solely focus on new applications: they need to build their DevOps muscle across the entire organization. As such, their CI/CD solution must integrate with a LOT of different tools, systems and environments. This is a key area where Jenkins shines, obviously, with more than 1,400 plugins!

And that’s typically where CloudBees comes into the picture. Once you move from a relatively well-contained Jenkins setup (typically an overloaded Jenkins master, set up years ago by a Jenkins aficionado) to a company-wide practice, concepts such as self-service, management at scale, collaboration, governance and security will emerge as key conditions for a successful enterprise-wide adoption of DevOps. From on-premise to the cloud, from self-managed to self-service, from opinionated to fully customized, as the CI/CD Power House, we have the solution that fits your requirements and the enablement services to help you.

Original Link

Local Continuous Delivery Environment With Docker and Jenkins

In this article, I’m going to show you how to set up a continuous delivery environment for building Docker images of our Java applications on your local machine. Our environment will consist of GitLab (optional, otherwise you can use hosted GitHub), Jenkins master, Jenkins JNLP slave with Docker, and a private Docker registry. All those tools will be run locally using their Docker images. Thanks to that, you will be able to easily test it on your laptop and then configure the same environment in production deployed on multiple servers or VMs. Let’s take a look at the architecture of the proposed solution.

1. Running Jenkins Master

We will use the latest Jenkins LTS image. Jenkins Web Dashboard is exposed on port 38080. Slave agents may connect to the master on the default 50000 JNLP (Java Web Start) port.

$ docker run -d --name jenkins -p 38080:8080 -p 50000:50000 jenkins/jenkins:lts

After starting, you have to execute the command docker logs jenkins in order to obtain an initial admin password. Find the following fragment in the logs, copy your generated password and paste it in the Jenkins start page available at http://192.168.99.100:38080.

We have to install some Jenkins plugins to be able to check out the project from the Git repository, build the application from the source code using Maven, and, finally, build and push a Docker image to a private registry. Here’s a list of required plugins:

  • Git Plugin – this plugin allows you to use Git as a build SCM.
  • Maven Integration Plugin – this plugin provides advanced integration for Maven 2/3.
  • Pipeline Plugin – this is a suite of plugins that allows you to create continuous delivery pipelines as a code and run them in Jenkins.
  • Docker Pipeline Plugin – this plugin allows you to build and use Docker containers from pipelines.

2. Building the Jenkins Slave

Pipelines are usually run on a different machine than the machine with the master node. Moreover, we need to have the Docker engine installed on that slave machine to be able to build Docker images. Although there are some ready-made Docker images with Docker-in-Docker and the Jenkins client agent, I have never found an image with JDK, Maven, Git, and Docker installed. These are the most commonly used tools when building images for your microservices, so it is definitely worth preparing such an image for use with Jenkins.

Here’s the Dockerfile with the Jenkins Docker-in-Docker slave with Git, Maven, and OpenJDK installed. I used Docker-in-Docker as a base image (1). We can override some properties when running our container. You will probably have to override the default Jenkins master address (2) and slave secret key (3). The rest of the parameters are optional, but you can even decide to use an external Docker daemon by overriding the DOCKER_HOST environment variable. We also download and install Maven (4) and create a user with special sudo rights for running Docker (5). Finally, we run the entrypoint.sh script, which starts the Docker daemon and Jenkins agent (6).

# (1)
FROM docker:18-dind
MAINTAINER Piotr Minkowski
# (2)
ENV JENKINS_MASTER http://localhost:8080
ENV JENKINS_SLAVE_NAME dind-node
# (3)
ENV JENKINS_SLAVE_SECRET ""
ENV JENKINS_HOME /home/jenkins
ENV JENKINS_REMOTING_VERSION 3.17
ENV DOCKER_HOST tcp://0.0.0.0:2375
RUN apk --update add curl tar git bash openjdk8 sudo
# (4)
ARG MAVEN_VERSION=3.5.2
ARG USER_HOME_DIR="/root"
ARG SHA=707b1f6e390a65bde4af4cdaf2a24d45fc19a6ded00fff02e91626e3e42ceaff
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
  && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
  && echo "${SHA} /tmp/apache-maven.tar.gz" | sha256sum -c - \
  && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
  && rm -f /tmp/apache-maven.tar.gz \
  && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
# (5)
RUN adduser -D -h $JENKINS_HOME -s /bin/sh jenkins jenkins && chmod a+rwx $JENKINS_HOME
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/dockerd" > /etc/sudoers.d/00jenkins && chmod 440 /etc/sudoers.d/00jenkins
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/docker" > /etc/sudoers.d/01jenkins && chmod 440 /etc/sudoers.d/01jenkins
RUN curl --create-dirs -sSLo /usr/share/jenkins/slave.jar http://repo.jenkins-ci.org/public/org/jenkins-ci/main/remoting/$JENKINS_REMOTING_VERSION/remoting-$JENKINS_REMOTING_VERSION.jar && chmod 755 /usr/share/jenkins && chmod 644 /usr/share/jenkins/slave.jar
COPY entrypoint.sh /usr/local/bin/entrypoint
VOLUME $JENKINS_HOME
WORKDIR $JENKINS_HOME
USER jenkins
# (6)
ENTRYPOINT ["/usr/local/bin/entrypoint"]

Here’s the script entrypoint.sh.


#!/bin/sh
set -e
echo "starting dockerd..."
sudo dockerd --host=unix:///var/run/docker.sock --host=$DOCKER_HOST --storage-driver=vfs &
echo "starting jnlp slave..."
exec java -jar /usr/share/jenkins/slave.jar \
-jnlpUrl $JENKINS_URL/computer/$JENKINS_SLAVE_NAME/slave-agent.jnlp \
-secret $JENKINS_SLAVE_SECRET

The source code with the image definition is available on GitHub. You can clone the repository, build the image, and start the container using the following commands.

$ docker build -t piomin/jenkins-slave-dind-jnlp .
$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=5664fe146104b89a1d2c78920fd9c5eebac3bd7344432e0668e366e2d3432d3e -e JENKINS_SLAVE_NAME=dind-node-1 -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

Building it is just an optional step because the image is already available on my Docker Hub account.

3. Enabling Docker-in-Docker Slave

To add a new slave node, you need to navigate to Manage Jenkins -> Manage Nodes -> New Node. Then, define a permanent node with the name parameter filled in. The most suitable name is the default name declared inside the Docker image definition, dind-node. You also have to set the remote root directory, which should be equal to the path defined inside the container for the JENKINS_HOME environment variable. In my case, it is /home/jenkins. The slave node should be launched via Java Web Start (JNLP).

The new node is visible on the list of nodes as disabled. You should click it in order to obtain its secret key.

Finally, you may run your slave container using the following command containing the secret copied from the node’s panel in Jenkins Web Dashboard.

$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=fd14247b44bb9e03e11b7541e34a177bdcfd7b10783fa451d2169c90eb46693d -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

If everything went according to plan, you should see the enabled node dind-node in the node’s list.

4. Setting Up the Docker Private Registry

After deploying the Jenkins master and slave, there is one last required element in the architecture that has to be launched — the private Docker registry. Because we will access it remotely (from the Docker-in-Docker container), we have to configure a secure TLS/SSL connection. To achieve it, we should first generate a TLS certificate and key. We can use the openssl tool for it. We begin by generating a private key.

$ openssl genrsa -des3 -out registry.key 1024

Then, we should generate a certificate request file (CSR) by executing the following command.

$ openssl req -new -key registry.key -out registry.csr

Finally, we can generate a self-signed SSL certificate that is valid for one year using the openssl command as shown below.

$ openssl x509 -req -days 365 -in registry.csr -signkey registry.key -out registry.crt

Don’t forget to remove the passphrase from your private key.

$ openssl rsa -in registry.key -out registry-nopass.key -passin pass:123456

You should copy the generated .key and .crt files to your Docker machine. After that, you may run the Docker registry using the following command.

$ docker run -d -p 5000:5000 --restart=always --name registry -v /home/docker:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt -e REGISTRY_HTTP_TLS_KEY=/certs/registry-nopass.key registry:2

If the registry has been successfully started, you should be able to access it over HTTPS by calling the address https://192.168.99.100:5000/v2/_catalog from your web browser.
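
You can also check it from the command line; a quick sketch (assuming the certificate was generated for the registry's address; otherwise add the -k flag to skip verification):

$ curl --cacert registry.crt https://192.168.99.100:5000/v2/_catalog
# an empty registry responds with: {"repositories":[]}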

5. Creating the Application Dockerfile

The sample application’s source code is available on GitHub in the repository sample-spring-microservices-new. There are some modules with microservices. Each of them has a Dockerfile created in the root directory. Here’s a typical Dockerfile for our microservice built on top of Spring Boot.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /app
EXPOSE 8090
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

6. Building the Pipeline Through Jenkinsfile

This step is the most important phase of our exercise. We will prepare a pipeline definition that combines all the previously discussed tools and solutions. This pipeline definition is part of every sample application's source code. A change in the Jenkinsfile is treated the same as a change in the source code responsible for implementing business logic.

Every pipeline is divided into stages. Every stage defines a subset of tasks performed in the pipeline. We can select the node responsible for executing the pipeline's steps, or leave it empty to allow random selection of a node. Because we have already prepared a dedicated node with Docker, we force the pipeline to be built on that node. In the first stage, called Checkout, we pull the source code from the Git repository (1). Then we build the application binary using the Maven command (2). Once the fat JAR file has been prepared, we may proceed to building the application's Docker image (3). We use methods provided by the Docker Pipeline Plugin. Finally, we push the Docker image with the fat JAR file to a secure private Docker registry (4). Such an image may be accessed by any machine that has Docker installed and has access to our Docker registry. Here's the full code of the Jenkinsfile prepared for the module config-service.

node('dind-node') {
stage('Checkout') { // (1)
git url: 'https://github.com/piomin/sample-spring-microservices-new.git', credentialsId: 'piomin-github', branch: 'master'
}
stage('Build') { // (2)
dir('config-service') {
sh 'mvn clean install'
def pom = readMavenPom file: 'pom.xml'
print pom.version
env.version = pom.version
currentBuild.description = "Release: ${env.version}"
}
}
stage('Image') {
dir('config-service') {
docker.withRegistry('https://192.168.99.100:5000') {
def app = docker.build "piomin/config-service:${env.version}" // (3)
app.push() // (4)
}
}
}
}

7. Creating the Pipeline in Jenkins Web Dashboard

After preparing the application’s source code, Dockerfile, and Jenkinsfile, the only thing left is to create the pipeline using the Jenkins UI. We need to select New Item -> Pipeline and type the name of our first Jenkins pipeline. Then, go to the Configure panel and select Pipeline script from SCM in the Pipeline section. In the following form, we should fill in the address of the Git repository, user credentials, and location of the Jenkinsfile.

8. Configure GitLab WebHook (Optional)

If you run GitLab locally using its Docker image, you will be able to configure a webhook, which triggers a run of your pipeline after pushing changes to the Git repository. To run GitLab using Docker, execute the following command.

$ docker run -d --name gitlab -p 10443:443 -p 10080:80 -p 10022:22 gitlab/gitlab-ce:latest

Before configuring the webhook in the GitLab Dashboard, we need to enable this feature for the Jenkins pipeline. To achieve it, we should first install the GitLab Plugin.

Then, you should come back to the pipeline’s configuration panel and enable the GitLab build trigger. After that, the webhook will be available for our sample pipeline, called config-service-pipeline under the URL http://192.168.99.100:38080/project/config-service-pipeline as shown in the following picture.

Before proceeding to configuration of the webhook in the GitLab Dashboard, you should retrieve your Jenkins user API token. To achieve it, go to the profile panel, select Configure, and click the button Show API Token.

To add a new webhook for your Git repository, you need to go to the section Settings -> Integrations and then fill the URL field with the webhook address copied from the Jenkins pipeline. Then paste the Jenkins user API token into the Secret Token field. Leave the Push events checkbox selected.

9. Running the Pipeline

Now, we may finally run our pipeline. If you use a GitLab Docker container as a Git repository platform, you just have to push changes in the source code. Otherwise, you have to manually start the build of the pipeline. The first build will take a few minutes because Maven has to download dependencies required for building an application. If everything ends with success, you should see the following result on your pipeline dashboard.

You can check out the list of images stored in your private Docker registry by calling the following HTTP API endpoint in your web browser: https://192.168.99.100:5000/v2/_catalog.

Original Link

Don’t Install Development Tools!

…Use Jenkins X, DevPods and Kubernetes!

I wrote previously about how you can be lazy and avoid installing Kubernetes (by letting cloud providers do it). In this installment, I want to tell you how to not install (or at least not to install many) development tools on your workstation. The tools I have installed that I will talk about are Jenkins X binary (jx) and Visual Studio Code.

Around the time of that blog, I was expressing frustration when talking with James Strachan about all the work required to keep my local machine (a Mac) up to date with dev tools, and how tired I was of seeing the beachball on OS X while doing so.

This is a drag on time (and thus money). There are lots of costs involved with development, and I talked about the machine cost for development (how using something like GKE can be much cheaper than buying a new machine) but there is also the cost of a developer’s time. Thankfully, there are ways to apply the same smarts here to save time as well as money. And time is money, or money is time?

Given all the work done in automating the detection and installation of required tools, environments, and libraries that goes on when you run ‘jx import’ in Jenkins X, it makes sense to also make those available for development time, and the concept of “DevPods” was born.

The pod part of the name comes from the Kubernetes concept of pods (but you don’t have to know about Kubernetes or pods to use Jenkins X. There is a lot to Kubernetes but Jenkins X aims to provide a developer experience that doesn’t require you to understand it).

Why not use Jenkins X from code editing all the way to production, before you even commit the code or open a pull request? All the tools are there, all the environments are there, ready to use (as they are used at CI time!).

This rounds out the picture: Jenkins X aims to deal with the whole lifecycle for you: from ideas/issues, change requests, testing, CI/CD, security and compliance verification, rollout and monitoring. So it totally makes sense to include the actual dev time tools.

If you have an existing project, you can create a DevPod by running (with the jx command):
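
> jx create devpod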

This will detect what type of project it is (using build packs) and create a DevPod for you with all the tools pre-installed and ready to go.

Obviously, at this point, you want to be able to make changes to your app and try it out: either run unit tests in the DevPod, or perhaps see a dev version of the app running in your browser (if it is a web app). Web-based code editors have been a holy grail for some time, but they have never quite taken off in the mainstream (despite there being excellent ones out there, most developers prefer to develop on their desktop). Ironically, the current crop of popular editors is based around Electron, which is actually a web technology stack, but it runs locally. Visual Studio Code is my personal favorite at the moment, and it even has a Jenkins X extension (but you don't have to use it):

To get your changes up to the DevPod, in a fresh shell run (and leave it running):

> jx sync

This will watch for any changes locally (say you want to edit files locally on your desktop) and sync them to the DevPod.

Finally, you can have the DevPod automatically deploy an “edit” version of the app on every single change you make in your editor:

> jx create devpod --sync --reuse
> ./watch.sh

The first command will create or reuse an existing DevPod and open a shell to it; the watch command will then pick up any changes and deploy them to your "edit" app. You can keep this open in your browser, make a change, and just refresh it. You don't need to run any dev tools locally, or any manual commands in the DevPod; it takes care of that.

You can have many DevPods running (jx get devpods), and you could stop them at the end of the day (jx delete devpod) and start them again in the morning if you like (or, as I say: keep them running in the hours between coffee and beer). A pod uses resources on your cluster, and as the Jenkins X project fleshes out its support for dev tools (via things like VS Code extensions), you can expect even these few steps to be automated away in the near future, so many of the above instructions will not be needed!
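
For reference, the housekeeping commands mentioned above:

> jx get devpods
> jx delete devpod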

End-to-End Experience

So bringing it all together, let me show a very wide (you may need to zoom out) screenshot of this workflow:

From Left to Right:

  • I have my editor (if you look closely, you can see the Jenkins X extension showing the state of apps, pipelines and the environments it is deployed to).
  • In the middle I have jx sync running, pushing changes up to the cloud from the editor, and also the "watch" script running in the DevPod. This means that for every change I make in my editor, a temporary version of the app (and its dependencies) is deployed.
  • On the right is my browser open to the "edit" version of the app. Jenkins X automatically creates an edit environment for live changes, so if I make a change to my source on the left, the code is synced, built/tested, and updated so I can see the change on the right (but I didn't build anything locally; it all happens in the DevPod on Jenkins X).

On Visual Studio Code: the Jenkins X extension for Visual Studio Code can automate the creation of DevPods and syncing for you. Expect richer support soon for this editor and others.

Explaining Things With Pictures

To give a big picture of how this hangs together:

In my example, GitHub is still involved, but I don’t push any changes back to it until I am happy with the state of my “edit app” and changes. I run the editor on my local workstation and jx takes care of the rest. This gives a tight feedback loop for changes. Of course, you can use any editor you like, and build and test changes locally (there is no requirement to use DevPods to make use of Jenkins X).

Jenkins X comes with some ready-to-go environments: development, staging, and production (you can add more if you like). These are implemented as Kubernetes namespaces so that apps don't end up talking to the wrong place. The development environment is where the dev tools live, and this is also where the DevPods can live! This makes sense, as all the tools are available there, and it saves the hassle of having slightly different versions of tools on your local workstation than what you are using in your pipeline.

DevPods are an interesting idea, and at the very least, a cool name! There will be many more improvements/enhancements in this area, so keep an eye out for them. They are a work in progress, so do check the documentation page for better ways to use them.

Some more reading:

  • Docs on DevPods on jenkins-x.io
  • The Visual Studio Code extension for Jenkins X (what a different world: an open source editor by Microsoft!)
    • Expect extensions for other editors soon as well
    • This extension can watch for pipeline activity on Jenkins X, help out with DevPods and more
  • James Strachan’s great intro to Jenkins X talk at Devoxx-UK also includes a DevPod demo
  • The Jenkins X project page

Original Link

Docker – the Solution for Isolated Environments

Docker is a container-based virtualization platform. Unlike hypervisor virtualization, where you have to create completely new machines to isolate applications from each other and ensure their independence, Docker lets you create containers that contain only your app. Packaged as containers, these applications can be easily deployed on any host running Docker, with each container remaining fully independent!

The Docker platform consists of two components: the Docker daemon running in the background and responsible for managing your containers, and the Docker client that allows you to interact with the daemon through a command-line tool.

What Kind of Problems Could Docker Solve?

Let's start with the most widespread problem. Your team has developed a product for a customer and tested it, and now it is time to deliver the built solution to the client. How do you do it in the shortest time and with minimum resources? Usually, you would prepare a lot of different configuration files and scripts, write the installation instructions, and then spend additional time resolving user errors or environment compatibility issues. Let's suppose you did it once; what if you need to install your product several times? Instead of one customer, you have hundreds or thousands of customers, and for each of them you have to repeat all the installation and configuration steps. Doing this manually would take too much time and would be expensive and error-prone. And it becomes even more difficult if you need to update the product to a newer version.

Another problem is reusability. Suppose you have a department that makes browser games. Assume your department develops several games and they all use the same technology stack. In order to publish a new game, you have to configure a new server with almost the same configuration as all the others.

Like any other problem, there is more than one solution we can apply to our issue:

The installation script

The first approach is to write a script that will install everything you need and run it on all the relevant servers. A script can be a simple "sh" file, or something complex, like an SFX module. The disadvantages of this approach are fragility and instability. If the script has not been written well, sooner or later it will fail at some point. And after the failure, the customer environment will effectively become "corrupt," and it won't be easy to simply "roll back" the actions that the script managed to perform.

Cloud services

The second approach is to use cloud services. You manually install on a virtual server everything you need and then make an image. After that, you clone it as many times as you need. There are some disadvantages, too. Firstly, you are dependent on the cloud service and the client may disagree with the server choice you’ve made. Second, the clouds are slow. The virtual servers provided by clouds are greatly inferior in performance compared with the dedicated servers.

Virtual machines

The third approach uses virtual machines. There are still some drawbacks. Because you are limited by disk space or network traffic, it is not always convenient to download a virtual machine image, which can be quite large. Thus, any change within the VM image requires downloading the whole image again. Also, not all virtual machines support memory or CPU sharing, and those that do require fine-tuning.

What are Docker’s Strengths?

Docker is a groundbreaking solution. Here are the most important differences between a Docker container and a simple server:

Stateless

The container configuration cannot and mustn't be changed after startup. All the preliminary work has to be done at the stage of creating the image (also referred to as build time) or during the start of the container. Different kinds of configuration, ports, public folders, environment variables: all of these must be known at the time the container starts. Of course, Docker allows you to do whatever you want with the container's memory and file system, but changing at runtime something that could have been set at startup is considered a bad approach.

Pure

The container does not know anything about the host system and cannot interfere with other containers, either to get into someone else’s file system, or send a signal to someone else’s process. Of course, Docker allows the containers to communicate with each other, but only using strictly the declared methods. You can also run the containers endowed with some superpowers, for example, access to a real network or to a physical device.

Lazy

When you start a container, it does not copy the file system of the image from which it is created. The container only creates an empty file system on top of the image: a layer. The image itself consists of a list of overlapping layers. That is why a container starts up quickly, typically in less than a second.
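
You can see this layering for yourself with docker history, which lists the read-only layers an image is made of (using the Jenkins image from earlier in this post as an example):

$ docker history jenkins/jenkins:lts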

Declarative

All the container properties are stored in a declarative form. The steps for creating an image are also described as strictly separated steps. Network settings, the contents of the file system, memory capacity, public ports, and so on are set in the Dockerfile or with static flags at startup.

Functionality

A container does only one thing, but does it well. It is assumed that a container lives for the duration of a single process that performs a single function in the application. Because a container has no kernel, no boot partition, no init process and often only a pseudo-root user, it is not a full-fledged operating system. This specialization makes the function realized by the container predictable and scalable.

Strict

By default, a Docker container prohibits everything except access to the network (and even that can be disabled). However, if necessary, any of these restrictions can be relaxed.
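
As an illustration (a command of my own, not from the original article), you can start a container with networking disabled entirely:

$ docker run --rm --network none alpine ping -c 1 8.8.8.8
# fails: with --network none the container has no network access at all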

At the same time, Docker allows us to design a more agile test architecture, by having each container incorporate a brick of the application (database, languages, components). To test a new version of a brick, you need only to replace the corresponding container.

With Docker, it is possible to build an application from containers, each layer having its components isolated from the others. This is the concept of microservices architecture.

Docker and Jenkins

In our company, we started to use the Docker philosophy a few months ago. One of our colleagues did an investigation on how it can help us and what benefits we will get using it. In the beginning, we were using Docker only for some automated tests, but after we've seen how powerful it is we've decided to go even further.

Whether we are talking about automated tests or integration tests, we face the same issues: parallel testing and isolated environments. Each test case needs a clean database in order to be executed. The database loading procedure takes about 1 minute and 30 seconds. If we performed this procedure at the beginning of every automated test, it would take days to properly test the product. We've tried to export/import the database dump, but still, this is not fast enough.

With Docker, we load the database during the image compilation, only once. It means that the image already contains a clean database and every container started from that image provides a clean environment ready for testing.

Another point concerns the resources that are common to all the tests. On the one hand, we want to isolate the System Under Test (SUT); on the other hand, we also want to share some artifacts between SUTs. We want to share all required dependencies. For example, we don't want copies of all the library JARs in each image, because that would waste quite a few GB of disk space, not to mention the performance cost. In order to solve this, we've configured a Maven local repository to be shared between the host system, where Docker is installed, and the containers. Inside the Docker image, we keep only the instructions needed for the project to be tested. All the common libraries are shared between all the containers.
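
A minimal sketch of that idea, assuming a bind mount of a host directory into the container's local Maven repository (the host path and in-container location here are illustrative, not the team's actual settings):

$ docker run -d --name test-slave-1 \
    -v /srv/shared-m2-repo:/home/jenkins/.m2/repository \
    inther/jenkins-slave:jdk8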

Besides the automated tests, we also do some integration tests on our product. These tests also need a separate database, in order to load something and test.

We ensure the continuous integration of our projects by using Jenkins. Until recently, we had one virtual machine serving as the Jenkins master and two virtual machines as slaves. Not too long ago, we decided to remove all the executors from the Jenkins master and let it handle only the Jenkins configuration. Thus, there were only 2 slaves that could actually do something. However, in addition to the automated and integration tests, we have some other jobs in Jenkins that also require an environment for execution. To this end, we configured one slave to run the integration tests and the other one to execute the jobs that don't need a database. So, with this setup (Fig. 1), we could perform only one integration test at a time.

Old Jenkins setup

Fig. 1: Old Jenkins setup

At some point, the test executions were taking too much time because everything was executed sequentially. When all the slaves' executors were active, meaning each of them was compiling some project, at least one job would hang because of a lack of resources. "This is not good. We need a change," we said. And the change came with Docker.

We configured the Docker engine on a Linux machine, built the Dockerfiles for our tests, and started triggering them from Jenkins. We had built an isolated environment to execute the integration tests; however, as a consequence, we were facing some other issues. After each test execution, we needed to do some cleaning and export the logs. We could have done it using some Linux scripts, but that would have caused us a lot of headaches.

We decided to change our approach in a way that would give us more performance and less trouble. We moved Jenkins from Windows to Linux and now we have Jenkins running inside Docker (as shown in figure 2). The Docker community is very innovative; on Docker Hub you can find a lot of Jenkins images. With some additional configuration you can build a serious CI environment.

There we found an image with Jenkins installed inside it. When you start a container using this image you can set a role to this container, to be a Jenkins master or a Jenkins slave.

New Jenkins setup

Fig. 2: New Jenkins setup

One Docker container is used as the Jenkins master; it stores all the configuration (the jobs, the plugins, the build history, etc.), takes care of the workspaces and logs, and does all the necessary cleaning. Another four containers serve as slaves. Did you notice it? I said 4 slaves, compared to 2 as before. Extensibility is another reason why we use Docker. Every slave has its own isolated environment, meaning its own database. In short, it is like having 4 virtual machines, but much better. Now we can execute at least 3 integration tests at the same time.

The Docker image we found in the repository has only Jenkins installed. But in order to ensure the CI of our projects, we need some other tools installed inside the slave containers. For this purpose, we extended the Jenkins image: we installed some Linux-specific tools, Java, ActiveMQ, PostgreSQL, Maven, and Node.js, configured the database roles and schemas, and created some default directories that will be used later by the projects.

All these tools are installed only on the image that is used to start the slaves. For the master, we don’t need them, because it doesn’t perform any jobs.

Here are our images:

REPOSITORY TAG IMAGE ID CREATED SIZE
inther/jenkins-slave jdk8 027aeb7ecf07 13 days ago 1.708 GB
inther/jenkins-slave jdk7 12712cdf36f4 13 days ago 984.6 MB
inther/jenkins-master latest e74c32a835a8 5 weeks ago 443.2 MB
appcontainers/jenkins ubuntu 49069637832b 5 months ago 402.7 MB

There are 2 ways to start the containers using these images. We can start the containers one by one, or we can use Docker Compose, a tool for defining and running multi-container Docker applications. It is very easy to add one more slave container, even using the command line, not to mention Docker Compose. It is amazing. The docker-compose.yml has the following content:

version: '2'
volumes:
  keyvolume:
  datavolume:
  repositoryvolume:
services:
  slave_1:
    image: inther/jenkins-slave:jdk8
    container_name: jenkins_slave_1
    hostname: jenkins-slave
    network_mode: jenkins
    stdin_open: true
    tty: true
    restart: always
    privileged: true
    dns:
    environment:
      - ROLE=slave
      - SSH_PASS=jenkinspassword
    volumes:
      - keyvolume:/var/lib/jenkins_slave_key
      - /home/jenkins/scripts:/home/jenkins/scripts
      - /home/jenkins/logs:/home/jenkins/logs
      - repositoryvolume:/home/repo
    command: /bin/bash
  slave_2:
    …

And now we have everything defined for the first container to run. We have similar configurations for all the other containers; only the name is different. Using the command docker-compose up we can start all the containers, and we can stop them in a similar way: docker-compose down. It is good to know that Docker Compose commands have to be executed from the directory where the docker-compose.yml file is located.

Backup Plan?

Of course, we have taken care of potential trouble in case the master container gets killed or deleted. What happens then? Will we lose our whole CI system? No, we have a backup plan: we store all the configuration inside Docker volumes (Fig. 3). A volume is a special directory within one or more containers designed to provide features for persistent or shared data. So we persist the Jenkins home to the Docker host. In case we delete all the containers related to Jenkins, we don't lose anything; we still have all the necessary data to start another Jenkins instance.
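
A minimal sketch of that setup, assuming the master image keeps its data under the official image's default /var/jenkins_home location (the volume and container names are illustrative):

$ docker volume create jenkins_data
$ docker run -d --name jenkins_master -p 8080:8080 \
    -v jenkins_data:/var/jenkins_home \
    inther/jenkins-master:latest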

Using a Docker volume to persist the data from the container

Fig. 3: Using a Docker volume to persist the data from the container

More details about how to integrate Docker with Jenkins could be found here.

Original Link

How to Use the Jenkins Declarative Pipeline

Jenkins provides you with two ways of developing your pipeline code: Scripted and Declarative. Scripted pipelines, also known as “traditional” pipelines, are based on Groovy as their Domain-specific language. On the other hand, Declarative pipelines provide a simplified and more friendly syntax with specific statements for defining them, without needing to learn Groovy.

Jenkins‘ pipeline plugin version 2.5 introduces support for Declarative pipelines. More information on how to write Scripted pipelines can be found at my previous blog post “How to Use the Jenkins Scripted Pipeline.”

In this post, we will cover all the directives available to develop your Declarative pipeline script, which will provide a clear picture of its functionality.

Declarative Pipelines Syntax

A valid Declarative pipeline must be defined within a "pipeline" block and include the following required sections:

  • Agent
  • Stages
  • Stage
  • Steps

Also, these are the available directives:

  • Environment (Defined at stage or pipeline level)
  • Input (Defined at stage level)
  • Options (Defined at stage or pipeline level)
  • Parallel
  • Parameters
  • Post
  • Script
  • Tools
  • Triggers
  • When

We will now describe each of the listed directives/sections, starting with the required ones.

Agent

Jenkins provides the ability to perform distributed builds by delegating them to “agent” nodes. Doing this allows you to execute several projects with only one instance of the Jenkins server, while the workload is distributed to its agents. Details on how to configure a master/agent mode are out of the scope of this blog. Please refer to Jenkins Distributed builds for more information.

Agents should be labeled so they can be easily identified from each other. For example, nodes can be labeled by their platform (Linux, Windows, etc), by their versions or by their location, among others. The “agent” section configures on which nodes the pipeline can be run. Specifying “agent any” means that Jenkins will run the job on any of the available nodes.

An example of its use could be:

pipeline {
agent any
...
}

Stages

This section allows generation of different stages on your pipeline that will be visualized as different segments when the job is run.

A sample pipeline including the stages sentence is provided:

pipeline {
agent any
stages {
...
}
}

Stage

At least one “stage” section must be defined on the “stages” section. It will contain the work that the pipeline will execute. Stages must be named accordingly since Jenkins will display each of them on its interface, as shown here:

Jenkins graphically splits pipeline execution based on the defined stages and displays their duration and whether it was successful or not.

The pipeline script for the previous image looks like the following:

pipeline {
agent any
stages {
stage ('build') {
...
}
stage ('test: integration-&-quality') {
...
}
stage ('test: functional') {
...
}
stage ('test: load-&-security') {
...
}
stage ('approval') {
...
}
stage ('deploy:prod') {
...
}
}
}

Steps

The last required section is “steps,” which is defined into a “stage.” At least one step must be defined in the “steps” section.

For Linux and MacOS, shell is supported. Here is an example:

steps { sh 'echo "A one line step"'
sh ''' echo "A multiline step"'
cd /tests/results
ls -lrt '''
}

For Windows, bat or PowerShell can be used, as shown:

steps { bat "mvn clean test -Dsuite=SMOKE_TEST -Denvironment=QA"
powershell ".\funcional_tests.ps1"
}

The other, non-required directives are explained in the following paragraphs.

Environment

This directive can be both defined at stage or pipeline level, which will determine the scope of its definitions. When “environment” is used at the “pipeline” level, its definitions will be valid for all of the pipeline steps. If instead it is defined within a “stage,” it will only be valid for the particular stage.

Sample uses of this directive:

At the “pipeline” level:

pipeline {
agent any
environment {
OUTPUT_PATH = './outputs/'
}
stages {
stage ('build') {
...
}
...
}
}

Here, “environment” is used at a “stage” level:

pipeline {
agent any
stages {
stage ('build') {
environment {
OUTPUT_PATH = './outputs/'
}
...
}
...
}
}

Input

The “input” directive is defined at a stage level and provides the functionality to prompt for an input. The stage will be paused until a user manually confirms it.

The following configuration options can be used for this directive:

  • message: This is a required option where the message to be displayed to the user is specified.
  • id: Optional identifier for the input. By default, the “stage” name is used.
  • ok: Optional text for the Ok button.
  • submitter: Optional list of users or external group names who are allowed to submit the input. By default, any user is allowed.
  • submitterParameter: Optional name of an environment variable to set with the submitter name, if present.
  • parameters: Optional list of parameters to be provided by the submitter.

Here is a sample pipeline containing this directive:

pipeline {
agent any
stages {
stage ('build') {
input{
message "Press Ok to continue"
submitter "user1,user2"
parameters {
string(name:'username', defaultValue: 'user', description: 'Username of the user pressing Ok')
}
}
steps { echo "User: ${username} said Ok."
}
}
}
}

Options

Defined at pipeline level, this directive will group the specific options for the whole pipeline. The available options are:

  •  buildDiscarder 
  •  disableConcurrentBuilds 
  •  overrideIndexTriggers 
  •  skipDefaultCheckout 
  •  skipStagesAfterUnstable 
  •  checkoutToSubdirectory 
  •  newContainerPerStage 
  •  timeout 
  •  retry 
  •  timestamps 

Please refer to Jenkins Declarative pipeline options for a full reference on this.

For example, you can configure your pipeline to be retried 3 times before failing by writing:

pipeline {
agent any
options {
retry(3)
}
stages {
...
}
}

Parallel

Jenkins pipeline Stages can have other stages nested inside that will be executed in parallel. This is done by adding the “parallel” directive to your script. An example of how to use it is provided:

stage('run-parallel-branches') {
steps {
parallel(
a: { echo "Tests on Linux" },
b: { echo "Tests on Windows" }
)
}
}

Starting with Declarative Pipeline version 1.2, a new syntax was introduced, making the use of the parallel syntax much more declarative-like.

The previous script rewritten with this new syntax will look like:

pipeline {
agent none
stages {
stage('Run Tests') {
parallel {
stage('Test On Windows') {
agent { label "windows" }
steps {
bat "run-tests.bat"
}
}
stage('Test On Linux') {
agent { label "linux" }
steps {
sh "run-tests.sh"
}
}
}
}
}
}

Any of the previous pipelines will look like this:

Both scripts will run the tests on different nodes since they run specific platform tests. Parallelism can also be used to simultaneously run stages on the same node by the use of multithreading, if your Jenkins server has enough CPU.

Some restrictions apply when using parallel stages:

  • A stage directive can have either a parallel or steps directive but not both.
  • A stage directive inside a parallel one cannot nest another parallel directive, only steps are allowed.
  • Stage directives that have a parallel directive inside cannot have “agent” or “tools” directives defined.

Parameters

This directive allows you to define a list of parameters to be used in the script. Parameters should be provided once the pipeline is triggered. It should be defined at a “pipeline” level and only one directive is allowed for the whole pipeline.

String and boolean are the valid parameter types that can be used.

pipeline {
agent any
parameters {
string(name: 'user', defaultValue: 'John', description: 'A user that triggers the pipeline')
}
stages {
stage('Trigger pipeline') {
steps {
echo "Pipeline triggered by ${params.user}"
}
}
}
}

Post

Post sections can be added at the pipeline level or on each stage block, and the statements included in them are executed once the stage or pipeline completes. Several post-conditions can be used to control whether the post block executes or not:

  • always: Steps are executed regardless of the completion status.
  • changed: Executes only if the completion results in a different status than the previous run.
  • fixed: Executes only if the completion is successful and the previous run failed.
  • regression: Executes only if current execution fails, aborts or is unstable and the previous run was successful.
  • aborted: Steps are executed only if the pipeline or stage is aborted.
  • failure: Steps are executed only if the pipeline or stage fails.
  • success: Steps are executed only if the pipeline or stage succeeds.
  • unstable: Steps are executed only if the pipeline or stage is unstable.

Since sentences included in a pipeline post block will be run at the end of the script, cleanup tasks or notifications, among others, can be performed here.

pipeline {
agent any
stages {
stage('Some steps') {
steps {
...
}
}
}
post {
always {
echo "Pipeline finished"
bat "./performCleanUp.bat"
}
}
}

Script

This step is used to add Scripted Pipeline sentences into a Declarative one, thus providing even more functionality. This step must be included at “stage” level.

Blocks of script code are often reused across different projects. Such blocks allow you to extend Jenkins functionality and can be implemented as shared libraries. More information on this can be found at Jenkins shared libraries. Also, shared libraries can be imported and used inside the "script" block, thus extending pipeline functionality.

Next we will provide sample pipelines. The first one will only have a block with a piece of Scripted pipeline text, while the second one will show how to import and use shared libraries:

pipeline {
agent any
stages {
stage('Sample') {
steps {
echo "Scripted block"
script {
}
}
}
}
}

Please refer to our post about Scripted pipelines at How to Use the Jenkins Scripted Pipeline for more information on this topic.

Tools

The "tools" directive can be added either at the pipeline level or at the stage level. It allows you to specify which Maven, JDK, or Gradle version to use in your script. Any of these tools (the three supported at the time of writing) must be configured in the "Global Tool Configuration" Jenkins menu.

Also, Jenkins will attempt to install the listed tool (if it is not installed yet). By using this directive you can make sure a specific version required for your project is installed.

pipeline {
agent any
tools {
maven 'apache-maven-3.0.1'
}
stages {
...
}
}

Triggers

Triggers allow Jenkins to automatically trigger pipelines by using any of the following:

  • cron: Using cron syntax, it allows you to define when the pipeline will be re-triggered.
  • pollSCM: Using cron syntax, it allows you to define when Jenkins will check for new source repository updates. The pipeline will be re-triggered if changes are detected. (Available starting with Jenkins 2.22.)
  • upstream: Takes as input a list of Jenkins jobs and a threshold. The pipeline will be triggered when any of the jobs on the list finishes with the threshold condition.

Sample pipelines with the available triggers are shown next:

pipeline {
agent any
triggers {
//Execute weekdays every four hours starting at minute 0
cron('0 */4 * * 1-5')
}
stages {
...
}
}

pipeline {
agent any
triggers {
//Query the repository weekdays every four hours starting at minute 0
pollSCM('0 */4 * * 1-5')
}
stages {
...
}
}

pipeline {
agent any
triggers {
//Execute when either job1 or job2 is successful
upstream(upstreamProjects: 'job1, job2', threshold: hudson.model.Result.SUCCESS)
}
stages {
...
}
}

When

Pipeline steps can be executed depending on the conditions defined in a "when" directive. If the conditions match, the steps defined in the corresponding stage will be run. It should be defined at the stage level.

For a full list of the conditions and their explanations, refer to the Jenkins declarative pipeline "when" directive.

For example, pipelines allow you to perform tasks on projects with more than one branch. This is known as multibranched pipelines, where specific actions can be taken depending on the branch name like “master”, “feature*”, “development”, among others. Here is a sample pipeline that will run the steps for the master branch:

pipeline {
agent any
stages {
stage('Deploy stage') {
when {
branch 'master'
}
steps {
echo 'Deploy master to stage'
...
}
}
}
}

2 Final Jenkins Declarative Pipeline Tips

Declarative pipeline syntax errors are reported right at the beginning of the run. This is a nice feature, since you won't waste time waiting for a step to fail before realizing there is a typo or misspelling.
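
You can get this feedback even before committing by using the command-line linter exposed over HTTP by the Pipeline plugin; a hedged sketch (the URL and credentials are placeholders, and authentication/CSRF-crumb details depend on your Jenkins setup):

$ curl -X POST -u user:api-token -F "jenkinsfile=<Jenkinsfile" http://your-jenkins/pipeline-model-converter/validate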

As already mentioned, pipelines can be written in either declarative or scripted form. Indeed, the declarative way is built on top of the scripted way, which makes it easy to extend, as explained, by adding script steps.

Jenkins pipelines are widely used in CI/CD environments. Using either declarative or scripted pipelines has several advantages. In this post we presented all the syntax elements needed to write your declarative script, along with samples. As we already stated, the declarative way offers a much friendlier syntax with no Groovy knowledge required.

Running BlazeMeter in Your Jenkins Pipeline

Your performance tests should be part of your Jenkins Pipeline, to ensure that any code change doesn’t degrade performance. Add BlazeMeter to Jenkins with the BlazeMeter Jenkins Plugin, run your tests and analyze with BlazeMeter’s insightful reports.

Original Link

Deploy Your App in One Click

The apex of any software project is deploying it to the client. This step makes the effort of developing the software worthwhile. But a manual deploy can drive us crazy!

Most deployment processes have a long roadmap to get everything up and running. Don't even think about forgetting a step; it would cost precious time to find the error and fix the problem. If you don't have a roadmap to deploy your software, creating one is probably the first move to make. It's the step-by-step process your software needs to complete the deploy. Even though the roadmap is tedious to follow manually, anyone on the team could follow it, not only the deployment hero.

Good! Having a roadmap means that you know exactly what to do, right? So why not automate it and avoid missing any part of the process? Why not configure an automation server to do it all in one click? That would be awesome, wouldn't it?

This article won't solve all your problems, like transforming your deployment into a Netflix-style deploy that pushes software to production multiple times a day. Nor does it solve database migration problems. The more complex the process, the harder it is to start. So let's create an easy plan to make this first step completely achievable.

The Automated Deploy Plan

There is no need to use physical servers. It is possible to start this process using your development environment, without worrying much about messing up any server's configuration. So here is the plan:

  • Create a virtual machine using Vagrant;
  • Configure Jenkins, the Automation Server;
  • Configure Jenkins to easily access the virtual machine;
  • Deploy a sample App.

Your Virtual Server

To speed up the process of creating a virtual machine (VM), let’s use Vagrant with Virtual Box. You can learn more about Vagrant subscribing to the 3 email quick course. It’s free. Subscribe here.

The main point is to keep this first version of your VM very simple. Your end goal is to eventually turn this machine into an environment as close as possible to production, or into a test environment.

In an empty folder, create a file named Vagrantfile and paste in the content below.

Vagrant.configure("2") do |config| config.vm.box = "hashicorp/precise64" config.vm.network "private_network", ip: "192.168.33.10" config.vm.synced_folder ".", "/vagrant", type: "rsync" config.ssh.insert_key = false config.vm.provision "shell", inline: <<-SHELL apt-get update -y apt-get install default-jdk -y apt-get install tomcat7 -y SHELL
end

This example will use a default Ubuntu Server from Hashicorp, the creator of Vagrant (search more boxes here).

A private network will be created with the IP 192.168.33.10. This is the address that emulates a real server. The folder synchronization is configured to copy files from host to guest (your VM) only. This avoids any mess in your local environment.

The ssh.insert_key configuration makes this process as easy as possible. It disables Vagrant's random SSH key generation, keeping the default insecure key. Yes, it is insecure! This is a development test; security isn't the priority, at least for now. These keys are available from the Vagrant GitHub project.

The provisioning is also very simple, updating the operating system and installing Java and Tomcat from Ubuntu package manager.

To create and start the VM, execute the command vagrant up from the folder containing the Vagrantfile. In a few minutes, the VM will be ready to use. The best part is that you didn't need to install anything manually. Great!

After finishing the provisioning, check your Tomcat installation using the IP configured in the Vagrantfile: http://192.168.33.10:8080.
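
Or check it from the terminal (a simple HEAD request against the same address):

$ curl -I http://192.168.33.10:8080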

Here are some useful commands to use with Vagrant:

  • vagrant ssh to access the VM by SSH;
  • vagrant halt to shut down the VM;
  • vagrant status to check the VM status;
  • vagrant destroy to get rid of your VM.

Jenkins Configuration

A Jenkins’ installation with all suggested plugins will only need one extra plugin, Publish Over SSH. If you need some help, check this guide on How to Install a Jenkins’ Plugin in 5 Minutes.

If you don't have a Jenkins installation, check this complete guide about installing it, understanding the suggested plugins, and creating your first job. That article will also help you configure the Maven installation if you haven't done so yet.

Authorize Jenkins to Access Your Server

After restarting Jenkins due to the Publish Over SSH plugin installation, it's time to configure access to the server. In this didactic example using Vagrant, it will be very simple to do without needing to create an SSH key manually.

As said before, the Vagrant GitHub project contains a private insecure key that perfectly matches the public key inside the VM. And if you are as curious as me, you can check the public key inside the VM using the command below. The result should be the content of Vagrant's public insecure key.

vagrant ssh -c 'cat ~/.ssh/authorized_keys'

Now, from Jenkin’s home page, access the menu Manage Jenkins and then Configure System. After scrolling down, a new section will be available, the Publish over SSH section.

Publish over SSH Section

The first step is to copy the private insecure key from the Vagrant project and paste it into the Key field. This key will be used by Jenkins as the default to access any configured server. In the real world, you should create a specific key for your Jenkins server with a strong passphrase. The extra step would then be copying the public key to all desired servers. In this example, Vagrant has already done it for us.

Now click the button Add to start configuring the server connection.

  • Name: give a nickname to the server like local-vagrant;
  • Hostname: the server IP, in this case, located on Vagrantfile, 192.168.33.10;
  • Username: the user that will be used in the process, vagrant;
  • Remote Directory: the base folder configuration, that must exist, used after connecting to the server. This example will use the Vagrant base folder, /vagrant.

In the end, your configuration should be like the image below. Before saving the configuration, click on the button Test Configuration. If everything is fine, it will show a Success message.

Publish over SSH Configuration

Don't forget to have your Vagrant VM up and running; otherwise the test will take a little longer to finish due to the timeout configuration and will show a connection problem, as in the image below.

Publish over SSH Connection Problem

There are other errors related to ssh key, username or even folder problems. All of them will be reported in the same way as the connection problem above.

Deploy Your App

From the Jenkins’ home screen, click on New Item to create the job. Set a name like maven-web-deploy, select the Freestyle Project option and click OK.

Now you are at job configuration’s page. On the section Source Code Management, select Git and fill the Repository URL field with the maven-web sample project, https://github.com/cyborgdeveloper/maven-web.git.

Next, at the Build section, click on Add build step and select the option Invoke top-level Maven targets. Select the preconfigured Maven version (you already did it before on the section about Jenkins Configuration) and set the Goals field as clean package.

At this point, you have reached exactly the same configuration fully explained in the third part of Jenkins’ series. So, if you want deeper details, check out the article here.

Now that the project is set, let's configure the deploy. Click the Add build step button again, but this time select the option Send files or execute commands over SSH. Select the SSH Server previously configured, local-vagrant. If you want to watch everything that happens during the SSH connection, click the Advanced button below and check the option Verbose output in console.

Your configuration should be like the image below.

Publish over SSH Deploy Configuration

Ready to run the job? Save the configurations and click the button Build Now. If everything works fine you should see a Hello World at http://192.168.33.10:8080/maven-web/.

First automated deploy Hello World

What Next?

Congratulations! You have just finished your first automated deploy! It was an awesome first step to start automating your own deploys.

What is the feeling after deploying an app by a click of a button? I would bet that you are feeling great and happy!

So let's take the next step. Start applying the same process to your Java project (even in other programming languages, the process is pretty close to this one). Adjust some parts if you don't have an automatic Maven build, if you have an SVN project instead of a Git project, or to change the servlet container to the one that fits your project best. Write down in the comments section the challenges you face applying this first step.

Let’s automate!

PS.: The Cyborg Developer initiative offers a FREE email course about software automation in 3 steps. You can subscribe here. You will also receive tips and tricks about automation directly to your mailbox (relax, I hate spam as much as you). Follow me on twitter @rcmoutinho.

Original Link

How to Change a Job Configuration Without Worrying

Starting with a new technology or solving a problem using a different strategy can sometimes be a bit scary. Tracking the history and checking the progress can significantly increase your confidence in making changes, even more so when it happens as a natural, automatic process.

Understand how a simple plugin could help you a lot on this journey, tracking the jobs and the system configurations. It is useful when you are starting with Jenkins but also if you are already automating your projects.

Job Configuration History

The Job Configuration History plugin is very simple but extremely useful. Its main objective is to save every change made to a job or to the system configuration. Considering that Jenkins stores its information in files, the plugin just keeps the previous states of the configuration.

There are two main benefits of using this plugin. The first one is tracking the history of every change that happened. This can save you lots and lots of time when a job suddenly stops working: checking the history lets you quickly find the changes that broke the job. The second, if used well, can greatly improve team learning. Each configuration change records the user who made it. So if you want to learn from the user who made the change, you can. And you can also help or teach whoever made the mistake that caused the job error, showing exactly where the problem was.

Installing

If you already read the article How to Install a Jenkins' Plugin in 5 Minutes, you may notice that I fooled you. Sorry, but it will take you less than a minute to install this plugin and get it ready to use!

Note that you only need to type part of the name, select the item, and click the Install without restart button. Jenkins will redirect you to an installation page showing the progress. It should be a very quick operation.

Installing Job History Plugin

Hands-on

To ease this process, let’s consider the same operations done at Getting Started with Jenkins – The Ultimate Guide. They are simple changes related to Maven that will show system and jobs configuration changes.

After the plugin’s installation, a new menu, Job Config History, will be available on the Jenkins home page. This page shows absolutely everything that has changed on your entire Jenkins instance. It is useful to view created and deleted (maybe accidentally) jobs, but mainly to check system configuration changes.

Job Config History - System

This new menu also appears in each available job, so now every change can be tracked throughout your automation.

Job Config History - Job

This plugin provides extra options, like pages showing the differences between versions and a process to restore a previous configuration. Notice how simple it is to spot the only change made: the Maven version is now specified.

Job Config Diffs

This plugin has extra configurations, like the maximum number of history entries to keep, the maximum number of days to keep history entries, and so on. You can change those configurations at Manage Jenkins > Configure System, in the Job Config History section. These changes will also be tracked.

What’s Next

Installing this plugin will help you understand why a new deployment is not working compared to the previous one, track performance based on job changes, or even check the messy (or great) job done by a teammate.

Start improving your automation, and comment below on how useful this plugin was on your journey.

Let’s automate!

P.S.: The Cyborg Developer initiative offers a FREE email course about software automation in 3 steps. You can subscribe here. You will also receive tips and tricks about automation directly to your mailbox (relax, I hate spam as much as you do). Follow me on Twitter @rcmoutinho.

Original Link

Bet You Didn’t Know – Discovering DevOps Secrets

This month, we’re highlighting some DevOps articles that aim to teach you something new – learn how to speed up your code merges, become a Git master, and breeze through Jenkins. We guarantee you’ll learn something you didn’t know before in this installment of This Month in DevOps!


5 Trending DevOps Articles on DZone

  1. 8 Useful But Not Well-Known Git Concepts, by Denny Zhang. These lesser-known Git tricks can help you solve problems that are not handled well by the GitHub and BitBucket GUIs.

  2. Don’t Install Kubernetes! by Michael Neale. You don’t have to install Kubernetes – read on to see how to take advantage of it anyway in the most efficient, least time-consuming way.

  3. 7 Code Merge Tools to Make Your Life 7x Easier, by Ben Putano. These open source and proprietary tools will help save you time and improve efficiency in your code merges.

  4. Hexagonal Architecture – It Works, by Ron Kurr. Learn what hexagonal software architecture is and how it helps developers build more resilient systems and make automated testing easier.

  5. Getting Started With Jenkins: The Ultimate Guide, by Rodrigo Moutinho. You’ve heard the buzz about Jenkins for CI/CD. This guide will teach you all the steps (and workarounds) you’ll need.

You can get in on this action by contributing your own knowledge to DZone! Check out our new Bounty Board, where you can claim writing prompts to win prizes! 


Dive Deeper Into DevOps

  1. DZone’s Guide to DevOps: Culture and Process: a free ebook download.

  2. Introduction to DevOps Analytics: Download this free Refcard to discover the value of using historical data to analyze and estimate the probability of success of a new release.


Who’s Hiring?

Here you can find a few opportunities from our Jobs community. See if any match your skills and apply online today!

DevOps Engineer
Grammarly
Location: Kyiv, Ukraine
Experience: 3+ years of experience managing a live production environment, preferably a high-load system. You have a good knowledge and at least some experience with AWS. You know Linux inside and out (or, you know at least some Linux, but can prove that you can learn fast).

Site Reliability Engineer
Scalyr
Location: San Mateo, CA, USA
Experience: 2+ years of experience in running large pieces of infrastructure. Experience with monitoring tools, building reliable applications in a cloud-like environment such as AWS, and concepts like auto-scaling, load balancing, and health checks.

Original Link

Building a Continuous Delivery Pipeline With Git and Jenkins

Jenkins is an automation server which can be used to build, test and deploy your code in a controlled and predictable way. It is arguably the most popular continuous integration tool in use today. The process of automatically building code in stages – and at each stage, testing and promoting it on to the next stage – is called a pipeline.

Jenkins is open source and has an extensive library of well-supported plugins. Not only is Jenkins cross-platform (Win/Mac/Linux), but it can also be installed via Docker, or actually on any machine with a Java Runtime Environment! (Raspberry Pi with a side of Jenkins, anyone?)

Note that there are other continuous integration tools available, including the ones described in this article, as well as my personal favorite, Travis. However, because Jenkins is so common, I want to explore a pattern that often becomes needlessly overcomplicated – setting up a staging pipeline (dev => QA => prod) using a Git repository.

Also note that Jenkins has its own “pipeline” concept (formerly known as “workflows”) that are for long-running, complicated build tasks spanning multiple build slaves. This article strives to keep things as simple as possible using backwards-compatible freestyle jobs. The idea is to use the power and simplicity of Git rather than introduce complexity from – and coupling to – Jenkins.

Review Your Git Workflow

The power of using Git for source control management is most realized when working on a team. Still, I recommend using Git for projects where you are the sole contributor, as it makes future potential collaboration easier – not to mention preserving a thorough and well-organized history of the project with every cloned instance of the repository.

For the purpose of the example we’ll explore here, consider a typical team of 3-8 regular code contributors working in a single Git repository. If you have more than 8 developers on one project, you may want to consider breaking the application into smaller, responsibility-driven repositories.

A common Git workflow in use today is Vincent Driessen’s “GitFlow,” consisting of a master branch, a develop branch, and some fluctuating number of feature, release, and hotfix branches. When I’m working alone on a personal project, I often commit straight to the master branch. But on a large professional endeavor, GitFlow is used to help the code “flow” into the appropriate places at the appropriate times. You can see how Git branches are related to continuous integration and release management in general.

What Is the Goal of a Staging Pipeline?

Nearly every team I’ve worked on uses some variation of a staging pipeline, but surprisingly, no one ever really asks this question. It just feels like something we do because, well, it’s the way it’s supposed to be done.

So what is the goal, anyway? In most cases, a staging pipeline is intended to deploy automatically-built, easily-identifiable, and trustworthy versions of the code that gives non-developers insight into what has been created by the team. Note that I’m not talking about official versions here, just a runnable instance of the code that comes from a particular Git commit.

These non-developers may include technical team members, such as business analysts (BAs), project managers (PMs), or quality analysts (QAs). Or they may include non-technical roles, such as potential customers, executives, or other stakeholders. Each role will have a different set of reasons for wanting this visibility, but it’s safe to assume that these people are not developers and do not build the code on their own machines. After all, developers can run different versions of the code locally whenever and however they like.

Let’s keep this in mind, noting that while Jenkins can be set up for developers to run parameterized builds using manual triggers, doing so does not achieve the stated goal. Just because you can do something doesn’t mean that you should!

Mapping Git Branches to Staging Environments

Now that we understand the purpose of a staging pipeline in general, let’s identify the purpose of each environment. While the needs of each team will vary, I encourage you to embrace the KISS Principle and only create as many environments in your pipeline as needed. Here’s a typical (and usually sufficient) example:

Dev

The purpose of the dev environment is to provide insight into what is currently on the develop branch, or whatever branch is intended to be in the next “release” of the code.

QA (aka Staging)

The purpose of the QA environment is to provide a more stable and complete version of the code for the purpose of QA testing and perhaps other kinds of approval.

Prod

The purpose of the prod environment is to host production-ready code that is currently on the master branch (or whatever branch you use for this purpose). This represents what can be made available to users, even if the actual production environment is hosted elsewhere. The code in this branch is only what has already been approved in the QA environment with no additional changes.

While developers can check out and run code from any branch at any time, these environments represent trustworthy instances of that codebase/repository. That’s an important distinction because it eliminates environmental factors such as installed dependencies (i.e. NPM node_modules, or Maven JARs), or environment variables. We’ve all heard the “it works on my machine” anecdote. For example, when developers encounter potential bugs while working on their own code, they use the dev environment as a sanity check before sounding the alarm:

While the dev and prod environments are clearly linked to a Git branch, you might be wondering about the QA environment, which is less clear. While I personally prefer continuous deployments that release features as soon as they’re ready, this isn’t always feasible due to business reasons.

The QA environment serves as a way to test and approve features (from develop) in batch, thus protecting the master branch in the same way that code reviews (pull requests) are meant to protect the develop branch. It may also be necessary to use the QA environment to test hotfixes – although we certainly hope this is the exception, not the rule. Either way, someone (likely the quality analyst) prevents half-baked code from making its way into the master branch, which is a very important role!

Since the QA environment is not tied to a branch, how do you specify what code should be deployed, and where it should come from?

In my experience, many teams overlook the tagging portion of GitFlow, which can be a useful tool in solving this problem. The QA environment represents a release candidate, whether you officially call it that or not. In other words, you can specify the code by tagging it (i.e. 1.3.2-rc.1), or by referencing a commit hash, or the HEAD of any branch (which is just a shortcut to a commit hash). No matter what, the code being deployed to the QA environment corresponds to a unique commit.

It’s important that the person who is testing and approving the code in the QA environment is able to perform these builds on their own, whenever they deem it necessary. If this is a quality analyst and they deploy to the QA environment using commit hashes, then they need a manual, parameterized Jenkins job and read-only access to the repository. If, on the other hand, they don’t/shouldn’t have access to the code, a developer should create the tag and provide it (or the commit hash, or the name of the branch). Personally, I prefer the former because I like to minimize the number of manual tasks required of developers. Besides, what if all of the developers are in a meeting or out to lunch? That never happens… right?

After it’s approved, that exact version of the code should be tagged with a release number (i.e. 1.3.2) and merged into master*. Commits can have many tags, and we hope that everything went well so that the version of the code that we considered to be a release candidate actually was released. Meaning, it makes perfect sense for a commit to be labeled as 1.3.2-rc.1 and 1.3.2. Tagging should be automatic if possible.

* Note that my recommendation differs from Driessen’s on this point, as he suggests tagging it after merging. This may depend on whether your team merges or rebases. I recommend the latter for simplicity.

How to Make Staging Environments More Trustworthy

You can make your environments even more trustworthy in the following ways:

  • Follow a code review process where at least one other team member must approve a pull request
  • Configure build and unit test enforcement on all pull requests, so it is impossible to merge code that would “fail” (whatever that means for your team/application)
  • Establish branch protection in your Git repository so users cannot accidentally (or intentionally) push code directly to environment-related branches in the team repository, thus circumventing the review process
  • Set up a deployment hook, so that a Jenkins build job is automatically triggered when code is committed (or merged in) to the corresponding branch. This may make sense for the develop branch!
  • Be cautious about who has access to configure Jenkins jobs; I recommend two developers only. One person is too few due to the Bus Factor, and more than two unnecessarily increases the likelihood of a job being changed without the appropriate communication or consensus.
  • Display the version of the code in the application somewhere, such as the footer or in the “about” menu. (Or, put it in an Easter Egg if you don’t want it visible to users.) The way you obtain the version, specifically, will depend greatly on the language of your app and the platform you use to run it. A minimal sketch follows this list.
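
As a purely hypothetical sketch of that last point (none of this comes from the jobs described in this article): assume the build job writes the Git tag it built, such as 1.3.2-rc.1, into a VERSION file shipped next to the application code. A PHP footer template could then read and display it:

<?php
// footer.php (hypothetical): display the build version, assuming the build job
// wrote the tag it built (e.g. "1.3.2-rc.1") into a VERSION file next to this file.
$versionFile = __DIR__ . '/VERSION';
$version = is_readable($versionFile)
    ? trim(file_get_contents($versionFile))
    : 'unknown';
?>
<footer>Build: <?= htmlspecialchars($version) ?></footer>

Any language works equally well here; the important part is that the displayed value comes from the build itself, not from a hand-edited constant.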

Creating a QA Build Job From a Commit Hash

I have now sufficiently nagged you about all the ways you should protect your code. (You’ll thank me later, I promise!) Let’s get down to the business of configuring a Jenkins job for a single Git repository.

This job will fit into the middle of the dev => QA => prod pipeline, helping us deploy code to the QA (aka staging) environment. It will allow a quality analyst to build and tag the code given a commit hash and tag name.

This build should:

  1. Check out the specific commit (or ref).
  2. Build the code as usual.
  3. Tag the commit in Git.
  4. Push the tag to the origin repo.
  5. (Optional, but likely) Deploy it to a server.

Notice that order matters here. If the build fails, we certainly don’t want to tag and deploy it. Steps 2 and 5 are fairly standard for any Jenkins job, so we won’t cover those here.

One-Time Setup

Since Jenkins needs to push tags to the origin repo, it will need a basic Git configuration. Let’s do that now. Go to Jenkins > Manage Jenkins > Configure System > Git plugin. Enter a username and email. It doesn’t really matter what this is, just be consistent!

Create a New Job

  1. Create a new freestyle project with a name of your choosing (for example, “QA-staging”)
  2. Under General, check “This project is parameterized”. Add two parameters, as shown below. The “Default Value” of COMMIT_HASH is set to “refs/heads/master” for convenience since we just want to make sure the job has a valid commit to work with. In the future, you may wish to set this to “refs/heads/develop”, or clear this field entirely.
  3. Under Source Code Management, choose “Git”. Add the URL of the repository and the credentials. (Jenkins will attempt to authenticate against this URL as a test, so it should give you an error promptly if the authentication fails.) Use the commit hash given when the job was started by typing ${COMMIT_HASH} in the Branch Specifier field.
  4. Under Post-build Actions, add an action with the type “Git Publisher”. Choose “Add Tag” and set the options as shown below. We check both boxes, because we want Jenkins to do whatever it needs to do in the tagging process (create or update tags as needed). ${TAG} is the second parameter given when the job was started.

When you run the job, you’ll be prompted to enter a commit hash and tag name. Here, you can see that I’ve kicked off two builds: The first build checked out and tagged the latest commit on master (you’d probably want /refs/heads/develop if you’re using GitFlow, but you get the idea).

The second build checked out an older commit, built it, and tagged it with “test”. Again, you’d probably be building and tagging later versions of the code, not earlier ones, but this proves that the job is doing exactly what it’s told!

The first build, the HEAD of the master branch, succeeded. It was then tagged with “0.0.1” and pushed to the origin repo. The second build, the older commit, was tagged as well!

Conclusion

Git and Jenkins are both very powerful, but with great power comes great responsibility. It’s common to justify an unnecessary amount of complication in a build pipeline simply because you can. While Jenkins has a lot of neat tricks up his sleeve, I prefer to leverage the features of Git, as it makes release management and bug tracking significantly easier over time.

We can do this by being careful about the versions of code that we build and tagging them appropriately. This keeps release-related information close to the code, as opposed to relying on Jenkins build numbers or other monikers. Protecting Git branches reduces the risk of human error, and automating as many tasks as possible reduces how often we have to pester (or wait on) those humans.

Finally, processes are necessary when working on a team, but they can also be a drag if they are cumbersome and inflexible. My approach has always been: if you want people to do the right thing, make it the easy thing. Listen to your team to detect pain points over time, and continue to refine the process with Git and Jenkins to make life easier.

Original Link

Tips to Help PL/SQL Developers Get Started With CI/CD

In most ways, PL/SQL development is just like working with any other language, but sometimes it can be a little different. If you’d like to create a Continuous Deployment pipeline for your PL/SQL application, here is a short list of tips to help you get started.

Work From Files

Do not use the Database as your source code repository. If you are making changes to your application in a live database, stop right now and go read PL/SQL 101: Save your source code to files by Steven Feuerstein.

Now that you’re working from files, those files should be checked into…

Version Control

There are a lot of version control applications out there; the most popular right now is probably Git. One advantage of using Git is that you work in your own local copy of the repository, making frequent commits that only you see. If you’re working in your own personal database, you could compile your changes to the database at this point.

If you’re working in a database shared with other people and you’re ready to compile your code, pull from the central repository to get any changes since your last pull. Handle any merge conflicts, then push your changes back up to the shared repository. After the push, you can compile your code to the database. This helps ensure that people don’t overwrite each other’s changes.

Making frequent small commits will help keep everything running smoothly.

What About Your Schema Objects?

Your schema objects, such as tables and views, are as much a part of your application as any other component and changes to these objects should be version controlled as well. The process of keeping track of schema changes is called Schema Migration and there are open source tools you can use such as Flyway and Liquibase.

Unit Testing

In any application, it’s smart to have good test coverage; if your goal is Continuous Deployment, it’s critical. A unit test is when you test a single unit of code, which in PL/SQL may be a function or procedure.

Unit tests typically follow these steps (a short illustrative sketch follows the list):

  1. Set up the environment to simulate how it will be when the application is deployed. This might include changing configuration settings and loading test data.
  2. Execute the code unit.
  3. Validate the results.
  4. Clean up the environment, resetting it to the state it was in before running the tests.
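
These four steps are the same in any xUnit-style framework. Purely as an illustration (shown in PHP with PHPUnit rather than PL/SQL, and using a throwaway temp file as the “environment”), a test with this shape might look like:

<?php
// Illustration only: the setup / execute / validate / teardown structure,
// expressed as a PHPUnit test case.
use PHPUnit\Framework\TestCase;

class StoredValueTest extends TestCase
{
    private $file;

    protected function setUp(): void
    {
        // 1. Set up the environment: create the test data the test relies on.
        $this->file = tempnam(sys_get_temp_dir(), 'ut');
        file_put_contents($this->file, "42");
    }

    public function testReadsStoredValue(): void
    {
        // 2. Execute the code unit.
        $value = (int) file_get_contents($this->file);

        // 3. Validate the results.
        $this->assertSame(42, $value);
    }

    protected function tearDown(): void
    {
        // 4. Clean up, resetting the environment to its previous state.
        unlink($this->file);
    }
}

utPLSQL tests follow the same four-step shape; only the syntax changes.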

utPLSQL is a great tool for unit testing your PL/SQL applications. You will write a package of tests for each ‘unit’ of your application which should test all of the required functionality and potential errors. utPLSQL is an active open source project with an impressive set of features. Check out the docs to get started.

Build

Building a database application usually consists of running the change scripts in a specific order. It’s common to create a master script that executes the others in order of how the objects depend on each other. However, if the master script simply executes the other scripts, you will need to create additional scripts to track and verify changes, and more scripts to give you the ability to rollback the changes if/when there’s a problem.

There are build tools such as Gradle and Maven that can easily execute your scripts. But you’ll still need to create the additional control scripts. If you use a Schema Migration tool it should include a lot of these additional functions without having to write extra scripts. For an example, check out Dino Date which has a Liquibase migration included.

How to Handle the PL/SQL Code

You could include your PL/SQL code in a Schema Migration changeset but adding schema migration notation to your PL/SQL introduces another layer of complexity and potential errors.

In the Dino Date runOnChange directory, you will find examples of setting up Liquibase changesets that watch for changes in the files of objects that you would rather keep ‘pure’. When you run a migration, if the file has changed Liquibase will run the new version.

In a shared database environment, you should execute a schema migration after you pull/merge/push your changes into your version control system.

Automate!

All of these pieces can be tied together and automated with an automation server such as Hudson or Jenkins (both are open-source) to create a build pipeline.

A simple (maybe too simple) build pipeline using the above tools could follow these steps:

  1. Developer makes a change and pushes it to the shared Git repository.
  2. Hudson notices the repository has changed and triggers the build pipeline.
  3. The project is pulled from Git.
  4. Liquibase deploys the changes to a test database.
  5. utPLSQL is triggered to run the unit tests.
  6. Liquibase deploys the changes to the production database.

Other Useful Tools

  • Edition Based Redefinition[pdf] can be used to deploy applications with little to no downtime.
  • Oracle Developer Cloud Service comes with a ton of pre-configured tools to help with almost every aspect of your development process.
  • Gitora can help you version control your application if you are not able to move out to files.

Original Link

How to Install a Jenkins Plugin in 5 Minutes

The main thing that makes Jenkins useful to any kind of project is its plugins. They are extensions beyond the core that give superpowers to your project in the right dose. This keeps the Jenkins core light and enables projects in Java, .NET, PHP, and many other technologies to be automated.

This article will cover how to identify plugins already installed and how to install new ones, search, update and so on. To install Jenkins and understand the basics of the initially suggested plugins, check Getting Started with Jenkins – The Ultimate Guide.

Note that a new Jenkins version (2.107.2) is available but this article will keep the 2.73.3 version used in the previous articles.

Manage Plugins

After logging in, there is a Manage Jenkins menu on the left. This is the place to set up any specific configuration for your Jenkins. If you are using version 2.73.3, you may have noticed the big warning to update to the most recent Long-Term Support (LTS) version.

The menu Manage Plugins will be the only place to care about in this article.

Installed Plugins

Jenkins Manage Plugins

Following the standard installation, the Installed tab will have a bunch of plugins. If none of the suggested plugins were marked for installation, this tab will be empty.

If you use Jenkins only as a kind of cron, scheduling tasks in the background, no plugins are necessary. But to automate even the simplest project, you will need at least a version control system adapter, such as SVN or Git, to get your source code.

Not all plugins can be uninstalled directly (the button in the last column). Some of them are dependencies of others, so Jenkins will show a message for the ones that can’t be uninstalled. The same applies to disabling plugins (the checkbox in the first column).

Jenkins Uninstalling or Disabling Plugins

After uninstalling or disabling any plugin, Jenkins needs a restart to apply the change. You may need to scroll down for a while to reach the end of the page due to the number of plugins. Make sure that no job is running at that moment.

Restarting Jenkins

Searching, Installing, Updating…

Searching for and installing plugins is a pretty simple task. You just need to access the Available tab, where Jenkins lists all available plugins separated by section. The easiest way is to use the filter field: type whatever you want and mark as many plugins as you like; Jenkins will keep everything you marked. When you are done, just click Install without restart to apply your selection. If you want deeper details on each plugin, check the official Jenkins Plugins page.

Installing without restart is an old feature to avoid rebooting Jenkins so you can test new plugins faster (see this article from Kohsuke). But you can also restart if you prefer.

Since the Jenkins community is quite active, plugins receive constant updates to improve features or to fix bugs. The Updates tab will always warn you about the plugins that need updates; sometimes a Jenkins update will be requested too. In the same way as uninstalling and disabling, updating plugins needs a Jenkins restart to apply the modifications.

Advanced

As the tab name says, this is an advanced topic that this article will not cover, but basically it lets you upload plugins manually, change the URL that Jenkins checks for updates, and configure proxies.

What Next

Time to play! Dealing with Jenkins plugins is very important during the automation process. Many of your problems may already be solved by plugins; you only need to install them. A very good example is checking out your project using Git. There is no need to create a shell script to clone it; just use the Git plugin and gain many other features for free.

Here is a very simple action to put everything into practice:

  • Install Jenkins without the suggested plugins (use Docker to speed up the process if you want);
  • Take one or two plugins. Use the list from the Getting Started with Jenkins – The Ultimate Guide article to help in the process;
  • Install only the ones that make sense to your project (maybe you just need GIT or Subversion initially).

Done! You have just learned how to deal with any plugin on Jenkins. Now, in less than five minutes, you have the skill to search and install the desired plugin.

If you have any problem during this process leave a comment below.

Let’s automate!

P.S.: The Cyborg Developer initiative offers a FREE email course about software automation in 3 steps. You can subscribe here. You will also receive tips and tricks about automation directly to your mailbox (relax, I hate spam as much as you do). Follow me on Twitter @rcmoutinho.

Original Link

Back to the Roots: Towards True Continuous Integration (Part 1)

In this article, I would like to show you what many people believe CI is, what is true Continuous Integration, and what is not CI. Also, I will give you some examples to better understand it. 

What Is CI?

CI (continuous integration) is a software development practice in which a continuous integration server polls a version control repository, builds an artifact, and validates the artifact with a set of defined tests. It is a common practice for most enterprises and individuals… and this is not the true Continuous Integration definition; sorry for the joke.

What Is True Continuous Integration?

True Continuous Integration is not simply some kind of “Jenkins/Travis/Go/Teamcity” that polls the Git repository of the project, compiles it, and runs a bunch of tests against the artifact. In fact, this is the least interesting part of CI, which is not a technology (like Jenkins) but an agile practice created by Grady Booch and adopted and prescribed by the Extreme Programming methodology.

As an analogy with another Extreme Programming technique: TDD is not about unit testing (although it uses unit testing), but about obtaining feedback as soon as possible to speed up development cycles (which happens to be implemented through a concrete use of unit testing).

With CI, software is built several times a day (ideally, every few hours) – every time a developer integrates code into the mainline (which should be often) in order to avoid “integration hell” (merging code from different developments at the end of a development iteration). CI avoids this “integration hell” by integrating code as soon as possible and forcing team members to see what other developers are doing, so the team can make shared decisions about new code.

The methodology states that every team member integrates into the mainline as often as possible. Every contribution to the VCS (Version Control System) is potentially a release, so every contribution should not break functionality and should pass all known tests.

A CI server will build an artifact from the latest sources of the mainline and run all known tests against it. If there is a failure, the CI server will warn all members of the team about the state of the build (RED). The team’s top priority is to keep the build in its default state (GREEN).

What Is Not CI?

Once we realize that CI is far more than the simple use of a CI server, we can state that:

  • Working with feature branches and having a CI server checking master is not CI.
  • Working with pull requests is not CI.

It’s important to note that I’m not judging in terms of good/bad practices; both feature branches and pull requests are simply other methodologies different than CI.

Both feature branches and pull requests mandate that the work be done in a branch other than master (the one monitored by the CI server), which leads to longer cycles before changes can be merged into master.

Feature branches and pull requests rely heavily on team resource/task planning to avoid refactors in one task (branch) that affect development in another task (branch), minimizing the threat of “integration hell.”

An example of integration hell: we have the following code, two classes that wrap REST calls to an external API:

APIUsersAccessor
class APIUsersAccessor
{
    const USERS_API_PATH = "/users";

    /** @var string */ private $host;
    /** @var string */ private $username;
    /** @var string */ private $password;

    public function __construct(string $host, string $username, string $password)
    {
        $this->host = $host;
        $this->username = $username;
        $this->password = $password;
    }

    public function getAllUsers(): array
    {
        $data = array(
            "email" => $this->username,
            "password" => $this->password
        );
        $headers = array(
            "Content-Type" => "application/json;charset=UTF-8"
        );
        $request = \Requests::GET($this->host.self::USERS_API_PATH, $headers, json_encode($data));

        return json_decode($request->body, true);
    }
}

APIProductsAccessor
class APIProductsAccessor
{
    const PRODUCTS_API_PATH = "/products";

    /** @var string */ private $host;
    /** @var string */ private $username;
    /** @var string */ private $password;

    public function __construct(string $host, string $username, string $password)
    {
        $this->host = $host;
        $this->username = $username;
        $this->password = $password;
    }

    public function getAllProducts(): array
    {
        $data = array(
            "email" => $this->username,
            "password" => $this->password
        );
        $headers = array(
            "Content-Type" => "application/json;charset=UTF-8"
        );
        $request = \Requests::GET($this->host.self::PRODUCTS_API_PATH, $headers, json_encode($data));

        return json_decode($request->body, true);
    }
}

As you can see, both blocks of code are very similar (classic code duplication). Now we are going to start two features on two development branches. The first one must add a telephone number to the request in the Users API; the second one must create a new API to query all cars available at a store. This is the code in the Users API after adding the telephone number:

APIUsersAccessor (with telephone)
class APIUsersAccessor
{
    ....

    public function __construct(string $host, string $username, string $password)
    {
        .......
        $this->telephone = $telephone;
    }

    public function getAllUsers(): array
    {
        $data = array(
            "email" => $this->username,
            "password" => $this->password,
            "tel" => $this->telephone
        );
        .....
    }
}

The developer has added the missing field and included it in the request. The developer on branch 1 expects this diff when merging with master:

true continuous integration

The problem is that developer 1 does not know that developer 2 has refactored the code to reduce duplication (because CarAPI is too similar to UserAPI and ProductAPI), so the code in developer 2’s branch looks like this:

BaseAPIAccessor
abstract class BaseAPIAccessor
{
    private $apiPath;

    /** @var string */ private $host;
    /** @var string */ private $username;
    /** @var string */ private $password;

    protected function __construct(string $host, string $apiPath, string $username, string $password)
    {
        $this->host = $host;
        $this->username = $username;
        $this->password = $password;
        $this->apiPath = $apiPath;
    }

    protected function doGetRequest(): array
    {
        $data = array(
            "email" => $this->username,
            "password" => $this->password
        );
        $headers = array(
            "Content-Type" => "application/json;charset=UTF-8"
        );
        $request = \Requests::GET($this->host.$this->apiPath, $headers, json_encode($data));

        return json_decode($request->body, true);
    }
}
concrete APIs
class ApiCarsAccessor extends BaseAPIAccessor
{
    public function __construct(string $host, string $username, string $password)
    {
        parent::__construct($host, "/cars", $username, $password);
    }

    public function getAllCars(): array
    {
        return $this->doGetRequest();
    }
}

class APIUserAccessor extends BaseAPIAccessor
{
    public function __construct(string $host, string $username, string $password)
    {
        parent::__construct($host, "/users", $username, $password);
    }

    public function getAllUsers(): array
    {
        return $this->doGetRequest();
    }
}

class APIProductsAccessor extends BaseAPIAccessor
{
    public function __construct(string $host, string $username, string $password)
    {
        parent::__construct($host, "/products", $username, $password);
    }

    public function getAllProducts(): array
    {
        return $this->doGetRequest();
    }
}

So the real merge will be:

true continuous integration

Basically, we will have a big conflict at the end of the development cycle when we merge branch 1 and branch 2 into the mainline. We will have to do a lot of code reviews, which will involve an archaeological process of reviewing all past decisions in the development phase to see how to merge the code. In this concrete case, the telephone number will also involve some kind of rewrite.

Some will argue that developer 2 should not have refactored because planning stated that he had to develop only CarAPI, and planning stated clearly that there should be no collision with UserAPI. Well, yes… but to make this kind of extreme up-front planning work, there would have to be thorough planning of all resources and a lot of architectural meetings involving developer 1 and developer 2.

In these architectural meetings, developer 1 and developer 2 should have realized that there was some code duplication, and they would have to decide either to intervene and replan, or to do nothing and increase technical debt, moving the refactor decision to future iterations. This doesn’t sound very agile, right? The point is that it is difficult to mix agile and non-agile practices.

If we use feature branches/pull requests, a fully planned iterative process works better; if we’re doing agile, continuous integration works better. Again, I’m not stating that feature branches/pull requests are good or bad tools; I’m simply stating that they are non-agile practices.

Agile is all about communication, continuous improvement, and feedback as soon as possible. In the agile approach, developer 1 will be aware of developer 2’s refactoring from the beginning, and will be able to start a dialog with developer 2 to check whether the proposed abstraction is the right one to accommodate the addition of a telephone number.

OK….but wait! I need a feature branch! What if not all features are deliverable at the end of an iteration?

Feature branches are a solution to a problem – what to do if not all code is deliverable at the end of an iteration – but it is not the only solution.

CI has another solution to this problem – “feature toggles.” Feature branches isolate the work-in-progress feature from the final product via a branch (the WIP lives in a separate copy of the code); feature toggles isolate the feature from the rest of the code using… code!

The simplest feature toggle one can write is the dreaded if-then-else; it is the example you will find on most sites when you google “feature toggle.” It is not the only way to implement one, though: as with any other conditional logic, you can replace it with polymorphism (a rough sketch of that appears at the end of this section).

In this example in Slim, we are creating a new REST endpoint in the current iteration that we do not want to be live in production yet. We have this code:

code prior the toggling
<?php
require '../vendor/autoload.php';

use resources\OriginalEndpoint;

$config = [
    'settings' => [
        'displayErrorDetails' => true,
        'logger' => [
            'name' => "dexeus",
            'level' => Monolog\Logger::DEBUG,
            'path' => 'php://stderr',
        ],
    ],
];

$app = new \Slim\App($config);

$c = $app->getContainer();
$c['logger'] = function ($c) {
    $settings = $c->get('settings');
    $logger = LoggerFactory::getInstance($settings['logger']['name'], $settings['logger']['level']);
    $logger->pushHandler(new Monolog\Handler\StreamHandler($settings['logger']['path'], $settings['logger']['level']));
    return $logger;
};

$app->group("", function () use ($app) {
    OriginalEndpoint::get()->add($app); // we are registering the endpoint in Slim
});

We can define the feature toggle with a simple if clause:

if clause feature toggle
<?php
....

$app->group("", function () use ($app) {
    OriginalEndpoint::get()->add($app);
    if (getenv("APP_ENV") === "development") {
        // we register the new endpoint only if the environment is set to development
        // (dev machines should have the APP_ENV environment variable set to "development")
        NewEndpoint::get()->add($app);
    }
});

We can refine our code to better express what we’re doing and to support several environments (maybe for an A/B testing situation?):

configuration map feature toggle
<?php
......

$productionEnvironment = function ($app) {
    OriginalEndpoint::get()->add($app);
};
$aEnvironment = function ($app) use ($productionEnvironment) {
    $productionEnvironment($app);
    NewEndpointA::get()->add($app);
};
$bEnvironment = function ($app) use ($productionEnvironment) {
    $productionEnvironment($app);
    NewEndpointB::get()->add($app);
};
$develEnvironment = function ($app) use ($productionEnvironment) {
    $productionEnvironment($app);
    NewEndpointInEarlyDevelopment::get()->add($app);
};

$configurationMap = [
    "production"  => $productionEnvironment,
    "testA"       => $aEnvironment,
    "testB"       => $bEnvironment,
    "development" => $develEnvironment
];

$app->group("", function () use ($app, $configurationMap) {
    $configurationMap[getenv("APP_ENV")]($app);
});

The advantage of this technique is consistent with the main goal of CI (having constant feedback about code integration/validation and collisions with other developments): the code in progress is developed and deployed to production, and we get constant feedback about how the new feature integrates with the rest of the code, reducing the risk of enabling the feature once it’s done.

It is a good practice to remove this kind of toggle from code once a new feature has been stabilized in order to avoid adding complexity to the codebase.
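
If you prefer the polymorphic flavor mentioned earlier to conditionals and configuration maps, here is a rough sketch of how it could look for the same Slim example (the EndpointRegistrar interface and its implementations are hypothetical names, not code from the original project):

<?php
// Rough sketch only (hypothetical names): the environment decision is taken
// once, behind an interface, instead of being scattered through the code as
// if/else checks.
interface EndpointRegistrar
{
    public function register(\Slim\App $app);
}

class ProductionEndpoints implements EndpointRegistrar
{
    public function register(\Slim\App $app)
    {
        OriginalEndpoint::get()->add($app); // only the stable endpoints
    }
}

class DevelopmentEndpoints implements EndpointRegistrar
{
    private $production;

    public function __construct(EndpointRegistrar $production)
    {
        $this->production = $production;
    }

    public function register(\Slim\App $app)
    {
        $this->production->register($app); // everything production has...
        NewEndpoint::get()->add($app);     // ...plus the work in progress
    }
}

// Chosen once at bootstrap; the rest of the code never asks about APP_ENV.
$registrar = getenv("APP_ENV") === "development"
    ? new DevelopmentEndpoints(new ProductionEndpoints())
    : new ProductionEndpoints();

$app->group("", function () use ($app, $registrar) {
    $registrar->register($app);
});

The trade-off is a few extra classes for a very small toggle, so this style tends to pay off for toggles that live longer or carry more behavior than a single endpoint registration.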

We have arrived at the end of this first part of true Continuous Integration. We have rediscovered that continuous integration is “not only” using a CI server but adopting a practice with perseverance and discipline. In the second part, we will talk about how to model a good CI flow. 

Original Link

The Rise of DevOps Engineers in the Current Market

The current demand for DevOps engineers in the market is rapidly increasing. This is primarily because these engineers and their operations have resulted in great success for companies all around the world. Business organizations with such engineers are experiencing overwhelming returns compared to the firms that do not employ these professionals. The engineers responsible can position code as much as 30 times faster, and the success rate of this code is doubled, helping a company to gain that competitive edge.

However, the process of becoming an expert DevOps engineer is highly complex. In order to be successful in this field, an aspiring individual must be well prepared and organized. Expertise only comes with extreme hard work and by having a good research background, which serves as a platform for tech-savvy newcomers in particular.

How Do You Initiate Your Career as a DevOps Engineer?

The most important factor an aspiring DevOps engineer should consider is to comprehend the fundamental principles of DevOps. People tend to disregard this factor and gravitate towards mastering the various tools of DevOps learning. Both are important concepts, but without a strong foundation achieved with the fundamentals of DevOps, an individual can never succeed in mastering this career.

I have interacted with several candidates who only have interest in understanding DevOps tools rather than concentrating on DevOps and its application. Another major misconception people have is that they consider DevOps to be only related to automation. Tools such as Jenkins, Cron, and Hudson existed long before the popularity of DevOps, which makes the prior argument redundant.

I also had the opportunity to interview various candidates for the position of DevOps engineer. The analysis that intrigued me the most was whether or not the applicant has a complete understanding of the DevOps concept. Thus, I always used to ask this particular question: “How would you explain the concept of DevOps to a person who has no technical background?”

A difficult question indeed. But the people who have a true interest in this subject have always managed to come up with the correct answer. The usual answer is that DevOps is an automated framework that involves integration and strategic placement.

In technical terms, the answer stands true, but it never satisfied me. I always look to answer questions and solve queries in a practical manner. Mechanical definitions never fascinated me, and the same goes for this field. The study will only make sense if a person has the ability to gather knowledge about the origin of the concept. Knowing the reason DevOps came into existence and why there is a need for such functionality in the current economy will enable you to understand this postulate in a more efficient manner. It is also critical that an individual or engineer comprehends the challenges a team faces while operating in this system. Only then will the theory and fundamentals of this study begin to make sense.

To conclude the question of where and how to begin your DevOps career: the most important thing is to emphasize the principles and fundamentals before moving on to mastering the tools. The most effective way I can suggest is to continuously research the topic. Gain valuable insight and experience. This can also be achieved by going through various books, in particular the book titled “The Phoenix Project.” The content of the book might be fictional, but its lessons definitely apply in the real world. I would also recommend this book to business officials who are in transition mode and are considering incorporating DevOps into their current operational setup.

Thinking From the Point of View of a Developer

An individual must be aware when the process of coding, re-engineering, testing, and scripting is being carried out. Gaining knowledge is one thing, and applying it on the field of work is another. Newbies reading this do not need to worry. You need not be an expert developer or programmer with complete knowledge of DevOps tools. You just have to think smart, on your feet, just as an experienced engineer would. Putting yourself in the shoes of a developer helps to solve queries in an efficient manner.

Personally, whenever faced with any challenging situation, I always used to count on my basic knowledge. Again, having a strong foundation helps. Learning the basic programming languages along with a complete command of the fundamentals will help you down the line.

Knowing what a developer is most likely to do while developing software or framing and integrating code into an existing setup is the key to success. This aids in resolving vital issues, which in turn, results in the positioning of code in a productive manner. Therefore, it is essential to know how the operation is to be carried out manually. Only then will the process seem easier when the tools of DevOps are incorporated in the long run.

Gain Expertise in Operations Before Stepping Into DevOps

I cannot emphasize this point enough. Gaining complete expertise in the art of operations and system administration before jumping into the field of DevOps is of absolute importance – a vital point for newcomers and aspiring engineers to note. DevOps exists because of the advancement of the information technology sector. The survival of IT was dependent on two factors, namely sysadmins and Ops (engineers who were skilled in several coding and scripting languages). With the help of the sysadmins and Ops, various systems, such as Linux and Windows, were managed. They also helped to set up and manage various web servers, along with the strategic placement of code.

The sysadmins and ops engineers were proficient in shell and used scripts to build and rework automated functions. All this was carried out way before DevOps was introduced. Now, engineers believe that such proficiency is no longer needed and therefore do not have complete control of these operational tools. This is a misconception that candidates should get out of their heads. Progress is made one step at a time. Similarly, one cannot master DevOps if the concept and fundamentals of Ops are not clear. Thus, an individual must strive to become a specialist in administration/operations first, and then set sail on the DevOps journey. The following list will help you gain more knowledge of DevOps engineering:

  • Learn how to operate Linux.

  • Start to learn various scripting languages such as Ruby, Perl, Bash, Python, etc.

  • Study web servers and the requirements needed to get them functioning.

  • Practice monitoring for the development of various infrastructures and software.

  • Have a firm grip on the principles and fundamentals of networking.

  • Learn about the development of servers manually without tools.

  • Practice RDBMS, ext, and NFS systems.

This may seem like a grueling task, but it is bound to pay dividends in the long run. You will develop your skills from the roots up so that tackling any challenges in the real-time work atmosphere will become easier. Such is the importance of having operational and administrative experience before learning about DevOps.

Learning How to Effectively Manage Code

Let us get one thing straight: in order to become a DevOps engineer, an individual must first have complete control of a Distributed Version Control System. DVCS umbrella tools such as Mercurial are of critical importance for engineers. DevOps initially became popular due to tools like Git and Mercurial. The old-school technique of using FTP to transfer code is no longer followed.

The dynamic features of these tools helped to bring DevOps into the present market. Thus, it is vital that you gain experience and learn how to use tools such as Git and Mercurial as you are likely to rely on them on a regular basis.

Develop Jenkins as Your Long-Term Server

Let us begin this discussion by talking about Jenkins. It can be called a solution to CI and CD. The purpose of CI is to collect all the code originating from various developers and transfer it to a single system. This is carried out numerous times in order to avoid lag or downstream problems. CD, on the other hand, assembles all this collected code and merges it together, which then is used in production.

Jenkins has existed in the IT sector for a very long time. It captured a considerable portion of the market even before DevOps came into the picture. The incorporation of Jenkins as a tool with the Ops team proved to be very successful in the long run, as the servers were more stable, which also featured automated functions. This eventually led to the domination of Jenkins in the CI and CD segments. There are other tools being used by DevOps engineers with regards to CI and CD. However, my personal preference would always be Jenkins, as it is easy to operate and user-friendly.

Efficient Configuration Management Is Key

The first rule of being a good DevOps engineer is never to be shy when it comes to getting your hands dirty. Management of the identified tools present is extremely significant. This is where we link the factors we talked about before. With unclear knowledge of operations, administration, and DevOps fundamentals, an engineer can never manage configurations efficiently.

When I started to learn about DevOps, I was very much intrigued by configuration management tools such as Chef and SaltStack. I was absolutely fascinated with how configuration management and its tools can help an engineer to manage the infrastructure present as code. Normally, OS installation was carried out manually, which made it susceptible to errors. With the incorporation of the CM tool, such errors are mitigated.

Constant Monitoring in DevOps

This is one of the factors which I personally feel should develop into a habit. This aspect has been part of the IT sector long before the existence of DevOps. Such tools help you to keep track and monitor a system’s records and its resources, in turn increasing productivity, efficiency, and profits in the long run.

Enter Virtualization

This concept has been present in the IT industry for well over a decade. What DevOps did differently was to provide engineers the means to configure, manage, and develop machines on a virtual platform. All this and more is made possible by the existence of tools like Packer. Another sphere in demand is containers, which compared to traditional virtualization, are easier to operate and work with. Containers are popular simply because of Docker, which I strongly recommend for the management of containers and virtualization.

Upgrade to the Cloud

With the incorporation of CM tools, the current market demanded a move into the cloud system. The management requirement of cloud infrastructure initiated a higher demand for DevOps and its services. Engineers must consequently have a clear understanding of cloud providers and their services. These cloud service providers also generate a quality certification which will surely add to your DevOps certification.

As I mentioned, to become a DevOps engineer requires a significant level of determination and hard work, as it is no easy task. Newbies and people with little or no experience in the technical field aspiring to be successful in this genre have an even more difficult task at hand. However, growth and success is only achieved with patience and perseverance. Therefore, do not be afraid to make mistakes, as you can only learn from them.

Original Link
