CI/CD With Kubernetes and Helm

In this blog, I will discuss the implementation of a CI/CD pipeline for microservices that run as containers and are managed by Kubernetes and Helm charts.

Note: Basic understanding of Docker, Kubernetes, Helm, and Jenkins is required. I will discuss the approach but will not go deep into its implementation. Please refer to the original documentation for a deeper understanding of these technologies.

Original Link

Why and How to Use Git LFS

Although Git is well known as a version control system, the use of Git LFS (Large File Storage) is often unknown to Git users. In this post I will try to explain why and when Git LFS should be used and how to use it. The source code of this post can be found on GitHub.

What Is It?

Git LFS is an open-source project and an extension to Git. Its goal is to let you work more efficiently with large and binary files in your repository.

Original Link

Adding a GitHub Webhook in Your Jenkins Pipeline

Have you ever tried adding a GitHub webhook in Jenkins? In this blog, I will demonstrate the easiest way to add a webhook to your pipeline.

First, what is a webhook? The concept is simple: a webhook is an HTTP callback, an HTTP POST request sent to a configured URL as a simple event notification when something happens.

Original Link

The 10 Best DevOps Tools for 2018

The integration of Development and Operations brings a new perspective to software development. If you’re new to DevOps practices or looking to improve your current processes, it can be a challenge to know which tool is best for your team.

We’ve put together this list to help you make an informed decision on which tools should be part of your stack. So, let’s take a look at the 10 best DevOps tools, from automated build tools to application performance monitoring platforms.

Original Link

Software Engineering Daily — GitOps Key Takeaways

In a recent episode of Software Engineering Daily, Alexis Richardson spoke with Jeffrey Meyerson and recorded a podcast on GitOps. Here is a small excerpt from that interview.

When did convergence start to happen around the ideas that became GitOps?

Original Link

VSTS Name Change to Azure DevOps: Effects on Git Repositories

As I’ve said in the past, it is super easy to build a VSTS Build (now Azure DevOps Pipeline) to keep two repositories in sync. In that article, one of the steps is pushing the new code to the destination repositories with a URL like https://$(token), to automatically include a token to authenticate in the destination repository.

Now, some of my builds have started to fail due to timeouts, and I immediately suspected the reason: the name change from VSTS to Azure DevOps changed the base URL of the repositories, and this broke the build.

Original Link

Working With Git Feature Branches [Video]

A very common strategy for working with Git in teams is to use feature branches. In this episode, you will learn how to create them, switch between branches, keep them up to date with the master, and finally, merge them back into the master again — all within IntelliJ IDEA.

Original Link

The Official GitOps FAQ

We’ve recently updated "The Kubernetes Library" with a brand new page called "The GitOps FAQ." This is a living document that we intend to add to as we all learn more about this methodology for deploying and managing applications on Kubernetes.

There’s been a lot of discussion around GitOps in the community and out of this came many excellent questions — particularly around the differences and similarities between GitOps, Continuous Delivery, and Infrastructure as Code. This FAQ addresses some of those questions.

Original Link

Git Commands to Keep a Fork Up to Date

I’ve seen the following tweet about git making its way around Twitter recently:


Original Link

GitOps — Git Push All the Things

In today’s competitive environment you need to deliver features quickly without compromising on quality. But it can be difficult for most organizations to keep up and balance current release management practices with traditional operations procedures. And now with developers taking an end-to-end "you build, you own it" approach to development processes, they need tools and methods they know best in order to quickly adapt.

At the most recent Continuous Lifecycle conference in London, Alexis Richardson delivered the keynote address entitled, “GitOps: Git Push All the Things” where he discussed the industry challenges, including current CI/CD trends and how all of these tools and processes are converging with operations and monitoring. In addition to this, Alexis explained how by applying GitOps best practices, developers can take control of both the development and operations pipelines using the tools with which they are most familiar.

Original Link

What Is GitOps, Really?

A year ago, we published an introduction to GitOps – Operations by Pull Request. This post described how Weaveworks ran a complete Kubernetes-based SaaS and developed a set of prescriptive best practices for cloud-native deployment, management, and monitoring.

The post was popular. Other people talked about GitOps and published new tools for git push, development, secrets, functions, continuous integration, and more. Our website grew, with many more posts and GitOps use cases. But people still had questions. How is this different from traditional infrastructure as code and continuous delivery? Do I have to use Kubernetes? etc.

Original Link

See What’s New in GitKraken v4.0

This is your last chance. After this, there is no turning back. You take the blue pill: the story ends; you wake up at your computer with the same Git GUI you’ve always had. You take the red pill: you stay in Wonderland, and I show you what it’s like to develop like a Kraken…

Being the curious Kraken that he is, Keif swallowed the red pill and woke up in the construct with some new abilities. Keep reading to see what’s new in version 4.0 of GitKraken and get the latest version of our Git Client free.

Original Link

Managing Helm Releases the GitOps Way

What is GitOps?

GitOps is a way to do Continuous Delivery: it works by using Git as the source of truth for declarative infrastructure and workloads. For Kubernetes, this means using git push instead of kubectl create/apply or helm install/upgrade.

In a traditional CI/CD pipeline, CD is an implementation extension powered by the continuous integration tooling to promote build artifacts to production. In the GitOps pipeline model, any change to production must be committed to source control (preferably via a pull request) before being applied to the cluster. This way, rollback and audit logs are provided by Git. If the entire production state is under version control and described in a single Git repository, then when disaster strikes, the whole infrastructure can be quickly restored from that repository.
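The git-push-driven flow can be sketched end to end with a local stand-in for the config repository. Everything below is illustrative: the repository layout, the release name podinfo, and the values file are assumptions, and in a real setup a GitOps operator such as Weave Flux (not shown) would watch the repository and run the corresponding Helm upgrade for you.

```shell
# Simulate a GitOps config repo with a local bare "origin"
# (paths, names, and file contents are hypothetical).
workdir="$(mktemp -d)"
cd "$workdir"
git init -q --bare origin.git
git clone -q origin.git cluster-config
cd cluster-config
git symbolic-ref HEAD refs/heads/master
git config user.email "dev@example.com"
git config user.name "Dev"

# The desired state of a Helm release lives in version control...
mkdir -p releases/podinfo
cat > releases/podinfo/values.yaml <<'EOF'
replicaCount: 2
image:
  tag: 1.4.0
EOF
git add . && git commit -qm "Deploy podinfo 1.4.0 with 2 replicas"
git push -q origin master

# ...so a production change is a commit plus a push, not a helm upgrade:
sed -i 's/tag: 1.4.0/tag: 1.4.1/' releases/podinfo/values.yaml
git add . && git commit -qm "Bump podinfo to 1.4.1"
git push -q origin master
# An operator watching origin.git would now apply the change to the
# cluster; rolling back is just "git revert" plus another push.
```

Because every change is a commit, `git log` on the config repository doubles as the deployment audit trail.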

Original Link

Git Strategies for Software Development: Part 1

Git is a version control system for tracking changes in files and coordinating work on those files among multiple people. It is primarily used for source code management in software development. As a distributed revision control system, it is very useful for supporting software development workflows.

The Git directory on every machine is a full repository with full version-tracking capabilities, independent of network access. You can maintain branches, perform merges, and continue development even when you are not connected to the network. For me, having a full repository on my machine and the ease of use (creating a branch, merging branches, and maintaining branching workflows) are the biggest advantages of Git. It is free and open-source software distributed under the terms of the GNU General Public License.
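As a quick illustration of that offline capability, here is a minimal sketch (all names and file contents are made up) that creates a branch, commits to it, and merges it back, without any network access:

```shell
# Every clone is a full repository, so branching and merging
# need no server connection at all.
cd "$(mktemp -d)"
git init -q offline-demo && cd offline-demo
git symbolic-ref HEAD refs/heads/master
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > app.txt
git add app.txt && git commit -qm "Initial commit"

git checkout -qb feature/greeting        # create and switch to a branch
echo "hello" > greeting.txt
git add greeting.txt && git commit -qm "Add greeting"

git checkout -q master                   # back to master
git merge -q --no-edit feature/greeting  # merge, still fully offline
```

Only `git push`, `git pull`, and `git fetch` ever touch the network; everything above works on a plane.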

Original Link

Merge Conflict: Everything You Need to Know

Previously, I wrote a long-form post about merge conflict. This time, I’m going to mix things up with an FAQ on the subject. Many of these frequently asked questions could be their own standalone articles, but sometimes, it’s good to have all the answers in one place so we get an overview of the landscape.

Without further ado, let’s get going!

Original Link

Quick Tip: Using Git With NiFi Registry in Docker

Apache NiFi is a great tool for handling data flows; however, the flow development lifecycle has been slightly challenging.

The recent release of NiFi Registry, a sub-project to provide shared resources across instances of NiFi, initially provides the capability to manage versioned flows. As of version 0.2.0, NiFi Registry added support for persisting flow snapshots to Git, making it very compelling!

Original Link

The Latest in GitHub, GitLab, and Git

This is your one-stop shop for GitHub and GitLab news: read developer opinions of the recent GitHub acquisition by Microsoft, new releases from GitLab (hint: a web IDE), and some tips and tricks in Git for good measure. 

The GitHub/Microsoft News

  1. Microsoft and GitHub: A Great Step Forward for DevOps, by Sacha Labourey. Microsoft recently announced their acquisition of GitHub, to mixed reactions. Let’s talk about what this really means for developers and companies.

  2. Making GitHub Easier to Use, by Tom Smith. See what makes this tool successful as it enables project management for developers using GitHub.

  3. DevOps and Version Control: Why Microsoft Had to Get GitHub, by Yariv Tabac. It’s no secret that large corporations make frequent use of open-source software. Learn why OSS is necessary for DevOps and version control.

  4. Where Developers Stand on Microsoft Acquiring GitHub, by Alex McPeak. Microsoft has bought GitHub — check out this take on the acquisition from the developers at SmartBear.

GitLab Updates

  1. Meet the GitLab Web IDE, by Dimitrie Hoekstra. Learn about GitLab’s newly announced Web IDE, its current capabilities, and integration of more advanced features it’ll be rolling out in the future.

  2. GitLab: We’re Moving From Azure to Google Cloud Platform, by Andrew Newdigate. GitLab has decided to move from Azure to Google Cloud Platform to improve performance and reliability. Read on for the details of the migration.

Git Tips and Tutorials

  1. How (and Why!) to Keep Your Git Commit History Clean, by Kushal Pandya. Learn why commit messages are so important for organizing your Git repo and methods for keeping your logs in order.

  2. Quick Tip: Grep in Git, by Robert Maclean. Learn about two features in Git to simplify searching: git-grep and git-log grep.

  3. Git on the Go With These Mobile Apps for Git (and GitHub), by Jordi Cabot. On the go a lot but still need to keep an eye on your git repositories? Check out these apps for your smartphone.

You can get in on this action by contributing your own knowledge to DZone! Check out our new Bounty Board, where you can claim writing prompts to win prizes! 

Dive Deeper Into DevOps

  1. DZone’s Guide to DevOps: Culture and Process: a free ebook download.

  2. Introduction to DevSecOps: Download this free Refcard to discover an approach to IT security based on the principles of DevOps. 

Who’s Hiring?

Here you can find a few opportunities from our Jobs community. See if any match your skills and apply online today!

Senior DevOps Engineer
Clear Capital
Location: Roseville, CA, USA
Experience: AWS Foundational Services, configuration management tools, Continuous Integration/Continuous Delivery knowledge, Linux systems scripting, high-level programming language such as Python or Ruby.

Open Technology Architect
Location: Dallas, TX, USA
Experience: An experienced, “hands-on” software architect that has been in the industry or technology consulting for at least 6 years, with a passion for open-source technologies and the art of software engineering.

Original Link

How to Resolve GitHub Merge Conflicts

Ten years ago, I was just starting out in my career as a developer. Back then, I was using Subversion for my version control — then I came across Git. I remember how thrilled I was to find that Git worked far better than Subversion. Subversion requires a workaround just to have branches. In Git, branching is a first-class citizen: explicitly available without your having to use weird workarounds. Merging code is a lot smoother in Git as well.

In other words, Git was an awesome invention, one that spawned the business GitHub shortly thereafter. GitHub became everybody’s remote Git repository starting in early 2008.

Not too long after Git and GitHub emerged, this question appeared on Stack Overflow:

With more than 4,000 votes and 33 different answers, this is clearly a popular question for developers. In this article, I will explore how to handle merge conflicts in these common scenarios:

  1. Sending pull requests in GitHub
  2. Pulling remote changes to a local repository
  3. Performing a merge and rebase

In the end, I’ll wrap up by going through some simple ways to keep merge conflicts from happening in the first place.

1. Conflicts From Sending Pull Requests in GitHub

In this scenario, I deliberately created a merge conflict (it’s harder than you might think!) with two separate feature branches. Both feature branches (I’m calling them section1 and section2) branched off from the same master branch but got merged back to the master branch via pull requests at different times. As you might expect, the pull request that got merged first had no issue. But when I tried to merge the second pull request, I got a merge conflict.

You should expect to see something like the image below on the pull request page when you’re facing a merge conflict.

Notice how GitHub disabled the merge pull request button. There’s also an additional message about conflicts in the branch.

There are two possible situations at this point:

  1. The Resolve conflicts button is available.
  2. The Resolve conflicts button is NOT available. Usually, this happens because the conflicts are more complicated.

I will proceed assuming the first scenario. The second has the same solution as when you fetch remote changes locally and experience the merge conflict. In that scenario, the resolution has to be done locally first. (I’ll talk more about how to resolve merge conflicts locally later on.)

Resolve Within GitHub’s Web Editor

  1. Click on Resolve conflicts and you should see the entire display of the changed files in the pull request. Notice that GitHub has disabled the Mark as resolved button.
  2. Resolve the conflicts in the first file you see.
  3. Ensure that all traces of the conflict markers <<<<<<<, =======, and >>>>>>> are removed.
  4. If you do this correctly, you should see the button Mark as resolved become available for that particular file.
  5. If you have multiple files with conflicts, select the next file to resolve. Repeat steps two through four until you’ve resolved all of your pull requests’ merge conflicts.
  6. Now the Commit merge button is available.
  7. Click Commit merge and carry on with your merge pull request.

2. Conflicts From Pulling Remote Changes to a Local Repository

Now that you know how to resolve merge conflicts when sending pull requests to GitHub, it’s only right that you also learn how to resolve merge conflicts that arise when you fetch remote changes from GitHub. This section will also cover how to deal with the more complicated merge conflicts that GitHub does not let you resolve, as we touched on in the first section.

Let’s get started:

  1. Fetch all the remote changes from GitHub and switch to <branch-to-merge-into>. Let’s assume the same procedure as in the previous section and try to merge feature/add-section2 back into master. So <branch-to-merge-into> is master.
    git fetch origin
    git checkout <branch-to-merge-into>
    git pull
  2. Trigger the merge conflict by git merge feature/add-section2.
  3. Now you have basically two choices to resolve your conflict:
    1. You can open up your favorite IDE or code editor and go through the conflicts one at a time. Some editors might even help you by flagging the actual files.
    2. You can use native mergetools available in your system (I will cover this in the next section).
  4. Essentially, you are doing the same thing here as in the GitHub web editor example: removing the <<<<<<<, =======, and >>>>>>> markers and then adjusting the code in all affected files.
  5. Typing git status will show the affected files listed under unmerged paths.
  6. Typing git commit -a will open a pre-filled commit message about the merge conflicts.
  7. You may add more detail to the message or simply keep it as it is. Save the commit message and you are done.
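Here is the whole sequence as a runnable sketch. It first manufactures a conflict between master and a hypothetical feature/add-section2 branch (the repository and file names are invented), then resolves it following steps 3 through 7:

```shell
# Reproduce a merge conflict locally, then resolve it by hand.
cd "$(mktemp -d)"
git init -q demo && cd demo
git symbolic-ref HEAD refs/heads/master
git config user.email "dev@example.com" && git config user.name "Dev"

echo "Section one" > page.txt
git add page.txt && git commit -qm "Add page"

git checkout -qb feature/add-section2
echo "Section two (feature)" > page.txt
git commit -qam "Edit page in feature branch"

git checkout -q master
echo "Section two (master)" > page.txt
git commit -qam "Edit page on master"

# Step 2: trigger the conflict (this command exits non-zero).
git merge feature/add-section2 || true
git status --short            # shows "UU page.txt" (unmerged)

# Steps 3-4: edit the file, removing the <<<<<<< ======= >>>>>>> markers.
echo "Section two (merged)" > page.txt

# Steps 5-7: stage the resolution and commit the merge.
git add page.txt
git commit -qm "Merge feature/add-section2, resolving page.txt conflict"
```

The final commit is a regular merge commit with two parents; `git log --graph` will show both branches joining.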

Setting Up Mergetools

If you are using a Mac, you have a range of mergetools available to you. These include meld, opendiff, vimdiff, and kdiff3.

To activate these tools, simply type git mergetool the moment after you’ve triggered a local merge conflict.

You can also configure your mergetool of choice. Typing git config merge.tool vimdiff will configure vimdiff as the default mergetool. You can install other mergetools if you like.

Finally, to trigger the mergetool, simply type git mergetool again.
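A minimal sketch of that configuration, applied to a throwaway repository rather than user-wide (the article's `git config merge.tool vimdiff` form writes to the current repository's config; add `--global` to make it apply everywhere):

```shell
# Configure the mergetool for a single repository (names are illustrative).
cd "$(mktemp -d)"
git init -q mergetool-demo && cd mergetool-demo

git config merge.tool vimdiff          # use vimdiff when running `git mergetool`
git config mergetool.prompt false      # don't ask before opening each conflicted file
git config mergetool.keepBackup false  # don't leave *.orig files behind after resolving

# After a conflicted merge, resolving is then just:
#   git mergetool
```

The `mergetool.prompt` and `mergetool.keepBackup` settings are optional quality-of-life additions beyond what the article describes.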

Here’s a look at the mergetool I use. I prefer FileMerge, which is what opendiff launches under the hood.

3. Conflicts From Performing a Merge and Rebase

Sometimes, you’ll want to merge and rebase at the same time and you’ll fail due to a merge conflict. You might still be able to perform the regular merge on its own, or you might not. But let’s say you insist on doing it with the rebase. What do you do?

Once again, we’re trying to merge and rebase the feature/add-section2 branch into the master branch. You can only do this locally.

Let’s dive in:

  1. Fetch all the remote changes from GitHub for your <branch-to-merge-into> and <feature-branch>. In this case, remember that your feature branch is feature/add-section2.
    git fetch origin
    git checkout master
    git pull
    git checkout feature/add-section2
    git pull
  2. Perform the rebase inside your feature branch with git pull origin master --rebase.
  3. Resolve the merge conflict as per normal.
  4. Force push your newly rebased feature branch back to remote git push -u origin feature/add-section2 -f. (Warning! Be absolutely certain nobody else has made any new changes to the remote version of your feature branch. The forced push will override those new changes.)
  5. Now you can go to GitHub to perform the merge and rebase.
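The five steps can be sketched against a local bare repository standing in for GitHub (all repository, branch, and file names are invented). One deviation: the sketch uses --force-with-lease instead of -f, which refuses the push if someone else has updated the remote branch in the meantime.

```shell
# Simulate rebasing a feature branch onto an updated master.
cd "$(mktemp -d)"
git init -q --bare origin.git
git clone -q origin.git work && cd work
git symbolic-ref HEAD refs/heads/master
git config user.email "dev@example.com" && git config user.name "Dev"

echo "base" > page.txt
git add page.txt && git commit -qm "Base"
git push -q origin master

git checkout -qb feature/add-section2
echo "feature edit" > page.txt
git commit -qam "Feature edit"
git push -q -u origin feature/add-section2

# Meanwhile, master moves on with a conflicting change.
git checkout -q master
echo "master edit" > page.txt
git commit -qam "Master edit"
git push -q origin master

# Step 2: rebase the feature branch onto the updated master.
git checkout -q feature/add-section2
git pull origin master --rebase || true   # stops on the conflict

# Step 3: resolve as usual, then continue the rebase.
echo "combined edit" > page.txt
git add page.txt
GIT_EDITOR=true git rebase --continue     # keep the original commit message

# Step 4: the branch history was rewritten, so force push it.
git push -u origin feature/add-section2 --force-with-lease
```

After the force push, the remote feature branch replays cleanly on top of master, and the pull request can be merged without conflicts.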

How to Reduce Merge Conflicts

So far, I have covered various ways to resolve merge conflicts under the three most common scenarios:

  1. Sending pull requests to GitHub
  2. Fetching remote changes from GitHub
  3. Attempting to merge and rebase

I have also added some helpful information about how to set up your mergetool, should you desire to do so. Before I conclude, I want to save you some future headaches by showing you how to reduce the number of merge conflicts you generate in the first place. As the saying goes, an ounce of prevention is worth a pound of cure. Here are three useful steps to help reduce the merge conflict headaches you may have. Bear in mind, though: merge conflicts are inevitable. You can decrease them, but you can never fully eliminate them.

Fetch Remote Changes Frequently to Avoid Big Conflicts

Fetch remote changes frequently from the main branches and then handle the changes upstream. While you may need to resolve merge conflicts more frequently, this means that you’ll be resolving smaller conflicts each time. Resolving merge conflicts can get pretty hairy, especially for big projects with dozens of collaborators where the codebase runs into millions of lines.

Have Fewer Developers Working Off the Same Branch

Merge conflicts increase tremendously when you have many people working on separate features and trying to merge back to the same branch. This is where your project manager can help. He or she will likely plan release branches from the master branch and then break the release branches into smaller feature branches, which in turn can be further subdivided. Good old divide and conquer, I say.

Implement a Feature Flag Management Solution With Trunk-Based Development

Instead of handling multiple feature branches, a more straightforward method may be to implement feature flags. Employ a feature flag management solution that allows for trunk-based development. This is especially crucial when it comes to early-stage development, when features are often dropped or changed drastically based on feedback. It’s worth it for your velocity and developer sanity to have a cleaner way to reduce merge conflicts. Sometimes, resolving merge conflicts can feel like writing the same code twice. Avoid unnecessary complications; create an environment where your developers (and you!) can be more productive.

That’s it! You are now fully equipped with all the knowledge you’ll need to handle those pesky merge conflicts. If you think your colleagues will find this guide useful, do share it with them. They will thank you for it.

Original Link

Git on the Go With These Mobile Apps for Git (and GitHub)


I wanted to take a look at the mobile apps available to manage my Git repositories and GitHub projects. I was expecting to find just a couple of options but, to my surprise, there are dozens of apps that claim to be your ideal solution when you need to access git or GitHub on the go.

Just do a search with “git” as the keyword in the Google Play Store and you’ll immediately get over 30 results. These include several apps that are just for learning git while commuting (check out this open source quick git reference guide), as well as others that have already been abandoned or are still at an alpha stage.

After my (subjective) cleanup, my shortlisted selection of recommended apps for git and GitHub is the following.

Mobile Git Apps

Pocket Git is a powerful standalone Git client for Android with all the obvious features (cloning repositories, checking out branches, viewing diffs, creating files, commits, tags, …) and support for HTTP and SSH protocols, passwords, and private keys (with passphrase). Sure, it’s not free, but it costs less than 3 USD, and as a developer yourself I’m sure you understand that your fellow developers need to eat, right? See Pocket Git in action in the featured image for this post.

On the Apple side of things, WorkingCopy is by far the most popular and appreciated Git client for iPad and iPhone. As they convincingly explain in the app description, sometimes you just want to update a TODO file or make adjustments to your Jekyll site. Sometimes you just need to add a file the designer sent after hours. WorkingCopy is ideal for that, so that you don’t forget to make those changes (or have to wait until a large screen is available). Another interesting feature is the graph of your commits. This graph lets you zoom out for an overview of the commit tree or zoom in for specifics about each commit, with a speed and beauty you won’t find in desktop Git applications. The app is free for browsing, but you’ll need the enterprise edition to be able to push commits.

WorkingCopy git app for iPhone

Different views of the WorkingCopy app.

And, more on the extreme side of things, instead of a Git client you could even try to run a Git server on your mobile, with user and group password and SSH authentication for repositories.

Mobile GitHub Apps

For Android devices, my top recommendation is ForkHub. ForkHub is an open source GitHub client for Android based on the abandoned official app. It uses the GitHub Java API, built on top of API v3, to provide all the typical web-based GitHub functionality in the app. You can even share code snippets as GitHub Gists. With over 50K downloads, it is by far the most popular GitHub app for Android. And if you don’t like ForkHub, you could also try OpenHub, GitPoint (see below), or OctoDroid, though this last one focuses more on visualizing repos/projects and only offers limited editing functionality.

As a complement to any of them, you could also install Git Social, which helps you keep up with the recent activity of the GitHub users you follow.

ForkHub GitHub App for Android

A screenshot of the ForkHub App on a tablet device.

For Apple devices, my first option would be GitPoint (also available for Android). Built with React Native, GitPoint is a free, open source “GitHub in your pocket” option (as they define themselves) with a great UI. You can view repository and user information, control your notifications, and manage your issues and pull requests. GitHawk is another great option, especially to help you clean up your GitHub notifications. It offers rich commenting support (including emoji reactions) to respond to them as fast as you can. And if all you need is an app to manage your GitHub issues, GitShot will do the trick.

GitPoint - GitHub app for the iPhone


Any Apps for Bitbucket or GitLab?

There is an app for every need. GitLabControl helps you manage your GitLab projects on any iPad/iPhone device. If, instead, you’re using Bitbucket, there are several options available. The most popular one, Bitbeaker, is an open source Bitbucket client for Android. Bitbeaker is not a full Git or Mercurial client; it uses Bitbucket’s REST API instead. Bitbasket is a potential alternative for Android, and CodeBucket is one for iPhone. And for people with projects in both GitHub and Bitbucket, OmniCode offers a unified interface for both platforms.

Bitbeaker - a bitbucket client app


Original Link

How (and Why!) to Keep Your Git Commit History Clean

Commits are one of the key parts of a Git repository, and more so, the commit message is a life log for the repository. As the project/repository evolves over time (new features getting added, bugs being fixed, architecture being refactored), commit messages are the place where one can see what was changed and how. So it’s important that these messages reflect the underlying change in a short, precise manner.

Why Meaningful Commit History Is Important

Git commit messages are the fingerprints that you leave on the code you touch. When you look at a change you committed today a year from now, you will be thankful for the clear, meaningful commit message you wrote, and it will also make the lives of your fellow developers easier. When commits are isolated by context, a bug introduced by a certain commit is quicker to find, and it is easier to revert the commit that caused the bug in the first place.

While working on a large project, we often deal with a lot of moving parts that are updated, added, or removed. Keeping commit messages well maintained in such cases can be tricky, especially when development spans days, weeks, or even months. So, to simplify the effort of maintaining a concise commit history, this article will walk through some common situations that a developer might face while working on a Git repository.

But before we dive in, let’s quickly go through what a typical development workflow looks like in our hypothetical Ruby application.

Note: This article assumes that you are aware of the basics of Git: how branches work, how to add uncommitted changes to the stage, and how to commit changes. If you’re unsure of these flows, our documentation is a great starting point.

A Day in the Life

Here, we are working on a small Ruby on Rails project where we need to add a navigation view on the homepage and that involves updating and adding several files. Following is a step by step breakdown of the entire flow:

  • You start working on a feature by updating a file; let’s call it application_controller.rb
  • This feature requires you to also update a view: index.html.haml
  • You added a partial, which is used in the index page: _navigation.html.haml
  • Styles for the page also need to be updated to reflect the partial we added: styles.css.scss
  • The feature is now ready with the desired changes; time to also update the tests. The files to be updated are as follows:
    • application_controller_spec.rb
    • navigation_spec.rb
  • The tests are updated and passing as expected; now it’s time to commit the changes!

Since all the files belong to different territories of the architecture, we commit the changes in isolation from each other to ensure that each commit represents a certain context and is made in a certain order. I usually prefer a backend -> frontend order, where the most backend-centric change is committed first, followed by the middle layer, and then by frontend-centric changes in the commit list.

  1. application_controller.rb & application_controller_spec.rb; Add routes for navigation.
  2. _navigation.html.haml & navigation_spec.rb; Page Navigation View.
  3. index.html.haml; Render navigation partial.
  4. styles.css.scss; Add styles for navigation.
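The four isolated commits above come down to staging only the files that belong to each context before committing. A runnable sketch (the repository and the file contents are placeholders):

```shell
# Commit related files together, in backend -> frontend order.
cd "$(mktemp -d)"
git init -q app && cd app
git symbolic-ref HEAD refs/heads/add-page-navigation
git config user.email "dev@example.com" && git config user.name "Dev"

mkdir -p app/controllers app/views spec
touch app/controllers/application_controller.rb \
      spec/application_controller_spec.rb \
      app/views/_navigation.html.haml spec/navigation_spec.rb \
      app/views/index.html.haml styles.css.scss

# 1. The backend change plus its spec:
git add app/controllers/application_controller.rb spec/application_controller_spec.rb
git commit -qm "Add routes for navigation"
# 2. The partial plus its spec:
git add app/views/_navigation.html.haml spec/navigation_spec.rb
git commit -qm "Page Navigation View"
# 3. The view that renders the partial:
git add app/views/index.html.haml
git commit -qm "Render navigation partial"
# 4. Finally, the styles:
git add styles.css.scss
git commit -qm "Add styles for navigation"
```

Each commit now represents one context, so `git log --oneline` reads as a clean, ordered story of the feature.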

Now that we have our changes committed, we create a merge request with the branch. Once you have the merge request open, it typically gets reviewed by a peer before the changes are merged into the repo’s master branch. Now let’s look at the different situations we may end up in during code review.

Situation 1: I Need to Change the Most Recent Commit

Imagine a case where the reviewer looked at styles.css.scss and suggested a change. In such a case, it is very simple to make the change, as the stylesheet changes are part of the last commit on your branch. Here’s how we can handle this:

  • You directly do the necessary changes to styles.css.scss in your branch.
  • Once you’re done with the changes, add them to the stage; run git add styles.css.scss.
  • Once the changes are staged, we need to fold them into our last commit; run git commit --amend. (Command breakdown: Here, we’re asking the git commit command to amend whatever changes are present in the stage onto the most recent commit.)
  • This will open your last commit in your Git-defined text editor which has the commit message Add styles for navigation.
  • Since we only updated the CSS declaration, we don’t need to alter the commit message. At this point, you can just save and exit the text editor that Git opened for you and your changes will be reflected in the commit.

Since you modified an existing commit, these changes need to be force pushed to your remote repo using git push --force-with-lease <remote_name> <branch_name>. This command overrides the commit Add styles for navigation in the remote repo with the updated commit we just made in our local repo.

One thing to keep in mind while force pushing branches: if you are working on the same branch with multiple people, force pushing may cause trouble for other users when they try to push their changes normally to a remote branch that has had new commits force pushed to it. Hence, use this feature wisely. You can learn more about Git force push options here.
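Here is the amend-and-force-push cycle as a runnable sketch against a local bare repository standing in for the remote (the repository, branch, and file contents are invented). It uses --amend --no-edit to keep the message instead of opening the editor, and --force-with-lease as described above:

```shell
# Amend the most recent commit, then update the remote copy.
cd "$(mktemp -d)"
git init -q --bare origin.git
git clone -q origin.git work && cd work
git symbolic-ref HEAD refs/heads/add-page-navigation
git config user.email "dev@example.com" && git config user.name "Dev"

echo ".nav { color: blue; }" > styles.css.scss
git add styles.css.scss
git commit -qm "Add styles for navigation"
git push -q -u origin add-page-navigation

# The reviewer asks for a tweak; fold it into the same commit.
echo ".nav { color: navy; }" > styles.css.scss
git add styles.css.scss
git commit -q --amend --no-edit   # same message, new commit hash

# The hash changed, so a plain push would be rejected:
git push --force-with-lease origin add-page-navigation
```

The branch still shows a single commit afterwards, which is the whole point: the review fix leaves no "fix review comments" noise in the history.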

Situation 2: I Need to Change a Specific Commit

In the previous situation, the fix was rather simple, as we only had to modify our last commit, but imagine if the reviewer had suggested changing something in _navigation.html.haml. In this case, it is the second commit from the top, so changing it won’t be as direct as it was in the first situation. Let’s see how we can handle this:

Whenever a commit is made in a branch, it is identified by a unique SHA-1 hash string. Think of it as a unique ID that separates one commit from another. You can view all the commits in a branch, along with their SHA-1 hashes, by running git log. The output will look somewhat as follows, with the most recent commits at the top:

commit aa0a35a867ed2094da60042062e8f3d6000e3952 (HEAD -> add-page-navigation)
Author: Kushal Pandya <>
Date:   Wed May 2 15:24:02 2018 +0530

    Add styles for navigation

commit c22a3fa0c5cdc175f2b8232b9704079d27c619d0
Author: Kushal Pandya <>
Date:   Wed May 2 08:42:52 2018 +0000

    Render navigation partial

commit 4155df1cdc7be01c98b0773497ff65c22ba1549f
Author: Kushal Pandya <>
Date:   Wed May 2 08:42:51 2018 +0000

    Page Navigation View

commit 8d74af102941aa0b51e1a35b8ad731284e4b5a20
Author: Kushal Pandya <>
Date:   Wed May 2 08:12:20 2018 +0000

    Add routes for navigation

This is where the git rebase command comes into play. Whenever we wish to edit a specific commit with git rebase, we first need to rebase our branch by moving HEAD back to the point right before the commit we wish to edit. In our case, we need to change the commit that reads Page Navigation View.

Here, notice the hash of the commit right before the one we want to modify; copy that hash and perform the following steps:

  • Rebase the branch to move to the commit before our target commit; run git rebase -i 8d74af102941aa0b51e1a35b8ad731284e4b5a20
    • Command breakdown: Here we run Git’s rebase command in interactive mode, with the provided SHA-1 hash as the commit to rebase onto.
  • This runs the rebase in interactive mode and opens your text editor, showing all of the commits that came after the commit you rebased onto. It will look somewhat like this:
pick 4155df1cdc7 Page Navigation View
pick c22a3fa0c5c Render navigation partial
pick aa0a35a867e Add styles for navigation

# Rebase 8d74af10294..aa0a35a867e onto 8d74af10294 (3 commands)
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
# These lines can be re-ordered; they are executed from top to bottom.
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
# Note that empty commits are commented out

Notice how each commit has the word pick in front of it, and the comments below list all the possible keywords we can use. Since we want to edit a commit, change pick 4155df1cdc7 Page Navigation View to edit 4155df1cdc7 Page Navigation View. Save the changes and exit the editor.

Now your branch is rebased to the point in time right before the commit that introduced _navigation.html.haml. Open the file and make the desired changes per the review feedback. Once you’re done, stage them by running git add _navigation.html.haml.

Since we have staged the changes, it is time to move the branch HEAD back to the commit we originally had (while also including the new changes we added). Run git rebase --continue. This will open your default editor in the terminal and show the commit message of the commit we edited during the rebase: Page Navigation View. You can change this message if you wish, but we will leave it as it is for now, so save and exit the editor. At this point, Git replays all the commits that followed the commit you just edited; the branch HEAD is back at the top commit we originally had, and it includes the new changes you made to one of the commits.

Since we again modified a commit that’s already present in the remote repo, we need to force push this branch again using git push --force-with-lease <remote_name> <branch_name>.
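Situation 2 can be scripted non-interactively for experimentation: GIT_SEQUENCE_EDITOR replaces the interactive todo editor so the pick-to-edit change can be made by sed instead of by hand. This sketch assumes GNU sed, and the file contents are hypothetical stand-ins for the example above:

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email "you@example.com" && git config user.name "A U Thor"

# Recreate the four commits from the example.
echo "routes"   > routes.rb;              git add . && git commit -qm "Add routes for navigation"
echo "nav view" > _navigation.html.haml;  git add . && git commit -qm "Page Navigation View"
echo "partial"  > partial.haml;           git add . && git commit -qm "Render navigation partial"
echo "styles"   > styles.css;             git add . && git commit -qm "Add styles for navigation"

# Rebase onto the commit before "Page Navigation View" and mark the first
# todo line as "edit" instead of "pick" (sed stands in for the text editor).
GIT_SEQUENCE_EDITOR='sed -i "1s/^pick/edit/"' git rebase -i HEAD~3

# Rebase has stopped at "Page Navigation View"; apply the review feedback.
echo "nav view (per review)" > _navigation.html.haml
git add _navigation.html.haml
git commit -q --amend --no-edit

GIT_EDITOR=true git rebase --continue   # replays the remaining two commits
```

Afterwards the branch still has all four commits, with the second one carrying the review fix.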

Situation 3: I Need to Add, Remove, or Combine Commits

A common situation is when you’ve made several commits just to fix something previously committed. Let’s reduce them as much as we can by combining them with the original commits.

All you need to do is start the interactive rebase as you would in the other scenarios.

pick 4155df1cdc7 Page Navigation View
pick c22a3fa0c5c Render navigation partial
pick aa0a35a867e Add styles for navigation
pick 62e858a322 Fix a typo
pick 5c25eb48c8 Ops another fix
pick 7f0718efe9 Fix 2
pick f0ffc19ef7 Argh Another fix!

Now imagine you want to combine all those fixes into c22a3fa0c5c Render navigation partial. You just need to:

  1. Move the fixes up so that they are right below the commit you want to keep in the end.
  2. Change pick to squash or fixup for each of the fixes.

Note: squash keeps the squashed commits’ messages in the resulting commit’s description, while fixup discards the fix’s message and keeps only the original.

You’ll end up with something like this:

pick 4155df1cdc7 Page Navigation View
pick c22a3fa0c5c Render navigation partial
fixup 62e858a322 Fix a typo
fixup 5c25eb48c8 Ops another fix
fixup 7f0718efe9 Fix 2
fixup f0ffc19ef7 Argh Another fix!
pick aa0a35a867e Add styles for navigation

Save the changes, exit the editor, and you’re done! This is the resulting history:

4155df1cdc7 Page Navigation View
96373c0bcf Render navigation partial
aa0a35a867e Add styles for navigation

As before, all you need to do now is git push --force-with-lease <remote_name> <branch_name> and the changes are up.

If you want to remove a commit altogether, instead of squash or fixup, just write drop or simply delete that line.

Avoid Conflicts

To avoid conflicts, make sure the commits you’re moving up the timeline aren’t touching the same files touched by the commits left after them.

pick 4155df1cdc7 Page Navigation View
pick c22a3fa0c5c Render navigation partial
fixup 62e858a322 Fix a typo # this changes styles.css
fixup 5c25eb48c8 Ops another fix # this changes image/logo.svg
fixup 7f0718efe9 Fix 2 # this changes styles.css
fixup f0ffc19ef7 Argh Another fix! # this changes styles.css
pick aa0a35a867e Add styles for navigation # this changes index.html (no conflict)

Pro-Tip: Quick fixups

If you know exactly which commit you want to fix up, you don’t have to waste brain cycles when committing on thinking up good temporary names for “Fix 1”, “Fix 2”, …, “Fix 42”.

Step 1: Meet --fixup

After you’ve staged the changes fixing whatever it is that needs fixing, just commit the changes like this:

git commit --fixup c22a3fa0c5c

(Note that this is the hash for the commit c22a3fa0c5c Render navigation partial)

This will generate this commit message: fixup! Render navigation partial.

Step 2: And the sidekick --autosquash

Easy interactive rebase: you can have Git place the fixup commits in the right spot automatically.

git rebase -i --autosquash 8d74af10294

History will be shown like so:

pick 4155df1cdc7 Page Navigation View
pick c22a3fa0c5c Render navigation partial
fixup 62e858a322 Fix a typo
fixup 5c25eb48c8 Ops another fix
fixup 7f0718efe9 Fix 2
fixup f0ffc19ef7 Argh Another fix!
pick aa0a35a867e Add styles for navigation

Ready for you to just review and proceed.

If you’re feeling adventurous, you can do a non-interactive rebase with git rebase --autosquash, but only if you like living dangerously, as you’ll have no opportunity to review the squashes before they’re applied.
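The two steps above can be tried end to end in a scratch repo. Here `:` (the shell no-op) stands in for the editor so the autosquash-arranged todo list is accepted as-is; file names and contents are invented for the demo:

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email "you@example.com" && git config user.name "A U Thor"

echo "partial" > partial.haml
git add . && git commit -qm "Render navigation partial"

# A follow-up fix, committed with --fixup instead of a throwaway name.
echo "fix" >> partial.haml
git add . && git commit -q --fixup=HEAD
git log --format=%s        # "fixup! Render navigation partial" is on top

# --autosquash pre-places the fixup; ':' accepts the todo without editing.
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --root

git log --format=%s        # back to a single, clean commit
```

After the rebase there is a single commit containing both changes, with the original message intact.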

Situation 4: My Commit History Doesn’t Make Sense, I Need a Fresh Start!

If we’re working on a large feature, it is common to have several fixup and review-feedback changes that are being committed frequently. Instead of constantly rebasing the branch, we can leave the cleaning up of commits until the end of development.

This is where creating patch files is extremely handy. In fact, patch files were the primary way of sharing code over email while collaborating on large open source projects before Git-based services like GitLab were available to developers. Imagine you have one such branch (e.g., add-page-navigation) with tons of commits that don’t convey the underlying changes clearly. Here’s how you can create a patch file for all the changes you made in this branch:

  • The first step is to make sure that your branch has all the changes present from the master branch and has no conflicts with it.
  • You can run git rebase master or git merge master while add-page-navigation is checked out to bring all the changes from master onto your branch.
  • Now create the patch file; run git diff master add-page-navigation > ~/add_page_navigation.patch.
  • You can specify any path you wish for this file, and the file name and extension can be anything you want.
  • Once the command is run and you don’t see any errors, the patch file is generated.
  • Now checkout master branch; run git checkout master.
  • Delete the add-page-navigation branch from the local repo; run git branch -D add-page-navigation. Remember, we already have this branch’s changes in the patch file we created.
  • Now create a new branch with the same name (while master is checked out); run git checkout -b add-page-navigation.
  • At this point, this is a fresh branch and doesn’t have any of your changes.
  • Finally, apply your changes from the patch file; git apply ~/add_page_navigation.patch.
  • Here, all of your changes are applied in the branch and appear as uncommitted, as if all your modifications were made but none of them were actually committed in the branch.
  • Now you can go ahead and commit individual files or files grouped by area of impact in the order you want with concise commit messages.

As with previous situations, we basically modified the whole branch, so it is time to force push!
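The bullet list above condenses into a short script. Branch and file names mirror the example, and the default-branch name is read from the repo since it varies between Git versions:

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q repo && cd repo
git config user.email "you@example.com" && git config user.name "A U Thor"
base=$(git symbolic-ref --short HEAD)    # "master" or "main"

echo "home" > index.html
git add . && git commit -qm "Initial commit"

# A messy feature branch with commits we'd rather not keep.
git checkout -qb add-page-navigation
echo "nav"  > _navigation.html.haml; git add . && git commit -qm "wip"
echo "nav2" > _navigation.html.haml; git add . && git commit -qm "more wip"

# Capture everything as a patch, then rebuild the branch from scratch.
git diff "$base" add-page-navigation > ../add_page_navigation.patch
git checkout -q "$base"
git branch -D add-page-navigation
git checkout -qb add-page-navigation
git apply ../add_page_navigation.patch   # changes reappear, uncommitted

git status --short   # ready to be re-committed in sensible chunks
```

At the end, the branch has no commits of its own, and the combined changes sit in the working tree waiting to be grouped into clean commits.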


While we have covered the most common and basic situations that arise in a day-to-day Git workflow, rewriting Git history is a vast topic, and as you get familiar with the tips above, you can learn more advanced concepts in the official Git documentation. Happy git’ing!

Original Link

Quick Tip: Grep in Git

This quick tip is about two small features of Git I wish I had known about earlier, as they make searching through a repository much easier.


git-grep is a way to search through your tracked files for whatever you provide. For example, to find all files containing the word index: git grep index


We can limit the search to specific files. For example, to filter the above example to just JSON files: git grep index -- '*.json'

We can search for multiple terms that must all match in a single file. For example, to find all files containing both index and model: git grep --all-match -e index -e model
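A scratch repo makes the three variants easy to compare side by side (the file names and contents here are made up for the demo):

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email "you@example.com" && git config user.name "A U Thor"

printf '{"index": 1}\n'           > data.json
printf 'index page\n'             > index.txt
printf 'model code\nindex here\n' > app.txt
git add .   # git grep only searches tracked files

git grep -l index                          # every tracked file containing "index"
git grep -l index -- '*.json'              # restricted to JSON files
git grep -l --all-match -e index -e model  # files containing BOTH words
```

Only app.txt contains both index and model, so it alone survives the --all-match query.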

git-log grep

git-log has a --grep option too, which is awesome for finding commit messages containing a specific word or words. For example, if I want to find all commits about Speakers for DevConf, I could do: git log --all --grep "Speaker"
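For instance, in a scratch repo with invented commit messages, only the matching commit comes back:

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email "you@example.com" && git config user.name "A U Thor"

echo "1" > notes.txt; git add . && git commit -qm "Add Speaker schedule for DevConf"
echo "2" > other.txt; git add . && git commit -qm "Fix typo in README"

git log --all --grep "Speaker" --oneline   # only the first commit matches
```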


Original Link

Quick Tip: The “Say” Command

macOS has a great tool called say, which simply says whatever you pass it. For example, run say "Hello" and you’ll hear your device say “Hello.”

Where this is really useful is when you want to do a long-running action and get notified when it is done. For example:

git clone && say "clone complete"

So, what about Windows? You can do something similar with PowerShell. First, the setup:

Add-Type -AssemblyName System.Speech
$say = New-Object System.Speech.Synthesis.SpeechSynthesizer

Once you have that in place you can use it like this:

git clone; $say.Speak("clone complete")

Original Link

Spotlight on Git: Tips and Tricks

Git your head in the game! These tutorials show you how to set up CI/CD, use developer tricks, and more in Git, GitHub, and GitLab. There’s something for every kind of enterprise developer or hobbyist, so jump right in!

5 Trending DevOps Articles on DZone

  1. Saving Half-Done Work With Git Stash, by Kristina Pomorisac. Git stash allows you to save work in progress, which is handy for quickly switching contexts, without committing half-done work.

  2. Git Merge vs. Rebase, by Kristina Pomorisac. Git merge and rebase serve the same purpose – they combine multiple branches into one. Which should you use?

  3. Building a Continuous Delivery Pipeline With Git and Jenkins, by Lyndsey Padget. Learn how to use the power and simplicity of Git to build an automated continuous delivery pipeline with Jenkins.

  4. Building and Deploying Docker Containers Using GitLab CI Pipelines, by Kevin Hooke. Learn how to set up a GitLab CI pipeline to automate the building and deployment of Docker containers, saving you time and effort.

  5. Using GitHub as a Maven Repository, by Anupam Gogoi. This isn’t the “right” way to do things, but it’s fun and educational to learn how you can use GitHub as a Maven repository!

You can get in on this action by contributing your own knowledge to DZone! Check out our new Bounty Board, where you can claim writing prompts to win prizes! 

Dive Deeper Into DevOps

  1. DZone’s Guide to DevOps: Culture and Process: a free ebook download.

  2. Introduction to DevOps Analytics: Download this free Refcard to discover the value of using historical data to analyze and estimate the probability of success of a new release.

Who’s Hiring?

Here you can find a few opportunities from our Jobs community. See if any match your skills and apply online today!

DevOps Engineer — Terraform | AWS | Azure Project
Location: London, UK
Experience: Cross-functional DevOps Engineer, a Developer, Test Automation Engineer or Cloud Operations Engineer with experience in Terraform, AWS, Azure, and/or related technologies.

Sr. DevOps Engineer
Cogility Software
Location: Irvine, CA, USA
Experience: Bachelor’s degree or equivalent work experience; a minimum of 4+ years of work experience in this role.

Original Link

Git Merge vs. Rebase

Git merge and rebase serve the same purpose: they combine multiple branches into one. Although the final goal is the same, the two methods achieve it in different ways. Which method should you use?

What Does Merge or Rebase Mean?

Here we have a sample repository with two diverging branches: master and feature. We want to blend them together. Let’s take a look at how these methods solve the problem.


When you run git merge, a new merge commit is created on your HEAD branch, preserving the ancestry of each commit history.

A fast-forward merge is a type of merge that doesn’t create a commit; instead, it updates the branch pointer to the last commit.


A rebase replays the changes of one branch on top of another without creating a merge commit.

For every commit on the feature branch that is not in master, a new commit will be created on top of master. It will appear as if those commits were written on top of the master branch all along.
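The difference is easy to see in a scratch repo: merging two diverged branches produces a commit with two parents, while rebasing replays the feature commit and keeps the history linear. Branch and file names below are invented for the demo:

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email "you@example.com" && git config user.name "A U Thor"
base=$(git symbolic-ref --short HEAD)   # "master" or "main"

echo "a" > base.txt;    git add . && git commit -qm "Base commit"
git checkout -qb feature
echo "b" > feature.txt; git add . && git commit -qm "Feature work"
git checkout -q "$base"
echo "c" > hotfix.txt;  git add . && git commit -qm "Hotfix"

# Merge: a new merge commit ties the two diverged histories together.
git checkout -qb merged "$base"
git merge -q --no-edit feature
git log --oneline --graph merged

# Rebase: "Feature work" is replayed on top of the hotfix; history stays linear.
git checkout -q feature
git rebase -q "$base"
git log --oneline feature
```

The merged branch's tip has two parents; the rebased feature branch has three commits and no merge commit at all.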

Merging Pros and Cons


  • Simple to use and understand.
  • Maintains the original context of the source branch.
  • The commits on the source branch are separated from other branch commits. This can be useful if you want to take the feature and merge it into another branch later.
  • Preserves your commit history and keeps the history graph semantically correct.


Rebasing Pros and Cons


  • Code history is simplified, linear and readable.
  • Manipulating a single commit history is easier than a history of many separate feature branches with their additional commits.
  • Clean, clear commit messages make it easier to track when a bug or a feature was introduced. Avoid polluting the history with 20+ single-line commits!


  • Squashing the feature down to a handful of commits can hide context.
  • Rebasing doesn’t play well with pull requests, because you can’t see what minor changes someone made. Rewriting history is bad for teamwork!

You need to be more careful with rebasing than when merging.

Which Method to Choose?

When your team uses a feature-based workflow or is not familiar with rebase, git merge is the right choice:

  • It allows you to preserve the commit history for any given feature while not worrying about overriding commits and changing history. It helps you avoid unnecessary git reverts or resets!
  • Different features remain isolated and don’t interfere with existing commit histories.
  • Can help you re-integrate a completed feature branch.

On the other hand, if you value a clean, linear history more, then git rebase may be the most appropriate. You will avoid unnecessary commits and keep changes more centralized and linear!

If you rebase incorrectly and unintentionally rewrite the history, it can lead to serious issues, so make sure you know what you are doing!

Original Link

GitOps for Istio – Manage Istio Config Like Code

At this year’s Kubecon conference held in Copenhagen, Alexis Richardson, CEO of Weaveworks, and Varun Talwar of a stealth startup spoke about GitOps workflows and Istio. The talks were followed up with a demo and tutorial by Weaveworks’ Stefan Prodan on how to roll out and manage canary deployments to Istio using GitOps principles.

The talks and the demo explain:

  • What GitOps is and why you need it.
  • How Istio works and how GitOps best practices can manage applications running on it.
  • How to do a Canary deployment on Istio using GitOps workflows.

What Is GitOps?

GitOps is a way to do Continuous Delivery. “It works by using Git as a source of truth for declarative infrastructure and applications,” says Alexis Richardson.

Automated delivery pipelines roll out changes to your infrastructure when changes are made to Git. But the idea goes further than that – it uses tools to compare the actual production state with what’s under source control and then tells you when your cluster doesn’t match the real world.

Git Enables Declarative Tools

By using declarative tools, the entire set of configuration files can be version controlled in Git. By using Git as the single source of truth, your entire infrastructure can be reproduced, reducing your mean time to recovery from hours to minutes.

GitOps Empowers Developers to Embrace Operations

The GitOps core machinery in Weave Cloud is in the CI/CD tooling and the critical piece is continuous deployment (CD) and release management that supports Git-cluster synchronization. Weave Cloud deploy is specifically designed for version controlled systems and declarative application stacks. Every developer uses Git and makes pull requests and now they can use Git to accelerate and simplify operational tasks for Kubernetes and other declarative technologies like Istio as well.

Three Core Principles of GitOps

The core principles described below, according to Alexis, are why GitOps is both Kubernetes and Cloud Native centric:

#1. Declarative Configuration Is at the Core of GitOps

By using Git as the source of truth, and Kubernetes rolling updates, it is possible to observe your cluster and compare it with the desired state. By treating declarative configuration like code, it allows you to enforce convergence by reapplying changes if they didn’t succeed.

#2. Kubectl Should Not Be Used Directly

As a general rule, it’s not a good idea to push directly from CI to production. Many people let their CI tool drive deployment, but when you’re doing that you’re potentially giving a notoriously hackable thing access to production.

#3. Use an Operator Pattern

With the operator pattern, your cluster always stays in sync with what’s been checked into Git. With Weave Flux, which is open source and is the basis of the Istio canary deployment demo below, you can manage changes in your cluster with an operator.

With both a developer flow and a production flow, merging from staging to production, the operator pulls changes into the cluster and deploys them atomically even when you have multiple changes.

GitOps Workflows for Istio

Next up Varun Talwar spoke about what Istio is and how GitOps workflows are made for managing apps on it.

Istio is a service mesh that launched about a year ago. It’s a dedicated infrastructure layer for all your service to service interactions in a microservices architecture. All actions in Istio are driven through declarative configuration files. This means that service meshes like Istio enable developers to manage service behavior completely through files that are kept in Git with your code.

With Git workflows developers can model anything in Istio including service behavior and their interactions like timeouts, circuit breakers, traffic routing, load balancing, as well as A/B testing, and canary releases.

Many Sets of Configuration Across Teams

Istio has four broad areas that are all declaratively configuration driven:

  1. Traffic management – everything to do with managing your ingress as well as service-to-service traffic.

  2. Observability – monitoring the latency, QPS, and error rates of all of the traffic.

  3. Security – authentication of all the service-to-service calls, as well as authorization.

  4. Performance – including failure scenarios: retries, timeouts, fault injection, and circuit breaking.

Because all of these areas can span different teams within your organization, it makes managing applications on Istio particularly challenging.

Many of these configuration driven settings are cross-team. One team, for example, may decide to use Zipkin for tracing, but another may want to use Jaeger instead. These decisions can be made for one service or they may be made across services. When decisions span teams, the approval workflows become more complex and are not always atomic. Canary releases, for example, are not a one-time atomic thing.

Canary Deployments to Istio With GitOps Workflows

Stefan Prodan showed us how to do a canary release to Istio using GitOps workflows with Weave Flux and Prometheus — all of which you can use in Weave Cloud as well for canary deployments and observability.

Briefly, canary deployments or releases are used when you want to test some new functionality with a subset of users. Traditionally you may have two almost identical servers: one that goes to all users and another with the new features that gets rolled out to only a set of users.

But by using GitOps workflows, instead of setting up two separate servers, your canary can be controlled through Git. When something goes wrong, the older version can be rolled out, and you can iterate on the canary deployment branch and keep rolling that out until it meets expectations.

Git-Controlled Canary Release With Full Observability in Weave Cloud

A change is pushed through your pipeline where you can send a percentage of your users a select subset of traffic. Using Weave Cloud you can observe in the dashboard whether the canary is working as expected. And if there are issues you can keep making changes, promote the next version, tag it, and deploy it through the same pipeline. This is an iterative process that GitOps workflows can help you manage.

Final Thoughts

Alexis Richardson gave us an overview of GitOps, and why you need to consider this approach when managing applications running on Kubernetes and Istio. Varun Talwar then spoke about what Istio is and how GitOps workflows can be used to manage your apps. Finally, Stefan Prodan showed us a particular use case where non-atomic workflows like canary releases can also be managed with GitOps on a service mesh like Istio.

To view the talk in its entirety, watch below.

Original Link

GitLab 10.8 — Incremental Rollouts, Push Mirroring, Dependency Scanning…and More

Continuing to deliver features in their latest release, GitLab’s 10.8 version allows teams to deploy applications to a subset of available nodes, maintain mirrors on private networks while keeping cloud-based repo available, and include dependency scanning from the GitLab service. GitLab is defining the way that CI/CD should be employed, by leveraging the model they have defined for their customers.

In the past, when a point release (aka a minor release) reached GA (general availability) status, adoption of these releases was often delayed or even skipped. This came down to the cost/benefit analysis of validating and deploying a minor release across the organization. With the advent of DevOps, the concept of CI/CD started to gain momentum, but a majority of those following a DevOps model still opted to group and plan releases.

While on a project building out Salesforce for a client, we ran into that very scenario. Our team was able to build and validate new features within every sprint cycle, but often those releases were not deployed to the customer – due to the upfront planning and communication required before the features could be introduced. It was the reality we lived and I quickly realized that our Salesforce team wasn’t the only Agile team living in this mode.

The team at GitLab has followed the model taken by Salesforce, where they are releasing updates to their service on a predefined basis. Like other cloud providers, these features are available to everyone…and everyone must adapt to the changes without the ability to linger on a given release. As a seasoned software developer, I appreciate this approach – especially having to handle the headaches related to supporting applications and frameworks which are out of date.

GitLab 10.8 Release

The latest version of GitLab is 10.8, which was released on 05/22/2018 and includes three very impressive features:

  • Incremental Rollouts – this new release approach allows a subset of nodes to be targeted for a given build/release. As a result, deployments could be targeted to a subset of users without having to devise an alternate build process. Current rollout options allow for 10%, 25%, 50% or 100% of your pods to be updated.

  • Push Mirroring – the feature that was initially available to paying/enterprise-level customers is now available in the open-source version of GitLab. Push Mirroring allows Git repositories to be replicated from one location to another. The most common use case for this functionality is the creation of a private GitLab instance, while still maintaining a public version. This functionality can also be used to move a project away from GitLab but keeping the old repository up to date.

  • Dependency Scanning – shipping secure code should always be a priority. The team at GitLab agrees, with their introduction of dependency scanning with the 10.8 release. GitLab’s built-in security functionality includes SAST, DAST, container scanning, and dependency scanning to keep you on top of vulnerabilities and ship secure code. As I pointed out in my “Just How Easy Is it to Be Hacked?” article, your application is only as secure as your weakest dependency. Using dependency scanning, GitLab customers can utilize the Interactive Security Reports to analyze and track any known issues. 

But Wait, There’s More…

There are so many other features packed into the 10.8 release.

  • Fuzzy file finder in Web IDE

  • Stage/Commit by file in the Web IDE

  • Group milestone burndown chart

  • GitLab Prometheus service metrics

If you want more details, see the GitLab 10.8 Release Notes for more information.

Have a really great day!

Original Link

Saving Half-Done Work With Git Stash

Imagine that you are working on a part of a project and it starts getting messy. There is an urgent bug that needs your immediate attention. It is time to save your changes and switch branches. The problem is, you don’t want to commit half-done work. The solution is git stash.

Stashing is handy if you need to quickly switch context and work on something else but you’re mid-way through a code change and aren’t quite ready to commit. — Bitbucket


Let’s say you currently have a couple of local modifications. Run  git status to check your current state:

$ git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#   modified: index.html
#
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   modified: assets/stylesheets/styles.css

We need to work on that urgent bug. First, we want to save our unfinished changes without committing them. This is where git stash comes in handy:

$ git stash
Saved working directory and index state WIP on master: bb06da6 Modified the index page
HEAD is now at bb06da6 Modified the index page
(To restore them type "git stash apply")

Your working directory is now clean and all uncommitted local changes have been saved! At this point, you’re free to make new changes, create new commits, switch branches, and perform any other Git operations.

By default, stashes are identified as “WIP,” work in progress, on top of the branch and commit they are created from.

Re-applying Your Stash

Git stash is temporary storage. When you’re ready to continue where you left off, you can restore the saved state easily: git stash pop.

Popping your stash removes the changes from your stash and reapplies the last saved state. If you want to keep the changes in the stash as well, you can use git stash apply instead.
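The round trip looks like this in a scratch repo (the file content is a stand-in):

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email "you@example.com" && git config user.name "A U Thor"

echo "v1" > index.html
git add . && git commit -qm "Modified the index page"

echo "v2" > index.html   # half-done work we don't want to commit
git stash                # working tree is clean again...
grep -q "v1" index.html

git stash pop            # ...and the change comes back, stash entry removed
grep -q "v2" index.html
```

After the pop, the modification is back in the working tree and git stash list is empty.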

Additional Tips and Tricks

There are a couple of other things you can do with a stash. Let’s take a look!

Try this out by adding a CSS line-height to your styles and stashing it with a nice comment.

  • Stashing untracked files: This is the only way to save untracked files: $ git stash -u or $ git stash --include-untracked
  • List multiple stashes: When you run git stash or git stash save, Git creates a commit object with a name and saves it in your repo. You can view the list of stashes you have made at any time by running git stash list:
$ git stash list
stash@{0}: On master: Modified the index page
stash@{1}: WIP on master: bb06da6 Initial Commit
  • Partial stashes: You can choose to stash just a single file, a collection of files, or individual changes from within files: $ git stash -p or $ git stash --patch.
    • RSpec tests are a must in Ruby on Rails projects, but they might not always be complete. Stash only the part that is ready to go!
  • Viewing stash diffs: There are two ways to view a stash: view the full diff of a stash with $ git stash show -p, or view only a summary of the latest stash with $ git stash show.
$ git stash show
 index.html | 1 +
 style.css  | 2 ++
 2 files changed, 3 insertions(+)
  • Creating a branch from the stash: Create a new branch to apply your stashed changes to, and then pop your stashed changes onto it: $ git stash branch <branch_name> <stash_id>.
  • Removing your stash: Use with caution, as a dropped stash can be difficult to recover; the only way to restore it is with the stash’s commit hash, which is printed when it is dropped, so don’t close the terminal until you’re sure.
    If you no longer need a particular stash, you can delete it with: $ git stash drop <stash_id>. Or you can delete all of your stashes from the repo with: $ git stash clear.
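Several of these tips combine naturally in one quick sketch (the CSS lines are invented for the demo):

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email "you@example.com" && git config user.name "A U Thor"

echo "body {}" > styles.css
git add . && git commit -qm "Initial Commit"

echo "line-height: 1.5;" >> styles.css; git stash   # ends up as stash@{1}
echo "color: navy;"      >> styles.css; git stash   # stash@{0}, most recent

git stash list                 # two entries, newest first
git stash show -p 'stash@{1}'  # full diff of the older stash
git stash drop 'stash@{0}'     # delete just one entry
git stash clear                # delete whatever is left
```

The stash IDs are quoted because some shells treat the braces specially.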

Hope this article helped you get a better understanding of how stashing works. Be sure to test it out!

Original Link

Bet You Didn’t Know – Discovering DevOps Secrets

This month, we’re highlighting some DevOps articles that aim to teach you something new – learn how to speed up your code merges, become a Git master, and breeze through Jenkins. We guarantee you’ll learn something you didn’t know before in this installment of This Month in DevOps!

5 Trending DevOps Articles on DZone

  1. 8 Useful But Not Well-Known Git Concepts, by Denny Zhang. These lesser-known Git tricks can help you solve problems that are not handled well by the GitHub and BitBucket GUIs.

  2. Don’t Install Kubernetes! by Michael Neale. You don’t have to install Kubernetes – read on to see how to take advantage of it anyway in the most efficient, least time-consuming way.

  3. 7 Code Merge Tools to Make Your Life 7x Easier, by Ben Putano. These open source and proprietary tools will help save you time and improve efficiency in your code merges.

  4. Hexagonal Architecture – It Works, by Ron Kurr. Learn what hexagonal software architecture is and how it helps developers build more resilient systems and make automated testing easier.

  5. Getting Started With Jenkins: The Ultimate Guide, by Rodrigo Moutinho. You’ve heard the buzz about Jenkins for CI/CD. This guide will teach you all the steps (and workarounds) you’ll need.



Who’s Hiring?

Here you can find a few opportunities from our Jobs community. See if any match your skills and apply online today!

DevOps Engineer
Location: Kyiv, Ukraine
Experience: 3+ years of experience managing a live production environment, preferably a high-load system. You have a good knowledge and at least some experience with AWS. You know Linux inside and out (or, you know at least some Linux, but can prove that you can learn fast).

Site Reliability Engineer
Location: San Mateo, CA, USA
Experience: 2+ years of experience in running large pieces of infrastructure. Experience with monitoring tools, building reliable applications in a cloud-like environment such as AWS, and concepts like auto-scaling, load balancing, and health checks.

Original Link

Building a Continuous Delivery Pipeline With Git and Jenkins

Jenkins is an automation server which can be used to build, test and deploy your code in a controlled and predictable way. It is arguably the most popular continuous integration tool in use today. The process of automatically building code in stages – and at each stage, testing and promoting it on to the next stage – is called a pipeline.

Jenkins is open source and has an extensive library of well-supported plugins. Not only is Jenkins cross-platform (Win/Mac/Linux), but it can also be installed via Docker, or actually on any machine with a Java Runtime Environment! (Raspberry Pi with a side of Jenkins, anyone?)

Note that there are other continuous integration tools available, including the ones described in this article, as well as my personal favorite, Travis. However, because Jenkins is so common, I want to explore a pattern that often becomes needlessly overcomplicated – setting up a staging pipeline (dev => QA => prod) using a Git repository.

Also note that Jenkins has its own “pipeline” concept (formerly known as “workflows”), which is intended for long-running, complicated build tasks spanning multiple build slaves. This article strives to keep things as simple as possible using backwards-compatible freestyle jobs. The idea is to use the power and simplicity of Git rather than introduce complexity from – and coupling to – Jenkins.

Review Your Git Workflow

The power of using Git for source control management is most realized when working on a team. Still, I recommend using Git for projects where you are the sole contributor, as it makes future potential collaboration easier – not to mention preserving a thorough and well-organized history of the project with every cloned instance of the repository.

For the purpose of the example we’ll explore here, consider a typical team of 3-8 regular code contributors working in a single Git repository. If you have more than 8 developers on one project, you may want to consider breaking the application into smaller, responsibility-driven repositories.

A common Git workflow in use today is Vincent Driessen’s “GitFlow,” consisting of a master branch, a develop branch, and some fluctuating number of feature, release, and hotfix branches. When I’m working alone on a personal project, I often commit straight to the master branch. But on a large professional endeavor, GitFlow is used to help the code “flow” into the appropriate places at the appropriate times. You can see how Git branches are related to continuous integration and release management in general.
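A minimal sketch of that flow in commands (the repository, branch start points, and feature name are illustrative; a full GitFlow setup also has release and hotfix branches):

```shell
# Scratch repo standing in for the team repository
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name Dev
echo base > app.txt && git add . && git commit -qm "initial"
git branch -M master                     # normalize the default branch name

git checkout -qb develop master          # long-lived integration branch
git checkout -qb feature/login develop   # short-lived feature branch
echo login > login.txt && git add . && git commit -qm "add login"

git checkout -q develop                  # feature finished: merge it back
git merge -q --no-ff -m "merge feature/login" feature/login
git branch -d feature/login

git checkout -q master                   # release time: develop flows to master
git merge -q --no-ff -m "release 1.0.0" develop
git tag 1.0.0
```

The `--no-ff` merges preserve the branch structure in history, which is part of what makes GitFlow histories readable.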

What Is the Goal of a Staging Pipeline?

Nearly every team I’ve worked on uses some variation of a staging pipeline, but surprisingly, no one ever really asks this question. It just feels like something we do because, well, it’s the way it’s supposed to be done.

So what is the goal, anyway? In most cases, a staging pipeline is intended to deploy automatically-built, easily-identifiable, and trustworthy versions of the code that give non-developers insight into what has been created by the team. Note that I’m not talking about official versions here, just a runnable instance of the code that comes from a particular Git commit.

These non-developers may include technical team members, such as business analysts (BAs), project managers (PMs), or quality analysts (QAs). Or they may include non-technical roles, such as potential customers, executives, or other stakeholders. Each role will have a different set of reasons for wanting this visibility, but it’s safe to assume that these people are not developers and do not build the code on their own machines. After all, developers can run different versions of the code locally whenever and however they like.

Let’s keep this in mind, noting that while Jenkins can be set up for developers to run parameterized builds using manual triggers, doing so does not achieve the stated goal. Just because you can do something doesn’t mean that you should!

Mapping Git Branches to Staging Environments

Now that we understand the purpose of a staging pipeline in general, let’s identify the purpose of each environment. While the needs of each team will vary, I encourage you to embrace the KISS Principle and only create as many environments in your pipeline as needed. Here’s a typical (and usually sufficient) example:


Dev

The purpose of the dev environment is to provide insight into what is currently on the develop branch, or whatever branch is intended to be in the next “release” of the code.

QA (aka Staging)

The purpose of the QA environment is to provide a more stable and complete version of the code for the purpose of QA testing and perhaps other kinds of approval.


Prod

The purpose of the prod environment is to host production-ready code that is currently on the master branch (or whatever branch you use for this purpose). This represents what can be made available to users, even if the actual production environment is hosted elsewhere. The code in this branch is only what has already been approved in the QA environment with no additional changes.

While developers can check out and run code from any branch at any time, these environments represent trustworthy instances of that codebase/repository. That’s an important distinction because it eliminates environmental factors such as installed dependencies (i.e. NPM node_modules, or Maven JARs), or environment variables. We’ve all heard the “it works on my machine” anecdote. For example, when developers encounter potential bugs while working on their own code, they use the dev environment as a sanity check before sounding the alarm:

While the dev and prod environments are clearly linked to a Git branch, you might be wondering about the QA environment, which is less clear. While I personally prefer continuous deployments that release features as soon as they’re ready, this isn’t always feasible due to business reasons.

The QA environment serves as a way to test and approve features (from develop) in batch, thus protecting the master branch in the same way that code reviews (pull requests) are meant to protect the develop branch. It may also be necessary to use the QA environment to test hotfixes – although we certainly hope this is the exception, not the rule. Either way, someone (likely the quality analyst) prevents half-baked code from making its way into the master branch, which is a very important role!

Since the QA environment is not tied to a branch, how do you specify what code should be deployed, and where it should come from?

In my experience, many teams overlook the tagging portion of GitFlow, which can be a useful tool in solving this problem. The QA environment represents a release candidate, whether you officially call it that or not. In other words, you can specify the code by tagging it (i.e. 1.3.2-rc.1), or by referencing a commit hash, or the HEAD of any branch (which is just a shortcut to a commit hash). No matter what, the code being deployed to the QA environment corresponds to a unique commit.

It’s important that the person who is testing and approving the code in the QA environment is able to perform these builds on their own, whenever they deem it necessary. If this is a quality analyst and they deploy to the QA environment using commit hashes, then they need a manual, parameterized Jenkins job and read-only access to the repository. If, on the other hand, they don’t/shouldn’t have access to the code, a developer should create the tag and provide it (or the commit hash, or the name of the branch). Personally, I prefer the former because I like to minimize the number of manual tasks required of developers. Besides, what if all of the developers are in a meeting or out to lunch? That never happens… right?

After it’s approved, that exact version of the code should be tagged with a release number (i.e. 1.3.2) and merged into master*. Commits can have many tags, and we hope that everything went well so that the version of the code that we considered to be a release candidate actually was released. Meaning, it makes perfect sense for a commit to be labeled as 1.3.2-rc.1 and 1.3.2. Tagging should be automatic if possible.
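In commands, the tagging flow looks like this (the version numbers are the illustrative ones from above, applied to a scratch repository; pushing the tags assumes an origin remote exists):

```shell
# Scratch repo standing in for the project
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email qa@example.com && git config user.name QA
echo work > app.txt && git add . && git commit -qm "feature work"

git tag 1.3.2-rc.1            # mark this exact commit as the release candidate
# ... the QA environment is built from 1.3.2-rc.1 and approved ...
git tag 1.3.2 1.3.2-rc.1      # same commit, now also tagged as the release
git tag --points-at HEAD      # shows both tags on the one commit
# git push origin 1.3.2-rc.1 1.3.2   # would publish the tags to the origin repo
```

Because both tags point at the same commit, there is never any doubt about exactly which code was approved and released.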

* Note that my recommendation differs from Driessen’s on this point, as he suggests tagging it after merging. This may depend on whether your team merges or rebases. I recommend the latter for simplicity.

How to Make Staging Environments More Trustworthy

You can make your environments even more trustworthy in the following ways:

  • Follow a code review process where at least one other team member must approve a pull request
  • Configure build and unit test enforcement on all pull requests, so it is impossible to merge code that would “fail” (whatever that means for your team/application)
  • Establish branch protection in your Git repository so users cannot accidentally (or intentionally) push code directly to environment-related branches in the team repository, thus circumventing the review process
  • Set up a deployment hook, so that a Jenkins build job is automatically triggered when code is committed (or merged in) to the corresponding branch. This may make sense for the develop branch!
  • Be cautious about who has access to configure Jenkins jobs; I recommend two developers only. One person is too few due to the Bus Factor, and more than two unnecessarily increases the likelihood of a job being changed without the appropriate communication or consensus.
  • Display the version of the code in the application somewhere, such as the footer or in the “about” menu. (Or, put it in an Easter Egg if you don’t want it visible to users.) The way you obtain the version, specifically, will depend greatly on the language of your app and the platform you use to run it.

Creating a QA Build Job From a Commit Hash

I have now sufficiently nagged you about all the ways you should protect your code. (You’ll thank me later, I promise!) Let’s get down to the business of configuring a Jenkins job for a single Git repository.

This job will fit into the middle of the dev => QA => prod pipeline, helping us deploy code to the QA (aka staging) environment. It will allow a quality analyst to build and tag the code given a commit hash and tag name.

This build should:

  1. Check out the specific commit (or ref).
  2. Build the code as usual.
  3. Tag the commit in Git.
  4. Push the tag to the origin repo.
  5. (Optional, but likely) Deploy it to a server.

Notice that order matters here. If the build fails, we certainly don’t want to tag and deploy it. Steps 2 and 5 are fairly standard for any Jenkins job, so we won’t cover those here.
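The five steps above can be sketched as the shell a freestyle job might run. This is a rough illustration, not the Jenkins configuration itself: scratch repositories stand in for the origin repo and the workspace, and the build and deploy steps are placeholder echoes.

```shell
# Scratch stand-ins for the origin repo and the Jenkins workspace
origin=$(mktemp -d) && git init -q --bare "$origin"
work=$(mktemp -d) && git clone -q "$origin" "$work" 2>/dev/null; cd "$work"
git config user.email ci@example.com && git config user.name Jenkins
echo code > app.txt && git add . && git commit -qm "some feature"
git push -q origin HEAD:master

COMMIT_HASH=$(git rev-parse HEAD)          # job parameter: commit to build
TAG="1.3.2-rc.1"                           # job parameter: tag to apply

git checkout -q --detach "$COMMIT_HASH"    # 1. check out the exact commit
echo "building..."                         # 2. build step (placeholder)
git tag "$TAG" "$COMMIT_HASH"              # 3. tag the commit
git push -q origin "$TAG"                  # 4. push the tag to the origin repo
echo "deploying to QA..."                  # 5. deploy step (placeholder)
```

If the build step fails, the script aborts before the tag is created or pushed, which is exactly the ordering the steps call for.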

One-Time Setup

Since Jenkins needs to push tags to the origin repo, it will need a basic Git configuration. Let’s do that now. Go to Jenkins > Manage Jenkins > Configure System > Git plugin. Enter a username and email. It doesn’t really matter what this is, just be consistent!

Create a New Job

  1. Create a new freestyle project with a name of your choosing (for example, “QA-staging”)
  2. Under General, check “This project is parameterized”. Add two parameters, as shown below. The “Default Value” of COMMIT_HASH is set to “refs/heads/master” for convenience, since we just want to make sure the job has a valid commit to work with. In the future, you may wish to set this to “refs/heads/develop”, or clear this field entirely.
  3. Under Source Code Management, choose “Git”. Add the URL of the repository and the credentials. (Jenkins will attempt to authenticate against this URL as a test, so it should give you an error promptly if the authentication fails.) Use the commit hash given when the job was started by typing ${COMMIT_HASH} in the Branch Specifier field.
  4. Under Post-build Actions, add an action with the type “Git Publisher”. Choose “Add Tag” and set the options as shown below. We check both boxes, because we want Jenkins to do whatever it needs to do in the tagging process (create or update tags as needed). ${TAG} is the second parameter given when the job was started.

When you run the job, you’ll be prompted to enter a commit hash and tag name. Here, you can see that I’ve kicked off two builds: The first build checked out and tagged the latest commit on master (you’d probably want refs/heads/develop if you’re using GitFlow, but you get the idea).

The second build checked out an older commit, built it, and tagged it with “test”. Again, you’d probably be building and tagging later versions of the code, not earlier ones, but this proves that the job is doing exactly what it’s told!

The first build, the HEAD of the master branch, succeeded. It was then tagged with “0.0.1” and pushed to the origin repo. The second build, the older commit, was tagged as well!


Git and Jenkins are both very powerful, but with great power comes great responsibility. It’s common to justify an unnecessary amount of complication in a build pipeline simply because you can. While Jenkins has a lot of neat tricks up his sleeve, I prefer to leverage the features of Git, as it makes release management and bug tracking significantly easier over time.

We can do this by being careful about the versions of code that we build and tagging them appropriately. This keeps release-related information close to the code, as opposed to relying on Jenkins build numbers or other monikers. Protecting Git branches reduces the risk of human error, and automating as many tasks as possible reduces how often we have to pester (or wait on) those humans.

Finally, processes are necessary when working on a team, but they can also be a drag if they are cumbersome and inflexible. My approach has always been: if you want people to do the right thing, make it the easy thing. Listen to your team to detect pain points over time, and continue to refine the process with Git and Jenkins to make life easier.

Original Link

Commitland! [Comic]


Original Link

8 Useful but Not Well-Known Git Concepts

For advanced Git usage, I usually leverage the GUI of GitHub or BitBucket. But the GUI way may not solve some requirements beautifully.

I found several fundamental Git concepts. They are quite useful, but may not be well-known.

Check it out and let me know what you think!


Q: What is the .gitkeep file?

You may know .gitignore very well. But what is .gitkeep?

By default, Git doesn’t track empty directories. A file must exist within it. We can use the .gitkeep file to add empty directories into a git repo.
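For example (the directory name here is made up; .gitkeep is a naming convention, not a Git feature, so any file name would work):

```shell
# Scratch repo for illustration
repo=$(mktemp -d) && cd "$repo" && git init -q
mkdir logs                     # an empty directory
git add logs
git status --porcelain         # prints nothing: Git ignores the empty dir

touch logs/.gitkeep            # conventionally-named placeholder file
git add logs/.gitkeep
git status --porcelain         # now the directory is staged via the file
```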

Q: What’s git cherry-pick?

Pick changes from another branch, then apply them to the current branch.

This usually happens before a branch merge.

$ git checkout $my_branch
$ git cherry-pick $git_revision

In the below figure, we cherry-pick C D E from branch2 to branch1.


Q: How to set up two remote repo URLs for one local git repo?

Sometimes I need to push before I can test, so I may push quite often. I don’t want to have too many tiny git pushes or too many push notifications for everyone. So what do I usually do?

(Warning: this may not be a good practice for some projects, in terms of code security.)

Set up two remote repos for one local git repo. Keep pushing to one remote repo. Once it’s fully tested, push to the official repo once.

$ git clone <github_repo_url>
$ git config remote.myremote.url <bitbucket_repo_url>  # add a second remote
$ git config remote.origin.url   # inspect the first remote
$ cat .git/config
$ git push origin master    # origin points to github
$ git push myremote master  # myremote points to bitbucket

Q: What is git stash?

Stash local changes without git commit or git push. Switch to another branch, then come back later.

# Shelve and restore incomplete changes

# Temporarily store all modified tracked files
$ git stash

# Restore the most recently stashed files
$ git stash pop

# List all stashed changesets
$ git stash list

# Discard the most recently stashed changeset
$ git stash drop

Q: What is git rebase?

git-rebase: Reapply commits on top of another base tip. If you don’t care about your git commit history, you can skip “git rebase,” but if you would prefer a clean, linear history free of unnecessary merge commits, you should reach for git rebase instead of git merge when integrating changes from another branch.
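A small sketch of what that looks like (branch and file names are illustrative): the feature branch’s commits are replayed on top of master’s tip, so the resulting history is linear, with no merge commit.

```shell
# Scratch repo for illustration
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name Dev
echo base > base.txt && git add . && git commit -qm "base"
git branch -M master

git checkout -qb feature
echo f > feature.txt && git add . && git commit -qm "feature work"

git checkout -q master                  # master moves on in the meantime
echo m > master.txt && git add . && git commit -qm "master moved on"

git checkout -q feature
git rebase -q master                    # replay "feature work" on master's tip
git log --oneline                       # linear history, no merge commit
```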

Q: What is git squash?

You may have several local git commits. Now run “git push,” and it will generate several git commit histories. To consolidate them as one, we can use the “git squash” technique.

# Squash the 3 latest local commits into one
$ git rebase -i HEAD~3
# In the editor, keep the first commit as "pick" and change the others to "squash"
$ git rebase --continue
$ git push origin $branch_name

Q: What is git revert?

A fun metaphor is to think of Git as a timeline management utility. Commits are snapshots of a point in time or points of interest. Additionally, multiple timelines can be managed through the use of branches.

When “undoing” in Git, you are usually moving back in time, or to another timeline where mistakes didn’t happen.

  • [Undo local changes] When the changes haven’t been pushed yet

If I want to totally discard local changes, I will use git checkout. When the repo is small, I might even run “git clone,” then git checkout.

$ git checkout $branch_name
$ git log -n 5
$ git revert $git_revision
$ git push origin $branch_name
  • How to undo a git pull?

Command       Scope          Common use cases
git reset     Commit-level   Throw away uncommitted changes
git reset     File-level     Unstage a file
git checkout  Commit-level   Switch between branches or inspect old snapshots
git checkout  File-level     Discard changes in the working directory
git revert    Commit-level   Undo commits in a public branch
git revert    File-level     (N/A)

Q: What is git reflog?

Git maintains a list of checkpoints which can be accessed using the reflog.

You can use reflog to undo merges, recover lost commits or branches and a lot more.
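For instance, here is a sketch of recovering a commit thrown away by a hard reset (repo and branch names are made up; `HEAD@{1}` is the reflog entry just before the reset):

```shell
# Scratch repo with two commits
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name Dev
echo one > f.txt && git add . && git commit -qm "first"
echo two > f.txt && git commit -qam "second"

git reset -q --hard HEAD~1       # "lose" the second commit
git log --oneline                # only "first" is visible now

git reflog                       # the checkpoint for "second" is still here
git branch recovered "HEAD@{1}"  # resurrect the lost commit on a new branch
```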

Cheat sheet of my frequent Git commands:

Summary                                            Command
Show the latest history, one line per commit       git log -n 10 --oneline
Check the git log by pattern                       git log --grep="$pattern"
Check the git log by file                          git log -- $file_name
Change the last commit message                     git commit --amend
Check the git configuration for the current repo   git config --list
Delete a local branch                              git branch -D $branch
Delete a remote branch                             git push origin --delete $branch
Delete a local tag                                 git tag -d $tag
Delete a remote tag                                git push --delete origin $tag

Original Link

Smooth Out Your CI/CD

Each month, we explore an aspect of DevOps. This time, we’re making your life easier with articles on how to smooth out your development and continuous integration/delivery processes with Jenkins pipeline-as-code, Git, Kubernetes, and even the humble command line. You won’t want to miss these tips, so let’s get started!

5 Trending DevOps Articles on DZone

  1. GitLab Opens CI/CD to GitHub Users, by John Vester. Learn more about the news that CI/CD leader GitLab has opened up their solution to GitHub repositories and customers.

  2. How to Use the Jenkins Scripted Pipeline, by Alejandro Berardinelli. In this simple tutorial, you’ll learn about Jenkins pipeline as code and see some guidelines on how to develop your pipeline scripts.

  3. Why the Command Line? Why Now?, by Samidip Basu. Command line tools are a consistent way for developers to work across operating systems. See why CLI is more relevant now than ever.

  4. Oh, Git Configurations! Let’s Simplify It, by Tomer Ben David. Learn where exactly to find your git configuration files by going inside the mind of Linus Torvalds to better understand how git is organized – simply.

  5. Don’t Install Kubernetes!, by Michael Neale. We didn’t say not to use it – you don’t actually have to install Kubernetes. See how to take advantage of it anyway in the most efficient, least time-consuming way.

You can get in on this action by contributing your own knowledge to DZone! Check out our new Bounty Board, where you can claim writing prompts to win prizes! 

Dive Deeper Into DevOps

  1. DZone’s Guide to DevOps: Culture and Process: a free ebook download.

  2. Test Design Automation: Download this free Refcard to get started with test design automation, explore its many benefits, and find real-world use cases.

Who’s Hiring?

Here you can find a few opportunities from our Jobs community. See if any match your skills and apply online today!

Site Reliability Engineer
Wikimedia Foundation
Location: San Francisco, CA, United States
Experience: 3+ years experience in an SRE/Operations/DevOps role as part of a team. Experience with managing geographically distributed, highly available, high traffic infrastructure based on Linux.

Python Engineer
commercetools, Inc.
Location: Berlin, Germany
Experience: At least three years of work experience in software engineering for production. Experience with machine learning algorithms and relevant Python libraries. Solid understanding of databases, git, continuous integration and best practices in monitoring and deployment.

Original Link

New Major Version for Git Client Tower Reaches Public Beta

We’re happy to announce the public beta for our new major version of Tower – the popular Git client for Mac and Windows.

Over the last months, several thousand users had early access as part of a private beta and the feedback has been fantastic. As of today, the beta is open to the public. To get the new version, you can simply sign up here.

New Features

We’re introducing many great new features to Tower and we’d love to highlight the following three: 

We’re excited to finally bring Pull Requests to Tower. A highly requested and super useful feature. You can now create, merge, close, comment and inspect Pull Requests right from within Tower. The new version supports Pull Requests for GitHub, GitLab, and Bitbucket. 


Pull Requests in the new Tower. 

We also added Interactive Rebase to Tower – a powerful way to rewrite the history of a repository. We took our time to implement this feature for a reason: we wanted to get it right and make it a pleasure to use. 


Interactive Rebase in the new Tower. 

With Quick Actions, we developed a unique feature that makes using Tower much more productive. It allows you to perform actions right from the comfort of your keyboard. Just to spark your imagination, here are some examples:

  • Give it a branch name (or parts of it) and it will offer to do a checkout.

  • Give it a file name and it will offer to show it in the File History.

  • Give it a commit hash and it will offer to show this commit’s details.

Make sure to give it a try. We use it every day. 


Quick Actions in the new Tower. 

There are many other new features that take Tower to a whole new level. You can read more about them in detail on our blog.

The new version also introduces a completely new UI concept that makes navigating through your Git repository as easy as browsing the web. In addition, we heavily invested in substantial performance improvements and polishing existing features. 

We’d love to hear your feedback! Make sure to sign up for the free public beta and get to explore the fastest, most powerful Tower – both on Mac and Windows. 

Original Link

Oh, Git Configurations! Let’s Simplify It

Oh, Git Is Complex!

Git is a complex system. To see the proof, check out:


To mitigate that, we are going to get inside the head of Linus Torvalds! (Just a little bit – maybe into a single neuron). In this part, we will focus on configurations. I always thought that to understand a system, I needed to understand its configurations (or even better, its installation).

When I get to a new company, the first thing I try to figure out is:

  1. How do I install this?
  2. How do I configure this?

How Does Linus Torvalds Think?

“Everything is a file.”

Yes, this is how Linus thinks: everything is a file.

As Linus loves this idea that everything is a file, you can just view git configurations as a file.

So, if we manage to learn which files Linus uses in git, we might be able to penetrate oh-my-complex-git!

3 Fallback Layers of Config

  1. System – Your OS git config – git config --system
  2. Global – Your User git config – git config --global
  3. Local – Your Repository git config – git config --local
git config --system # => /etc/gitconfig
git config --global # => ~/.gitconfig or ~/.config/git/config
git config --local  # => .git/config

Git first reads the git config from .git/config -> [fallback to] ~/.gitconfig -> [fallback to] /etc/gitconfig.

Email per Repo

If you have different email addresses for different repositories, this goes into .git/config == git config --local
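For example (the address below is made up), setting a repository-only email looks like this:

```shell
# Scratch repo for illustration
repo=$(mktemp -d) && cd "$repo" && git init -q
git config --local user.email "tomer@work.example.com"  # this repo only
git config user.email   # the local layer wins over global and system
```

Commits made in this repository will use the work address, while every other repository keeps whatever the global config says.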

Get the Global Config With git config --list --global

➜ tmp git config --list --global
user.name=Tomer Ben David
user.email=
core.autocrlf=input
format.pretty=format:%h %Cblue%ad%Creset %ae %Cgreen%s%Creset

Would that be the same as cat ~/.gitconfig? What do you think?

➜ tmp cat ~/.gitconfig
[user]
    name = Tomer Ben David
    email =
[core]
    autocrlf = input
[format]
    pretty = format:%h %Cblue%ad%Creset %ae %Cgreen%s%Creset

Yes and no! It’s a different notation, but generally the same!

Get the Merged Config With git config --list

The merged config is the combination of all configs with the hierarchy. Let’s see it on my machine:

➜ tmp git config --list
alias.a=!git add . && git status
alias.au=!git add -u . && git status
alias.aa=!git add . && git add -u . && git status
alias.c=commit
alias.cm=commit -m
alias.ca=commit --amend
alias.ac=!git add . && git commit
alias.acm=!git add . && git commit -m
alias.l=log --graph --all --pretty=format:'%C(yellow)%h%C(cyan)%d%Creset %s %C(white)- %an, %ar%Creset'
alias.ll=log --stat --abbrev-commit
alias.lg=log --color --graph --pretty=format:'%C(bold white)%h%Creset -%C(bold green)%d%Creset %s %C(bold green)(%cr)%Creset %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative
alias.llg=log --color --graph --pretty=format:'%C(bold white)%H %d%Creset%n%s%n%+b%C(bold blue)%an <%ae>%Creset %C(bold green)%cr (%ci)' --abbrev-commit
alias.master=checkout master
alias.spull=svn rebase
alias.spush=svn dcommit
alias.alias=!git config --list | grep 'alias\.' | sed 's/alias\.\([^=]*\)=\(.*\)/\1 => \2/' | sort
credential.helper=osxkeychain
user.name=Tomer Ben David
format.pretty=format:%h %Cblue%ad%Creset %ae %Cgreen%s%Creset

That was it – all the git config on my machine! I hope I didn’t put any passwords in there! If I did, please let me know – code review is important, guys!

Git Config --local/global/system: Get a Single Config

Ask layer by layer for a config:

➜ tmp git config --local user.name
fatal: BUG: setup_git_env called without repository # We called from a non-repository folder.
➜ tmp git config --global user.name
Tomer Ben David
➜ tmp git config --system user.name # => nothing in the system config file.
➜ tmp

Aha! So it’s coming from the global (user) config!

Note that in the file the name appears under a [user] section, while git config returns the combined key (user.name).

If a key has three levels (section, subsection, and key), then in the file it will look like this:

[branch "gke"]
    remote = origin
    merge = refs/heads/gke

So, this one should be branch.gke.remote. Let’s verify this:

➜ .git git:(gke) git config branch.gke.remote # => yes it is branch.gke.remote!

Set a New Config With git config mysection.mykey myvalue

➜ .git git:(gke) git config mysection.mykey myvalue
➜ .git git:(gke) git config mysection.mykey
myvalue

So, we were able to set it. Let’s look at the file:

➜ .git git:(gke) cat config
[core]
    repositoryformatversion = 0
    filemode = true
    bare = false
    logallrefupdates = true
    ignorecase = true
    precomposeunicode = true
[remote "origin"]
    url =
    fetch = +refs/heads/*:refs/remotes/origin/*
[branch "gke"]
    remote = origin
    merge = refs/heads/gke
[mysection] # =====> Here it is! See the section in []
    mykey = myvalue


You now know exactly where your git config files are. This is very helpful and much more explainable than using the git commands. I’m not sure I was able to simplify it, but it makes at least some sense at the end of the day for us programmers!


Ry’s Git Tutorial: here is my book review

Original Link

GIT for the Scared Java Dev: Trusting Your IDE [Video]

One of the main issues some developers have when it comes to Git is trusting their IDE integration because… who knows what the IDE is doing? So they would rather rely on a good old command line window and execute the Git commands themselves.

But isn’t there a better way? Doesn’t using IDEs have its advantages?

In this practical screencast, you will learn how to trust IntelliJ’s Git skills and get a solid start into all the great productivity features that this IDE offers, Git and Version Control-wise.

Original Link

Resolve Git Conflicts Using Katalon Studio

Why Do We Have Git Conflicts?

In a source control system like Git, conflicts may occur when two or more people make changes to the same file concurrently. The conflicts may appear at a member’s local repository or Git remote repository. In order to avoid conflicts, the team must collaborate following several Git practices. For example, before pushing new source code to the Git remote repository, one must remember to fetch the latest version from Git remote repository, resolve any conflicts and merge the code with the local version.

The chart below demonstrates how conflicts may occur when Tom and Emma are working on the same project. The conflicts occur when Tom and Emma try to push new code to the Git remote repository without updating the changes from each other.

Git conflicts Katalon Studio

Resolve Git Conflicts Using Katalon Studio

Let’s consider the following situation: Tom and Emma are working on the same test case in a test project. Emma added a new comment (“EMMA ADDED THIS COMMENT”), then committed and pushed the change to the Git remote repository.

Resolve Git Conflict Katalon Studio

At almost the same time, Tom also added a new comment (“TOM ADDED THIS COMMENT”), then committed and tried to push to the Git remote repository.

Resolve Git Conflict Katalon Studio

Unfortunately, since Emma had pushed the code before Tom, the version of the code in Git was different from the version of the code in Tom’s local repository and therefore, Git rejected Tom’s “push” action.

What should Tom do to push his change to the Git remote repository?

First, Tom has to “pull” the code from the Git remote repository to his local machine.

Obviously, Tom will see a message about the conflict:

Resolve Git conflict

In the “Script” mode of the test case “TC2_Verify Successful Appointment” in Tom’s Katalon Studio project, there are errors with indicators such as “<<<<<<<” (convention from Git). Let’s look at the script more carefully:

Resolve Git conflict

Recall that the comments were added by Tom and Emma, and the “conflict” is now in Tom’s Katalon Studio project. Everything between “<<<<<<< HEAD” and “=======” is Tom’s change. And, everything between “=======” and “>>>>>>> branch ‘master’…” comes from Emma, which is currently in the Git remote repository.
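Schematically, the conflicted region of the script looks like this (the comment text is taken from the example above; the trailing branch label is abbreviated here):

```
<<<<<<< HEAD
// TOM ADDED THIS COMMENT
=======
// EMMA ADDED THIS COMMENT
>>>>>>> branch 'master' of ...
```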

Now Tom has to decide whether one change is correct, or both are correct, or both are wrong. He then has to replace the conflicted lines with the correct content, e.g. “THIS IS THE CORRECT COMMENT”:

[Image: Resolve Git conflict in Katalon Studio]

After resolving the conflict, Tom is now able to commit and push the change to the Git remote repository.
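
For readers who want to see the underlying Git mechanics outside Katalon Studio, here is a rough command-line sketch of the same exchange, assuming git is installed; the repository layout, file name, and identities are invented for the illustration:

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q --bare origin.git

# Emma clones, creates the shared test case, and pushes it
git clone -q origin.git emma && cd emma
git config user.email emma@example.com && git config user.name Emma
echo '// base script' > TC2.groovy
git add . && git commit -qm "Add test case"
BR=$(git symbolic-ref --short HEAD)
git push -q origin "$BR"

# Tom clones the same repository
cd .. && git clone -q origin.git tom
cd tom && git config user.email tom@example.com && git config user.name Tom

# Emma pushes her comment first
cd ../emma
echo '// EMMA ADDED THIS COMMENT' >> TC2.groovy
git commit -qam "Emma's comment" && git push -q origin "$BR"

# Tom commits his own comment; his push is rejected...
cd ../tom
echo '// TOM ADDED THIS COMMENT' >> TC2.groovy
git commit -qam "Tom's comment"
git push -q origin "$BR" 2>/dev/null || echo "push rejected"

# ...so he pulls, hits the conflict, resolves it, and pushes the merge
git pull --no-rebase origin "$BR" || true   # merge stops on the conflict in TC2.groovy
printf '// base script\n// THIS IS THE CORRECT COMMENT\n' > TC2.groovy
git add TC2.groovy && git commit -qm "Resolve conflict"
git push -q origin "$BR"
```

Katalon Studio drives the same fetch/resolve/merge cycle through its UI; the commands only make the rejected push and the conflicting pull explicit.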


To minimize conflicts in a team with more than one member, you should define a process from the very beginning so that all team members are on the same page when using Git. Here are some suggested practices:

  • Commit often: do not wait until a huge amount of script has been created before committing and pushing to the Git remote repository. The smaller the set of scripts pushed, the easier it is to resolve any conflict.
  • Pull changes from the Git remote repository before working on new scripts and before committing.
  • Have each member work on one feature at a time.

Best Practices in Katalon Studio Integration With Git

Git is a very powerful system for source version control, so if you have more than one member on your Katalon Studio project, you may consider using Git as a remote repository for your testing project. The content below provides several typical workflows for working with Git in Katalon Studio. Each workflow is presented via a diagram, with each activity performed in Katalon Studio, Git, or the local repository.

The activities involve performing one or more commands in Katalon Studio and Git.

Below are some good practices for using Katalon Studio’s integration with Git.

Starting a Katalon Studio Project With Git

Below is a recommended workflow for starting a new Katalon Studio project and integrating it with Git. The main activities in this workflow include Katalon Studio registration, enabling Git integration, creating a Git repository, and cloning the project into Katalon Studio.


Add an Existing Katalon Studio Project Into a New Git Repository

In case you have an existing Katalon Studio project and want to add it to a Git repository and share it with team members, you can apply the following workflow, which involves committing and pushing changes to Git.


Add an Existing Katalon Studio Project to an Existing Git Repository

Very often, we have a Git repository from the development team for storing production code. The QA/tester team is required to save automation test scripts into this repository under a “test” folder.

You can apply the following steps to start a new Katalon Studio project and integrate it with the existing Git repository from the development team:

[Image: Starting a Katalon project in an existing Git repository]

The situation is a bit more complicated if your automation team has already been working on a Katalon project for a while and wants to add it to an existing Git repository owned by the development team. In this case, you can apply the following workflow: move the existing Katalon automation project into the test folder of the development team’s local repository, then open the project from Katalon Studio.

[Image: Adding a Katalon project to an existing Git repository]

For further instructions and help, please refer to the Katalon Studio Tutorials and Katalon Forum.

Original Link

DevOps Zone 2017: End-of-Year Special

DevOps went through a ton of changes and growth over the last year. In this review, we hope to remind you of the most interesting events and refresh your memory on some of the most-read and most interesting articles from our contributors. Let’s take a look back on the best of this important zone in 2017!

Smash Hits From the DevOps Zone

  1. Most Useful Linux Command Line Tricks, by Seco Max. Here it is: the most popular article in the DevOps Zone this year! Brush up on the command line skills you’ve forgotten, and learn some you might not know.

  2. Solution Architecture vs. Software Architecture, by David Shilman. This breakout article from a great new author dives into distinct architecture domains, like DevOps and data architecture, how they interconnect, and their importance in overall solution architecture.

  3. The OOP(S) Concepts You Need To Know: Part 1, by Gabriele Tomassetti. Be sure not to miss this series on Object Oriented Programming, or OOP, the most widely used paradigm fundamental to becoming a better programmer.

  4. How to Set Up a Continuous Delivery Environment, by Piotr Minkowski. Jenkins’ Pipeline Plugin offers a simple way to configure all steps in the same place. You can do almost everything inside your pipeline without any additional plugins.

  5. A Complete Guide to Performance Testing Types: Steps, Best Practices, Metrics, and More, by Angela Stringfellow. You might think you know everything about performance testing and automation. Read this review of common misconceptions and pitfalls to see how your testing measures up.

  6. The SOLID Principles by Examples Series (Parts 1, 2, 3, 4, 5), by Michele Ferracin. As the name says, this series of posts explains and illustrates all five principles with code examples, for cleaner, more loosely-coupled code. Don’t miss it!

Cool DevOps Articles You May Have Missed

  1. Pimp My Git – Manage Different Git Identities, by Sandra Parsick. Git users: this tutorial runs through two methods of managing Git identities when cloning new repositories for separate projects.

  2. Can DevOps Be a Role?, by Jeffrey Lee. Companies are hiring DevOps engineers left and right, but is DevOps really a role? Let’s explore what DevOps is as a role, and as a mindset, brought to you by our awesome DevOps Zone Leader, Jeffrey.

  3. The Ultimate List of C# Tools: IDEs, Profilers, Automation Tools, and More (Part 1), by Angela Stringfellow. If you’re tired of using the same C# programming tools, this compilation should give you a better idea of what is out there.

  4. What Is CI/CD?, by Izzy Azeri. Let’s take a look at CI and CD, the fundamental cornerstones of any DevOps shop and look at how you can leverage these concepts to help better deliver your next project.

You can get in on this action by contributing your own knowledge to DZone! Check out our new Bounty Board, where you can claim writing prompts to win prizes! 

DevOps News

So much happened in the DevOps world in 2017, it’s hard to wrap your mind around. Luckily, our contributor Cynthia Dunlop compiled a great list of the major happenings in her article The Top Software Testing News of 2017!

Superstar Contributors

Though these contributors don’t appear in the list above, they’ve consistently provided us with quality content for the DevOps Zone (and often, other zones, too). Here’s a shoutout to more authors worth checking out:

  1. Tom Smith – research analyst, providing us with invaluable executive interviews and more

  2. John Vester – Zone Leader with a finger on the pulse of development news

  3. Erik Dietrich – expert programmer-of-all-trades

  4. Grzegorz Ziemoński – Zone Leader, Java master, and self-titled “software craftsman”

  5. Daniel Stori – Zone Leader and comic artist extraordinaire

  6. Anders Wallgren – CTO at Electric Cloud, sharing years of DevOps experience and knowledge

The zone wouldn’t be what it is without all our great contributors. Thanks for another year of sharing your content with us and our community!

Dive Deeper Into DevOps

  1. DZone’s Guide to Automated Testing: Improving Application Speed and Quality: a free ebook download.

  2. Continuous Testing 101: This DZone Refcard aims to clear up misconceptions around this concept and teach you about its methodology.

And be sure to check out our newest DevOps Guide, available today on our Guides page!

Original Link

Defeating DevOps Drift: A Continuous Improvement Approach

Once your organization has embraced and implemented a DevOps approach to software development, the work shouldn’t end there. Your team may have a streamlined DevOps strategy that fosters a culture of continuous delivery, but they still need to avoid falling into the trap of a drifting or coasting mentality.

There are definitive signs that systems are adrift, including:

  • Archaeology: Out of date systems and tooling
  • Recalcitrance: Last-generation approaches to DevOps (e.g., VMs instead of containers)
  • Fragility: Reluctance to update existing infrastructure/fear of breaking something
  • Convolution: Increasing or uncontrolled infrastructure complexity and costs

If any – or all – of these indicators are present, it’s likely time to begin the continuous improvement process. The first step involves developing metrics to track the key DevOps indicators that matter to your business, which will help to determine ROI.

Time-to-Developer Productivity

On-boarding new (or existing) employees to a project is often a lengthy, painful process. No two developers’ machines are alike, and documentation is almost always out of date by the time a new team member signs on. Many real-world teams end up with a fleet of one-off dev environment configurations that are difficult to troubleshoot, update, and collaborate on. As a result, new team members usually face an uphill battle to productivity.

By using DevOps toolsets, such as Vagrant and Docker, this variability can be greatly reduced or eliminated. Once the team starts treating their environment like any other piece of source code, time-to-productivity plummets. When these practices are properly implemented, it is not uncommon to see developers successfully building, writing, and testing code on day one of a new assignment. In the more traditional approaches we often see amongst our clients, this can take upwards of one to two weeks, or even longer.

Production Defect/Error Rates

By building quality into your DevOps pipeline, you can catch defects before they make it to your deployed software. Utilizing infrastructure as code principles decreases environmental variability and ensures that software running on a developer’s laptop acts the same way that it does in production.

Time It Takes to Move Code From Git Commit to Production Deployment

The highest-performing organizations can complete full development cycles in minutes or hours, moving code from a developer commit to production. Legacy workflows can take weeks, months, or, in worst-case scenarios, years. By lowering the amount of time it takes to move code from commit to production, organizations greatly reduce the risk of failed deployments by limiting the scope of any one deployment. Additionally, developers become empowered and responsible for ensuring their code works as desired the first time. External dependencies and integration tasks become much simpler, and more features per dollar can be delivered.

Time It Takes to Release a New Feature

Business outcomes are largely tied to an organization’s ability to innovate and respond to changing market demands. The previous three metrics work in concert to determine how quickly your development team can respond to the needs of the business and deliver the needed features. When new feature work ends up mired in red tape, technical challenges, or poor quality, the needs of the business cannot be satisfied in a timely manner, resulting in missed opportunity. This drops straight to an organization’s bottom line.

Customer/End-User Satisfaction

This often overlooked metric is an important high-level indicator of the effectiveness of an organization’s overall approach. Low customer satisfaction can be an early indicator of underlying issues, such as inadequate performance, system downtime, poor quality, or incorrect/inadequate feature sets. If the end user is not satisfied with the product, further investigation is typically warranted to identify the root cause.

System Downtime

Downtime impacts your entire software supply chain, from developers all the way to end users. Increasing downtime can be a sign that your infrastructure and/or software is becoming fragile and needs attention. If this metric starts to trend up, it is important to do a root cause analysis to determine the best next steps.


Track results and feed them back into DevOps planning and execution activities. This is the key takeaway. Tracking the right metrics allows you to try different approaches and know whether they are effective. In turn, you know where and when to invest, and you create a feedback loop to iterate in the right direction. This allows you to build strategies that constantly drive metrics in the desired direction: increased quality, reduced delivery time, and increased productivity.

Which leads us to the ROI of continuous improvement:

True Agility: Increased ability to innovate, respond to market changes, stay ahead of the competition, stay nimble in software, and enable the business to deliver.

Retention and Productivity: Increased employee satisfaction and team performance. The best developers will not tolerate an environment that does not support this culture. High performers must be enabled with the right support and tooling to succeed. According to the Society for Human Resource Management, it costs an average of $4,129 to bring on a new hire, with an average recruitment time of 42 days, plus a hard cost to the company of 75% of the salary for that position – all of which could be avoided if employees feel supported and valued.

Performance: Increased system reliability at lower cost. Continuous DevOps improvement allows you to both improve reliability and optimize/maximize the usage of your infrastructure, which will manifest via increased ROI.

Training: Cross-functional collaboration, because building a successful DevOps culture will encourage increased collaboration across your organizational boundaries. Your teams will work to support each other and ensure overall success.

Mitigation: Reduced security risk. Continually improving your DevOps systems and approach allows you to stay nimble enough to respond to (and mitigate) zero-day security risks. Lower-performing organizations often get stuck on out-of-date tooling and systems, thus increasing their attack surface. And since a breach costs an average of $3.62 million, this ROI speaks for itself.

Quality: Reduced defect rates. Defects that make their way into production are more costly to fix than ones that are caught earlier on in the development cycle. By implementing the right tooling and knowing how often these defects end up impacting customers, you stand to drive down development costs and improve your end user’s experience.

Awareness: Insight into key business metrics. Knowledge is power. Instrumenting your infrastructure to capture the data your organization depends on allows you to make the strategic investments necessary to drive your business in the right direction.

Original Link

Exploring DevOps: Easing the Transition

These articles can help you start the transition to DevOps by helping you plan and teaching you essential DevOps skills like CI/CD, pipeline workflows, and Jenkins. Plus, check out our free DZone resources for a deeper look at DevOps concepts, and job opportunities for those in the DevOps and testing field.

5 Trending DevOps Articles on DZone

  1. Twelve-Factor Multi-Cloud DevOps Pipeline, by Abhay Diwan. Learn the twelve steps of a build in a CI/CD DevOps pipeline that employs multiple cloud environments, from source code to monitoring.

  2. End-to-End Tutorial for Continuous Integration and Delivery by Dockerizing a Jenkins Pipeline, by Hüseyin Akdoğan. Learn how to implement container technologies with your Jenkins CI/CD workflows to make them easier to manage in this tutorial.

  3. Functional Testing for Container-Based Applications, by Chris Riley. An application’s infrastructure changes the way it is tested. Learn about how containers can be used to benefit testing, especially functional testing.

  4. QA Automation Pipeline – Learn How to Build Your Own, by Yuri Bushnev. Continuous delivery is driving the shift towards automation in software delivery; see how to set up an automated pipeline for your QA processes.

  5. DevOps: The Next Evolution – GitOps, by Danielle Safar. Is it time for DevOps to evolve again? In this post, we take a look at a potential evolution: GitOps. Come find out what it’s all about.


DevOps Around the Web

  1. Chef Extends OpsWorks Capabilities in AWS, Helen Beal, December 6, 2017. See how Chef can help you manage your application lifecycle with continuous automation.

  2. GitLab Tells Us About Auto DevOps, Richard Harris, November 15, 2017. GitLab can help you improve your applications’ security with better use of automation in your workflow.

  3. DevOps Chat: Chef Habitat Project Continues to Mature, Alan Shimel, December 7, 2017. See what’s in the future of the ambitious Chef Habitat project for enabling a cohesive DevOps process.

Dive Deeper Into DevOps

  1. DZone’s Guide to Automated Testing: Improving Application Speed and Quality: a free ebook download.

  2. Getting Started With Kubernetes: DZone’s updated Refcard on the open-source orchestration system for managing containerized applications across multiple hosts.

Who’s Hiring?

Here you can find a few opportunities from our Jobs community. See if any match your skills and apply online today!

Senior Software Engineer
Location: Remote
Experience: Master’s degree or equivalent in Computer Science, IT, or a closely related field and 2 years of experience as a Software Engineer, Programmer Analyst, or in a related position.

Software Engineer – Test
Location: Hamburg, Germany
Experience: B.S. in Computer Science, related degree, or equivalent experience. 3+ years experience in coding, DevOps, systems engineering, or test automation.

Original Link

Our Journey to Git—One Year Later

In my last blog post, I outlined the obstacles we had to master when moving from Subversion to Git. Since then, more than a year has passed, and several of the assumptions we once made no longer hold. This post describes the changes we had to make in our development process and build & deployment infrastructure due to migrating to Git, and the improvements we gained.

Development Process

Feature Branches

In the beginning, we intended to continue development on Git as we did with Subversion: a master branch where all development activity takes place, and release branches that go through a stabilization phase. The release branches receive bugfixes only and are used for hotfixes after release.

However, in the last status meeting before the migration, we decided to make a rather radical change and introduce feature branches on top of the release-branch strategy. And we are quite strict about it: all bugfixes and features must be developed on separate branches and reviewed before merge. My colleague Martin has already outlined why we made this move and what benefits we gained.

Review Process

As my colleague Elmar described, we have a review process that is tied to tickets in our issue tracker and covers all files touched during the development of a ticket. To track which files had already been reviewed, we rated them with a home-grown IDE plugin that persisted the review state (dirty, ready for review, reviewed), together with a checksum of the file content at the time of rating, in the file header, so no files slipped through review.

With the advent of feature branches, we suddenly faced lots of merge conflicts because the review checksum was changed on multiple branches that were to be merged into master. Moreover, per-file review status tracking is somewhat superfluous with feature branches, as all changes to be reviewed are already encapsulated on a branch.

We soon decided to ditch the file-based status tracking altogether, along with our home-grown Eclipse plugins. Instead, we now use GitLab Merge Requests to track and review file changes for each branch (and thus each issue). Merging back to the master branch, and integrating master into a feature branch, is now seamless most of the time, with no unnecessary conflict interruptions. In addition, GitLab merge requests support our developers by showing the success of build and test execution and provide a one-click solution for merging the feature branch back to master.

Fixed Release Schedule

The benefit of feature branches and reviews with merge requests is a master branch that is always ready for production, as it contains only finished and reviewed features. We used this to change our release process as well. Previously, we negotiated the features that had to go into a specific version and implemented on trunk until all features were finished. After the hot implementation phase, we stabilized trunk for a few weeks.

Nowadays we have a fixed release cycle of six weeks. Features that are not implemented and reviewed after the six-week implementation period will simply be shifted to the next release. My colleague Martin already gave a good summary of the rationale behind this decision.

From an employee perspective, the biggest benefit is a fixed six-week schedule one can rely on. We aligned regular meetings and activities to this schedule and packed them onto one day: in one week we spend Wednesday in planning meetings, and in the following week we spend that day in maintenance groups, further improving our codebase. This leaves room for focused, interruption-free work on the other days.

Development Infrastructure

Besides social and process related aspects we also had to change a lot in our development infrastructure.

Build Server

At the time we migrated to Git, we were still using Jenkins 1. We set up builds for our master branch, for release branches, and one build for all feature branches (which are all prefixed with cr/). While we could tell which commit broke a test on master or a release branch, it was quite hard to tell on the feature branches, as builds were scheduled as commits were pushed. This left quite a burden on our developers: monitoring Jenkins before merging branches to master, watching out for the relevant build-failure mail, and ignoring the ones from other branches.

Soon we decided to annotate the build status from Jenkins to GitLab via an API call. So we had at least a binary indicator whether build and tests pass. Moreover, GitLab also neatly visualizes this information on merge requests.

Still, we had no good way to distinguish whether a build failed due to compile errors or test failures. With the advent of build pipelines, we wanted a much better overview. Hence, we set out to create a fresh instance of Jenkins 2 with all those fancy new features: separate builds for each branch, and separation between compile, unit test, user interface test, and packaging.

Splitting the monolithic build into separate steps was no big issue; configuring Jenkins, however, was! There was still a lack of documentation for the new Jenkinsfile specification: sending emails for build failures worked only with try ... catch ... blocks in the build script, annotating the build status to GitLab failed due to some classpath issue, and almost every feature required a new Jenkins plugin.

In the meantime, I had been playing with the integrated GitLab CI solution for our in-house time and project management system and was quite satisfied with its simplicity. I convinced my colleagues to pilot GitLab CI for Teamscale continuous integration alongside Jenkins. After spending half a day, we were already at the same level as with the Jenkins 2 setup, but with no mail-sending issues, and the build status is (of course) automatically displayed in GitLab merge requests.

Still, there was a longer migration phase to move everything off the Jenkins build infrastructure, as GitLab CI focuses on a simple approach. Nowadays, we even manage parametric builds and scheduled performance tests in GitLab. In fact, I migrated the last bits of infrastructure jobs (website build, Slack bots) away from Jenkins and pulled the plug in the week of writing this post.

Dockerized Infrastructure

We use GitLab CI fully dockerized, meaning each build runs in a separate Docker container. Setting this up merely required creating several Docker images containing the required build tools (Java compilation, C# compilation, user interface testing, …). From this, we gained much greater reproducibility of test results and can ensure concurrent builds do not interfere with each other.

On the other hand, we can use the Docker infrastructure to provide third-party systems for integration tests. We no longer need to host a test instance of, e.g., Redmine to test our connectors against, but can spin up a pre-configured instance in a container alongside the main build container. Again, the gain is separating things that may otherwise conflict with each other. It even allows modifying the test container for single branches.


With the advent of Git, we changed not only our version control system. It changed—and gave us the freedom to change—the way we develop and build software. These changes were needed to maintain stability and quality of our software with a growing team of contributors. Looking back I doubt that all technical and social changes would have been possible with our old infrastructure—at least not without heavy burdens.

Original Link

DevOps: The Next Evolution – GitOps

Earlier in September, Rob Stroud and Alexis Richardson came together to discuss implications of delivering velocity and quality while taking advantage of the new normal of continual deployment with cloud-native applications. (View the webinar on demand.)

The Need for Speed  

One thing is clear: speed is critical to organizations everywhere. Businesses are instructing their teams to drive velocity and quality in order to innovate and differentiate themselves in a competitive market. 

DevOps has reached escape velocity and is taking root in the way we deliver technology. But the reality is that we can’t continue to deliver in the same way we used to. Customer information is demanded in real time, and companies are expected to adapt products to consumer requirements at a moment’s notice.

Cloud and other technologies are allowing us to go faster than ever, but one problem at the moment is the disconnect that occurs when application development moves faster than a traditional I&O organization can consume. Even highly regulated industries such as banking, finance, and insurance are adapting their technology processes to support DevOps.

Silos of Automation Are Destroying Velocity

Today, 23% of enterprises are releasing monthly or faster (compared to every six months two years ago). Rob argues that’s not fast enough. We want daily, hourly, even minute-by-minute releases to achieve the ultimate goal of velocity and deployment at will.

However, there is a dangerous disconnect, with executives overestimating DevOps maturity and the maturity of automation across the lifecycle. Some companies want to get into an almost daily cadence, but existing processes, tools, and techniques (even though they attempt to go faster) don’t meet that requirement.

Essentially, we are facing three challenges:

1. Speed: We’d all like to go faster because we can be competitive and save money.

2. Automation phase shift: Moving from one release a month → one release a minute: How can this be done?

3. New application types: We have a reason to re-architect or try different things.

“Cloud Native” What?

When Netflix started transitioning into a purely web-delivered movie business, they adopted a set of practices for designing systems: web-scale, global, high availability, consumer-facing, and cloud native. Their goals boiled down to two things: improve availability and quickly change the product in line with consumer feedback.

But is Cloud Native the answer? 

What Is GitOps?

Weaveworks has been running cloud-native technologies for the past three years, and we hope our lessons learned can help speed up your deployments, improve the quality of your releases and build on your existing DevOps practices.

GitOps builds on DevOps, with Git as a single source of truth for the whole system. Over the past few years at Weaveworks, we learned that success came down to getting three things right:

1. Have a complete automated pipeline.

2. Operating a fast-paced business 24/7 requires monitoring and observability baked in from the beginning. Security is of critical importance.

3. Everything has to be version controlled and stored in a single source of truth from which you can recover.

Head over to watch the full webinar and learn how we implemented these best practices. 

Original Link

The Multiple Usages of git rebase –onto

I’m no Git expert, and I regularly learn things in Git that change my view of the tool. When I was shown git rebase -i, I stopped over-thinking about my commits. When I discovered git reflog, I became more confident in rebasing. But I think one of the most important commands I was taught is git rebase --onto.

IMHO, the documentation has room for improvement regarding what this option actually does. To use the image of a tree, it basically uproots part of the tree and replants it somewhere else.

Let’s have an example with the following tree:

o---o---o---o master
     \
      \---o---o branch1
               \
                \---o branch2

Suppose we want to transplant branch2 from branch1 to master:

o---o---o---o master
     \       \
      \       \---o' branch2
       \
        \---o---o branch1

This is a great use case! On branch branch2, the command would be git rebase --onto master branch1. That roughly translates to “move everything on branch2 since branch1 to the tip of master.” I try to remember the syntax by thinking of the first argument as the new base and the second as the old one.
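
A quick way to convince yourself is to replay the transplant in a throwaway repository; the following sketch assumes git is available, and all names in it are invented for the demo:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name Demo
MASTER=$(git symbolic-ref --short HEAD)   # works whether the default is master or main

echo 1 > m1.txt && git add . && git commit -qm "m1"
git checkout -q -b branch1
echo 2 > b1.txt && git add . && git commit -qm "b1-commit"
git checkout -q -b branch2
echo 3 > b2.txt && git add . && git commit -qm "b2-commit"

# master moves on while the branches live their own lives
git checkout -q "$MASTER"
echo 4 > m2.txt && git add . && git commit -qm "m2"

# transplant branch2 from branch1 onto the tip of master
git checkout -q branch2
git rebase --onto "$MASTER" branch1
git log --oneline
```

Afterward, branch2 consists of master’s commits plus the replayed b2-commit, and branch1’s commit is no longer part of branch2’s history.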

So, what use cases are there for moving parts of the tree?

Delete Commits

While my first reflex when I want to delete commits is git rebase -i, it’s not always the most convenient. It requires the following steps:

  1. Locating the first commit to be removed
  2. Actually running the git rebase -i command
  3. In the editor, deleting the line of every commit that needs to be removed
  4. Quitting the editor

If the commits to be removed are adjacent, it’s easier to use rebase --onto, because you only need the new and the old commit and can do the “deletion” in one line.

Here’s an example:

o---A---X---Y---Z master

To remove the last three commits X, Y, and Z, you just need:

git rebase --onto A Z
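
The same idea is easy to try in a throwaway repository. The sketch below (file names and commit messages are invented) builds the history o---A---X---Y---Z and then drops the adjacent commits X and Y, replanting Z onto A:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name Demo

# Build the history o---A---X---Y---Z; each commit adds its own file
for c in o A X Y Z; do
  echo "$c" > "$c.txt"
  git add "$c.txt"
  git commit -qm "$c"
done

# Drop the adjacent commits X and Y: replay everything after Y onto A
git rebase --onto HEAD~3 HEAD~1
git log --oneline
```

The remaining history is o, A, and a rewritten Z; the files added by X and Y are gone from the working tree.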

Long-Lived Remote Branches

While it’s generally a bad idea to have long-lived branches, it’s sometimes required.

Suppose you need to migrate part of your application to a new framework, library, whatever. With small applications, a small task force can be dedicated to that. The main development team goes into the weekend with instructions to commit everything before leaving on Friday. When they come back on Monday, everything has been migrated.

Sadly, life is not always so easy, and applications can be too big for such an ideal scenario. In that case, the task force works on a dedicated migration branch for more than a weekend, in parallel with the main team. But they need to keep up to date with the main branch while still keeping their own work.

Hence, every now and then, they rebase the migration branch onto the tip of master:

git rebase --onto master old-root-of-migration

This is different from merging because you keep the history linear.

Local Branches

Sometimes, I want to keep my changes locally, for a number of reasons. For example, I might tinker with additional (or harsher) rules for the code quality tool. In that case, I want to take time to evaluate whether this is relevant for the whole team or not.

As above, this is achieved by regularly rebasing my local tinker branch onto master.

As above, this keeps my history linear and lets me amend the tinkering commits if need be. Merging would prevent me from doing this.


git rebase --onto has several use cases. The most important ones relate to handling long-lived branches, whether local or remote.

As always, it’s just one more tool in your toolbelt, so that not everything looks like a nail.

As with every command that rewrites Git history, git rebase --onto should only be applied to local commits that haven’t been pushed yet. You’ve been warned!

Original Link

Exploring DevOps: Work Smarter, Code Better

This month’s dive into DevOps is all about improving your code from the source: you! Learn techniques to work more efficiently, making your final product better (and your life easier). Read about principles of programming, tricks in Git, why you might want to switch to Linux, and more. Plus, you’ll find free ebook downloads and job opportunities. Happy coding!

5 Trending DevOps Articles on DZone

  1. SOLID Principles by Examples: Single Responsibility, by Michele Ferracin. This post kicks off a series about understanding the SOLID principles through simple examples to write high-cohesion, low-coupling code.

  2. Simple CRUD With Git, by Unni Mana. Learn to use the basic CRUD commands in Git, like creating a repo and adding or deleting files, to make your life easier and improve productivity.

  3. 8 Reasons to Learn and Switch to Linux, by Grzegorz Ziemoński. The benefits of switching to Linux span across multiple areas, including becoming a better professional, saving money, and having more fun. What are you waiting for?

  4. Configuration as Code With Docker and Spring Boot, by Ram Gopinathan. Learn how this DevOps practice makes life easier for operations, as well as delivering increased velocity to the software lifecycle.

  5. What I’m Talking About When I Talk About TDD, by Uberto Barbini. See how Test-Driven Development makes designing software architecture easier by allowing you to test as you go along and fix mistakes as they arise.

DevOps Around the Web

  1. Keybase Git gets keys, basically: Secure chat app encrypts your repos, Thomas Claburn, October 5, 2017. This feature promises to ensure security, wherever you’re coding.

  2. Java SE 9 and Java EE 8 arrive, 364 days later than first planned, Simon Sharwood, September 22, 2017. Now that the long, long wait is over, let’s talk about the code. Check out a full list of the new features.

  3. Linux Foundation Aims to Advance Open-Source Software Development, Sean Michael Kerner, September 14, 2017. Check out this video to see why Linux and The Linux Foundation want to foster collaboration and open source development.

Dive Deeper Into DevOps

  1. DZone’s Guide to Automated Testing: Improving Application Speed and Quality: a free ebook download.

  2. Getting Started With Kubernetes: DZone’s updated Refcard on the open-source orchestration system for managing containerized applications across multiple hosts.

Who’s Hiring?

Here you can find a few opportunities from our Jobs community. See if any match your skills and apply online today!

DevOps Engineer
Location: Santa Clara, CA, United States
Experience: Strong system administration background for Linux based systems. Experience working with config and deploy management tools like Chef or Puppet. Comfortable in scripting languages like Python, Ruby, and Bash.

DevOps Engineer
Location: Raleigh, NC, United States
Experience: Experience with Azure or AWS cloud services. Advanced Linux and Windows administration skills. Knowledge of container technologies including Docker, optionally Swarm or Kubernetes.

Original Link

GitLab Releases 10.0

GitLab announced the release of GitLab 10.0, providing modern developers additional capabilities to fully embrace the benefits of DevOps, specifically monitoring, continuous integration (CI) and deployment (CD), and Kubernetes-based application development. Since the release of GitLab 9.0 in March 2017, GitLab has seen more than two million downloads, with close to 1,000 people having contributed to the research panel that fueled the new features now available in GitLab 10.0.

GitLab 10.0 delivers on input from enterprise customers, as well as joint development from the growing community of more than 1,800 global contributors. With a comprehensive slate of new feature options across both Enterprise and Community Editions of GitLab 10.0, users gain:

  • Faster time to value: GitLab reduces the amount of time developers must spend on tooling, freeing them to focus on software development. The addition of Auto DevOps brings best practices to your project by automatically configuring software development lifecycles by default, providing out-of-the-box build, test, code quality, review apps, deploy and monitoring.

  • Increased productivity: A refreshed user interface provides easier navigation, while features that reduce cycle time include enhanced subgroups, deploy boards, Prometheus monitoring, the ability to store files in an object repository, and augmented integration support for both Slack and JIRA.

  • Expanded Kubernetes capabilities: GitLab CI/CD makes deploying to Kubernetes a seamless process by offering a quick way to configure, deploy, and monitor applications inside Kubernetes, whether GitLab is installed inside or outside the cluster.

  • Greater collaboration with group-level issue boards: Enabling teams working across multiple projects, such as when adopting a microservices architecture, group-level issue boards allow you to manage issues across all projects in a single group, in one view. Lists, labels, and milestones are all managed at the group-level in a group-level board, allowing you to focus on the group abstraction at this level.

“Enterprises are faced with the need to build more software in-house than ever before and at speeds that enable enterprise processes to accelerate. Competitive software development environments and tools are becoming a key competitive differentiator,” said Holger Mueller VP & Principal Analyst, Constellation Research. “We are seeing an increase in the adoption of cloud-native capabilities utilizing Kubernetes in the market, requiring modern application development and the need for automated processes. It is good to see vendors providing that capability.”

More than 100,000 organizations have chosen GitLab for their software development lifecycle, making GitLab the preferred Git-based application with two-thirds of the self-hosted market share. Enterprises including Ticketmaster, ING, NASDAQ, Alibaba, Sony, VMware and Intel, faced with an increasing need for modern, collaborative solutions, have adopted GitLab to maintain pace with the demands of today’s work environment.

“We’ve partnered with GitLab as part of a joint effort to simplify and fortify the developer experience,” said William Freiberg, COO at Mesosphere. “Mesosphere DC/OS lets IT operations teams give their developers one-click deployment and elastic scaling of containers and data services on any infrastructure, and GitLab 10 adds unique capabilities to extend our platform. Our customers are excited to use GitLab’s Auto DevOps to quickly adopt industry best practices in an easy and optimized way, while our joint support for Kubernetes provides a best-of-breed approach to Containers-as-a-Service.”

“We are thrilled to deliver a secure, feature-rich version 10.0, in addition to partnering alongside organizations helping to further GitLab’s commitment to making DevOps best practices easily accessible,” said Sid Sijbrandij, CEO and co-founder at GitLab. “Running Auto DevOps with Kubernetes and using GitLab’s CI/CD capabilities provides the simplest, most efficient way to automate a secure and flexible continuous delivery pipeline.”

As part of its mission to be the development tool for Kubernetes and cloud-native software, GitLab has also joined forces with the Cloud Native Computing Foundation (CNCF). Recognized by CNCF earlier this year as one of the 30 highest-velocity open source projects and a top-ten open source project, GitLab 10.0 furthers the commitment to help enterprises realize the value of cloud-native application development.

With the most robust and only integrated modern software development life cycle (SDLC) product on the market, GitLab helps organizations embrace the power of the cloud by supporting the latest in software development needs.

Original Link

Simple CRUD With Git

In this article, I will show you how to perform basic operations with Git that will improve the productivity of a developer. Git is an essential tool to learn and master, as many projects depend on it. This article will show some of the basic operations that come handy in day to day life while dealing with Git.

Create a Repository

The first thing we need to do is create a remote repository on a hosting service such as GitHub. For that, you need to log in; after a successful login, create a repository called “test” using the “create a repository” button. This is the repository we are going to work with.

Add New Files

By default, the repository is empty except for a README.md file.

To add a new file, first you need to clone this “test” repository to your local disk. To do that, run the following command, using the clone URL shown on the repository page:

git clone <repository-url>

This will create a folder called “test” on your local disk. If the repository already contains any files, all of them will be downloaded into the “test” folder.

Now add a new file called “test.txt” with some contents in it under the “test” folder. Then run the following command:

git add .

The above command stages the new files so they can be committed to your repository.

The next command commits this new file:

git commit -m "this is my first file"

The -m switch supplies the message that is recorded with the commit.

Now we need to send this change to the remote repository. For that, run the following command.

git push origin master

This command will prompt you to authenticate once again and then send your changes to the remote repository.

Let us take a look at the above command in detail:

  • push – Sends your commits to the remote repository.

  • origin – The default name Git gives to the remote repository you cloned from.

  • master – The default branch of the repository. This is where your changes will be pushed unless a different branch name is specified.
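Putting the steps above together: since a remote is needed, the sketch below uses a local bare repository as a stand-in for the GitHub “test” repository, so no authentication prompt appears (all names are illustrative):

```shell
#!/bin/sh
set -e
git init -q --bare remote-test.git        # stand-in for the GitHub repository
git clone -q remote-test.git test && cd test
git symbolic-ref HEAD refs/heads/master   # ensure the local branch is master
git config user.email demo@example.com
git config user.name demo

echo "some contents" > test.txt           # the new file from the article
git add .
git commit -q -m "this is my first file"
git push -q origin master
```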

Update Files

Now let us modify the “test.txt” file we created in the previous step. We need to perform the following operations one by one:

git add .
git commit -m "I have modified my file"
git push origin master

Once you execute the above commands one after another, your changes will be reflected in the remote repository.

Delete Files

Now we want to delete either a single file or multiple files from the remote repository. In order to do this, issue the following commands:

git rm <name of your file>
git commit -m "the file has been deleted"
git push origin master

These commands delete the file from your Git repository as well as from your local file system.

To delete multiple files, you need to list the files to be deleted. For example, to delete two files, file1.txt and file2.txt, issue the following command:

git rm file1.txt file2.txt
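The delete flow can be sketched end to end the same way, again with a local bare repository standing in for the remote (all names are illustrative):

```shell
#!/bin/sh
set -e
git init -q --bare remote-crud.git        # stand-in for the remote repository
git clone -q remote-crud.git crud && cd crud
git symbolic-ref HEAD refs/heads/master   # ensure the local branch is master
git config user.email demo@example.com
git config user.name demo

# Seed the repository with two files so there is something to delete.
echo one > file1.txt
echo two > file2.txt
git add . && git commit -q -m "add files" && git push -q origin master

# Remove both files, locally and from the remote repository.
git rm -q file1.txt file2.txt
git commit -q -m "the files have been deleted"
git push -q origin master
```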


We have seen the minimal commands necessary for working with a Git repository. In all the CRUD operations we have covered, the following commands are mandatory:

git commit -m "your commit message"
git push origin master

You can do much more with Git, but this beginner-level article should be enough for anyone to understand the basic concepts of Git.

Original Link