Doing Cloud Right: Takeaways from Our Recent Jez Humble Webinar

Last month, we hosted our "Doing Cloud Right Webinar" with Jez Humble (DORA CTO, author) and Anders Wallgren (Electric Cloud CTO). In the webinar, Jez and Anders discussed some of the most striking findings of the recent 2018 Accelerate State of DevOps Report (ASODR), including the fact that organizations that "do cloud right" are 23 times more likely to be elite DevOps performers!

Continue reading for some top takeaways from this insightful webinar.

Using Jenkins-X UpdateBot

Jenkins-X UpdateBot is a tool for automating the update of dependency versions within project source code. Say you’re building two projects, A and B, such that B uses A as a dependency. The release process for A could use UpdateBot to update the source for project B to use a new version of A. With UpdateBot this would result in a pull request so that the change could be tested and reviewed or automatically merged.

Within pipelines on the Jenkins-X platform, UpdateBot is automatically present and is invoked by updatebot commands in Jenkinsfiles. But UpdateBot can also be used outside of Jenkins-X, and running it standalone can help you understand what it can do and test out version replacements. So let’s try it out with a simple tester project.
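UpdateBot itself handles many manifest formats; as a rough illustration of the core idea (rewriting a pinned dependency version in a downstream project's source, which then becomes a pull request), here is a minimal Python sketch with made-up project names:

```python
import json

def bump_dependency(manifest_text, dep_name, new_version):
    """Rewrite the pinned version of one dependency in a
    package.json-style manifest and return the updated text."""
    data = json.loads(manifest_text)
    deps = data.get("dependencies", {})
    if dep_name in deps:
        deps[dep_name] = new_version
    return json.dumps(data, indent=2)

# Project B depends on project A; a release of A bumps B's manifest,
# and the resulting diff is what would back the pull request.
manifest = '{"name": "project-b", "dependencies": {"project-a": "1.0.0"}}'
updated = bump_dependency(manifest, "project-a", "1.1.0")
```

In the real tool, the edit is committed to a branch and proposed as a pull request so the change can be tested and reviewed, or automatically merged.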

How to Build Hybrid Cloud Confidence

Software complexity has grown dramatically over the past decade, and enterprises are looking to hybrid cloud technologies to help power their applications and critical DevOps pipelines. But with so many moving pieces, how can you gain confidence in your hybrid cloud investment?

The hybrid cloud is not a new concept. Way back in 2010, AppDynamics founder Jyoti Bansal had an interesting take on hybrid cloud. The issues Jyoti discussed more than eight years ago are just as challenging today, particularly with architectures becoming more distributed and complex. Today’s enterprises must run myriad open source and commercial products. And new projects — some game-changers — keep sprouting up for companies to adopt. Vertical technologies like container orchestrators are going through rapid evolution as well. As they garner momentum, new software platforms are emerging to take advantage of these capabilities, requiring enterprises to double down on container management strategies.

Keep Your Automated Testing Simple and Avoid Anti-Patterns

Automated Tests That Need to Be Tested?

We are so fortunate today to have so many automated testing libraries, frameworks, and tools available that make creating automated tests quite easy. Some even allow people who do not have any coding experience to create automated tests. If you’re new to automated testing, should you just dive in with one of those tools and crank out some tests?

Or perhaps you’ve got programming experience and you’ve automated some regression tests, but they keep failing sporadically in your continuous integration. You and your team are spending way too much time diagnosing failures to see whether changes to the production code caused genuine regressions or whether something is simply wrong with the automated test script. If you haven’t had the time to learn good automated testing practices and principles, troubleshooting and maintaining automated scripts can slow your team’s ability to deliver new features.

Top DevOps Experts You Should Know

To me, continuous integration/continuous delivery (CI/CD) also includes continuous learning and experimentation…I’m always interested in finding out about new books, podcasts, or new teachers. This isn’t just my work philosophy; it also carries over to my personal growth projects. It’s led to fun reads from sources like Tim Ferriss’ blog, Farnam Street, Freakonomics, and more. Similarly, I’ve been collating a list of DevOps experts I turn to to increase my knowledge in this space.

I was turned on to DevOps a few years ago, when I watched several hackers at DEF CON talk about how they had hacked into a Tesla. They were amazed at how the systems were architected; the car was basically a server on wheels. Yet they had found an obscure way to access the controls. Then, unlike in most vendor/hacker horror stories, Tesla actually patched the security hole within weeks and went on to hire those hackers to further secure its systems.

DevSecOps Best Practices – Building an E-Commerce Application on Alibaba Cloud

In this article, we will explore the concept of DevSecOps and discuss how we can apply its principles by building an e-commerce application on Alibaba Cloud.

Gartner predicts that,

The Latest DevOps Webinars from DZone

CI/CD for Cloud Native Applications on Kubernetes

Teams often spend days manually setting up Jenkins pipelines and implementing continuous integration/continuous delivery (CI/CD) effectively. In this webinar, you will learn an automated approach to develop and deliver cloud-native apps on Kubernetes, including:

  • The pillars of continuous everything (integration | testing | delivery | deployment)

What CI/CD Tool Should I Use?

In our ongoing Kubernetes FAQ series, where we’ve been answering some commonly asked questions from the community, this week we discuss what you need to consider when choosing a CI/CD tool.

There are a ton of CI/CD tools out there to choose from – both open source solutions and commercial ones. Here, we highlight some of the most important considerations when setting up a continuous delivery pipeline.

What We Learned About CI/CD Analyzing 75,000 Builds

One can read a lot about why continuous integration is a powerful technique and how it can speed up your mobile app development process. However, we decided to dig deeper and analyze who the users of a continuous integration tool tailored for mobile are (confused about why CI for mobile is different from standard CI?), what their use cases and needs are and, most importantly, the benefits…in real numbers! We gathered data from Nevercode, a CI/CD tool for Android, iOS, React Native, Ionic and Cordova, and analyzed more than 75K builds from the first quarter of 2018. Here are the results.

What Is Continuous Integration (CI)

In short, continuous integration (CI) is a software development practice of merging developers’ working copies into a shared code repository daily, if not multiple times a day. Each integration is verified by an automated build to detect errors and get to the root of the problem as soon as possible without losing track of the development process. Check out the top CI tools for mobile projects.

How to Improve Software Delivery Performance

Software value stream mapping and the ability to visualize how work flows through an ever-changing and complex software delivery system enable teams to measure key metrics and quickly analyze whether the system is performing well.

But where do you start with all of that? Where do you start improving, where do you spend your time? What do the elite performers have in common? The Accelerate: 2018 State of DevOps report by DevOps Research and Assessment surfaces capabilities that are statistically shown to improve software delivery performance and are shared across high performing teams and organizations.

DevOps Tool Tyranny

In the software development world, we hear the adage "use the right tool for the job" all the time. Its use goes back decades, and we’ve all been told, "you don’t hammer a nail with…" For me, deciding on the tool is often the most important step in the process (as significant as how you use it) because the implications are long-term and can be expensive to undo if you make the wrong choice.

When it comes to programming languages, different languages are better suited to specific use cases than others. In other instances, the decision is less clear-cut. For example, today, if I were to develop a multi-threaded application I would select Go or perhaps even Node.js in a Kubernetes cluster. I would not choose Java for such a project. No doubt some reading this may disagree with my example, and that illustrates my point. It can be difficult to determine which language is the best for a particular project; there are lots of factors that must be weighed and considered.

Continuous, Incremental, Progressive Delivery: Pick Three

Software developers have spent the last decade talking about Continuous Delivery and the benefits of delivering working code as often as possible. But it turns out that’s only one part of the whole picture of software delivery. Modern teams actually have three distinct outcomes they are trying to achieve — a holy trinity of continuous, incremental, and progressive delivery. Each of these delivery practices can help your team move faster with less risk.

Continuous Delivery

Continuous Delivery is a set of practices that ensure your code is always in a deployable state. You accomplish this by increasing the frequency at which code is committed, built, tested, and deployed: steps that in the past only occurred at the end of a project, when it was ‘code complete’.

5 Quick Wins for Securing Continuous Delivery

“DevOps is Agile on steroids — because Agile isn’t Agile enough.”

So says Jim Bird, the CTO for BiDS Trading, a trading platform for institutional investors. Jim continued, "DevOps teams can move really fast…maybe too fast? This is a significant challenge for operations and security. How do you identify and contain risks when decisions are being made quickly and often by self-managing delivery teams? CABs, annual pen tests, and periodic vulnerability assessment are quickly made irrelevant. How can you prove compliance when developers are pushing their own changes to production?"

Jim was presenting at the 2018 Nexus User Conference on Continuous Delivery. Drawing on his 20+ years of experience in development, operations, and security in highly regulated environments, Jim laid out how and why Continuous Delivery reduces risk and how you can get some easy wins toward making it more secure.

What Continuous Delivery Means for Testers, QA, and Software Quality

If you’ve been a software tester for any length of time, you’ve likely noticed the shift toward continuous delivery, whereby businesses and project and operations teams aim to safely and quickly release new builds to production, ostensibly at the push of a button. The realization of continuous delivery means faster feedback, improved time to market, increased quality, and a better customer experience, though not necessarily in that order.

What’s All the Fuss About Continuous Delivery?

Continuous Delivery gives developers rapid feedback on their code, which leads to improved productivity. In theory, code can be written, tested, reviewed, merged, and integration and acceptance tested before it even gets into a tester’s hands. Rapid, reliable and high-quality releases mean happier customers, which often translates into increased business revenue.

Architecture for Continuous Delivery

This is part 3 in a series on continuous delivery. In parts one and two, we introduced you to the concept of continuous delivery and how you can prepare your organization before adopting CD practices.

In this article, we’re going to discuss architecture for continuous delivery. How do we architect our systems in a way that enables us to continuously deliver value to our customers?

Conferences and Tutorials — Fall 2018 DevOps Season

Hey, DevOps fans! This month, I want to get you ready for fall. That means learning about the best upcoming conferences and hot topics that will help you in your career. Let’s start with the latest hit articles in the DZone DevOps Zone.


Hot DevOps Articles on DZone

  1. Ansible: An Effective IT Automation Tool, Anmol Nagpal. Learn about Ansible, a tool for automating application deployments, configuration management, and more in a DevOps environment.

4 Compelling Benefits of CI/CD Businesses Cannot Afford to Ignore

Waterfall, Agile, and now DevOps: each in turn has been the most sought-after approach to building great mobile apps. The old methods of software development and delivery have now become passé.

Gone are the days when companies deployed software in annual, quarterly, or even monthly releases. With DevOps practices, software is deployed weekly, daily, or even multiple times a day without disturbing the user journey.

Are You Prepared for Continuous Delivery?

This is part 2 in the series on Continuous Delivery. In part 1, we introduced you to the concept of continuous delivery, how it relates to continuous integration, and the advantages it brings to the table.

Continuous Delivery requires that every commit into the code base is built immediately and that any build can be deployed into the production environment.

Dealing With Unplanned but Urgent Work Through DevOps

3) Maintenance and Evolution:
To keep a product alive, we choose backlog stories that will bring value, and do them one after the other.
But support of the application can take up a huge part of the work. And when a problem is critical, there is nothing you can do but stop what you’re doing and fix it. This can blow any estimation.
How do you deal with firefighting in a #NoProjects world? What techniques help avoid it? And how do #NoProjects and DevOps work together?

Let me take the last part of this question first. Operations have never been plagued by the project model the way development has. When does a SysAdmin ever say "The project is finished so I’m not going to restart the server?"

Installing CI/CD Tools With Ansible: Everything You Need to Know

When setting up a CI/CD pipeline, you need to choose the right tools. Things you should take into consideration include your specific needs, abilities and ease of use. In my last post, I recommended different tools for building a CI/CD pipeline. Today, I will explain how to install and configure them with Ansible. In the end, I will provide resources that explain how to configure them to work together.

The tools we will show are

Why Is DevOps Becoming Mainstream in Software-Powered Organizations?

Early DevOps practitioners have shown DevOps to be more than just a cultural shift or a set of tools – they have confirmed it to be a crucial success factor and a competency well worth developing in today’s environment of rapid evolution, technological advancement, and high customer and employee expectations. The demand for DevOps in organizations is high and urgent, but it is not something that can simply be grafted onto the average team; when that happens, existing organizational undercurrents weaken the effectiveness of the program. Rather, the development, operations, and overarching management processes must be redesigned from scratch. DevOps can be profoundly disruptive to a business, and it has a strong, enduring impact on organizational success. After all, IT is at the core of almost any business, and the effectiveness and agility gained there have a notable impact on the readiness and coordination of the organization as a whole.

The term DevOps has entered into our general language and has gathered much attention and focus these days.

DevOps on AWS Radio: CI, CD, and DevOps

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps on AWS news along with a discussion with one of the founding fathers of Continuous Integration, Paul Julius.

Here are the show notes:

Blue-Green Deployment For Cloud Native Applications

What Is Blue-Green Deployment?

Blue-green deployment is a technique that enables continuous delivery to production with reduced downtime and risk. It achieves this by running two identical production environments, called Blue and Green. Let’s assume Green is the existing live environment and Blue is the new version of the application. At any time, only one of the environments is live, with the live environment serving all production traffic.
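The mechanics can be sketched in a few lines of Python. This toy router (the names, versions, and health check are illustrative, not any real load balancer) shows why the traffic flip is low-risk:

```python
class BlueGreenRouter:
    """Toy model of blue-green deployment: two identical environments,
    only one live at a time; traffic flips after the idle one passes
    a health check."""

    def __init__(self):
        self.environments = {"green": "v1.0", "blue": None}
        self.live = "green"            # Green serves all production traffic

    def idle(self):
        return "blue" if self.live == "green" else "green"

    def deploy(self, version):
        self.environments[self.idle()] = version   # install on the idle env

    def switch(self, healthy):
        if healthy:                    # flip traffic only if checks pass
            self.live = self.idle()
        return self.live

router = BlueGreenRouter()
router.deploy("v2.0")                  # Blue now runs v2.0; Green still live
live = router.switch(healthy=True)     # traffic flips to Blue
```

If the new version misbehaves, switching back to the previous environment is the same one-line flip, which is where the reduced downtime and risk come from.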

What Are The Benefits?

  • It helps to reduce downtime, even to zero, depending on the application design and deployment approach.

Ensure Customer Satisfaction With End-to-End Value Delivery

Today, consumers are more ready than ever to switch allegiances if they feel their expectations are not being met, so you need to ensure that they receive end-to-end value reliably and predictably. In the face of increasing complexity and market pressure, continuous delivery has provided the answers, facilitating speed, quality, and visibility within the software delivery lifecycle.

While continuous delivery was implemented from the get-go by start-ups and smaller software houses, large-scale enterprises have been slower to the punch. For them, the wake-up call only arrived after seeing unicorns beat them in the marketplace, push competitors into oblivion and cause widespread disruption. They were forced to sit up and take note.

Tips on Jenkins — a Decade Later — for Continuous Delivery

My first encounter with Jenkins (actually, Hudson — Jenkins was forked from Hudson many years ago) was almost a decade ago when I had to use it for Continuous Integration (CI). A few days ago, I had to work with it again. This time, it was for the task of Continuous Delivery (CD). Jenkins has undergone many changes over the years and seems to have evolved into quite a powerful tool with a lot of plugins.

My task was quite simple. I had to download a package in ZIP format from a repository and install it on a specific node. For this, I did the following:
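Outside Jenkins, that download-and-unpack task looks roughly like the following Python sketch; the repository URL and destination directory are placeholders, not the actual environment from the article:

```python
import io
import zipfile
import urllib.request
from pathlib import Path

def fetch_and_install(url, dest_dir):
    """Download a ZIP package from a repository and unpack it into
    dest_dir, returning the list of files it contained."""
    with urllib.request.urlopen(url) as resp:
        archive = zipfile.ZipFile(io.BytesIO(resp.read()))
    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    archive.extractall(dest_dir)       # "install" = unpack on the node
    return archive.namelist()

# e.g. fetch_and_install("https://repo.example.com/pkg.zip", "/opt/pkg")
```

In the Jenkins pipeline, the same two steps (fetch from the repository, unpack on the target node) are expressed as pipeline stages rather than a script.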

5 Steps to a Clear S/4HANA Migration Strategy

If you’re not thinking about S/4HANA already, you definitely should be. It’s fair to say that SAP’s latest platform has taken some time to gain traction amongst the global SAP user base, but the wheels are slowly starting to turn, with adoption rates increasing sharply over the past nine to twelve months.

Uncertainty about the journey to SAP S/4HANA leads many customers to fear a lengthy, expensive project that will disrupt operations, a fear that is clearly evident amongst many SAP users I speak to. It’s true that for any organization with a relatively complex SAP landscape today (i.e. most large organizations that run SAP), the ECC to S/4HANA migration is going to take time and will likely not come cheap. However, with sufficient preparation, appropriate change management, and the right tools and resources, real benefits can be unlocked through a smoother, lower-risk transition.

Continuous Discussions (#c9d9) Podcast, Episode 91: Gene Kim and DOES’18 Speakers #2 [Video]

This morning on our Continuous Discussions (#c9d9) podcast, we had a great discussion with a panel of DevOps Enterprise Summit Las Vegas 2018 (DOES18) speakers and DevOps thought leaders to discuss conference programming and challenges for operations teams.

Today’s episode dove into the DOES18 program focus on next-gen operations. For the past several years, operations teams have often felt like DevOps was really NoOps – with continuous delivery and modern architectures mainly benefiting developers. NoOps no more at DOES18! Today we learned that the programming committee has dedicated 25% of the program to next-gen operations.

When to Rely on DevOps-as-a-Service

It is a popular misconception that DevOps workflows are centered on automating daily operations with the cloud infrastructure. Quite the contrary: DevOps services can do so much more…

The main reason for ordering DevOps-as-a-Service from the very beginning is obvious — time and cost savings on cloud computing resources involved in software development. This gives DevOps teams more time to design and implement the required infrastructure and use it in all the stages of software delivery. More importantly, it allows experienced DevOps engineers to predict future system bottlenecks and design the cloud infrastructure to avoid them.

Xcode Server With Xcode 10

Xcode Server is a continuous integration service launched by Apple a few years back. In the previous post about Xcode Server and Xcode 9, we covered most of the major enhancements announced at WWDC 2017 in the session "What’s New in Signing for Xcode and Xcode Server." Since then, Xcode Server has become easy and painless to use. The major enhancements announced in the Xcode 9 release were:

  • Inbuilt Xcode Server
  • Code signing and device provisioning within the server; automated code signing
  • Headless test running on Xcode Server

All these features make Xcode Server a painless choice for continuous integration for iOS apps. However, something game-changing happened at the beginning of 2018: Apple acquired BuddyBuild, another continuous integration service. This is a major factor for Xcode Server and its ongoing development, and it might be the reason WWDC 2018 was so exciting in terms of continuous integration.

Troubleshooting AWS CodePipeline Artifacts

AWS CodePipeline is a managed service that orchestrates workflow for continuous integration, continuous delivery, and continuous deployment. With CodePipeline, you define a series of stages composed of actions that perform tasks in a release process from a code commit all the way to production. It helps teams deliver changes to users whenever there’s a business need to do so.
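That stage/action structure is easy to picture as plain data. The sketch below is not the AWS API — just a toy model (with invented stage and action names) of how a release walks the stages in order and halts at the first failing action, so a bad change never reaches production:

```python
def run_pipeline(stages, actions_impl):
    """Walk stages in order, running each stage's actions; stop at
    the first failure, mirroring how a staged release halts."""
    for stage in stages:
        for action in stage["actions"]:
            if not actions_impl[action]():   # each action reports success
                return f"failed at {stage['name']}:{action}"
    return "released"

stages = [
    {"name": "Source", "actions": ["checkout"]},
    {"name": "Build",  "actions": ["compile", "unit-test"]},
    {"name": "Deploy", "actions": ["deploy-prod"]},
]
impl = {"checkout": lambda: True, "compile": lambda: True,
        "unit-test": lambda: False, "deploy-prod": lambda: True}
result = run_pipeline(stages, impl)   # halts at the failing unit tests
```

In CodePipeline itself, the stages and actions are declared in the pipeline definition and the service handles the orchestration, retries, and artifact hand-off between stages.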

One of the key benefits of CodePipeline is that you don’t need to install, configure, or manage compute instances for your release workflow. It also integrates with other AWS and non-AWS services and tools such as version control, build, test, and deployment.

What to Expect at DevOps World | Jenkins World 2018

DevOps World | Jenkins World 2018 is around the corner on September 16-19 in San Francisco. In addition to being the largest gathering of Jenkins users and DevOps practitioners, there are a lot of reasons that make DevOps World | Jenkins World the essential DevOps event to attend each year: keynotes and speaker sessions from industry-leading DevOps experts, an expo where you can check out the latest and greatest from our friends in the software industry, and a themed after-party. It's no wonder people travel from all over the country to attend.

Whether you are already preparing for the show or still considering attending and may need some extra convincing, here’s what you can expect this year:

Software Testing Takeaways From the 2018 Accelerate State of DevOps Report

The “2018 Accelerate State of DevOps” report is the brainchild of Dr. Nicole Forsgren, Gene Kim, and Jez Humble at DORA (DevOps Research and Assessment). Based on five years of research, with over 30,000 data points from thousands of companies, the project aims to understand precisely what practices enable teams to deliver better software faster.

Though it’s still hot off the press, the new report is already the talk of the DevOps community. We think it should become the talk of the software testing world as well. Testing and quality are discussed throughout the 78-page report—which now includes a section dedicated to Continuous Testing.

Continuous Delivery: A Step Up From Continuous Integration

In this series of posts, we will take a look at how to extend and build on your existing Continuous Integration (CI) infrastructure and processes toward a Continuous Delivery (CD) model. In this article, we will go through the basics of CD, its relation to CI, and its importance in the software delivery model, pointing out the key elements and differences between continuous integration and continuous delivery. For those of you unfamiliar with Continuous Integration, I recommend reading this series of posts:

Part 1 – The basic concepts of CI and its relevance in an Agile and DevOps team culture.
Part 2 – What a CI server is and how it can seamlessly bring together various industry-standard practices of implementing a CI process.
Part 3 – The good and bad practices that make up a CI process and workflow.

5 Tools to Speed Up Your App Development

Building an app is a costly and intensive process, both in time and financial resources. Sometimes you just don’t have the budget to build an expensive app, or you need to get to market quickly to seize an opportunity. Should you slash app features, or look elsewhere to speed up the app development process?

In this article, we’ll take a look at five different tools you can use to speed up your app development process. And that cuts both ways: you can reduce the cost of building an app, and at the same time release the app quicker.

Is DevOps Really for All Types of Organizations?

DevOps has become a very popular practice in the software industry, and the reason is that people are delivering better software more quickly. It’s standard nowadays to hear that some companies are doing hundreds — or even more — deployments per day.

Unfortunately, some organizations have tried to implement DevOps but are still struggling after developers are ready to ship code changes. Those organizations sometimes ask if DevOps is really for them, or they wonder if there’s a magic wand out there that can make DevOps work for them. The truth is that DevOps really is for everyone, but it demands a lot of discipline and it takes time to see the benefits.

Continuous Deployment Through Jenkins

Releasing software isn’t an art, but it is an engineering discipline. Continuous deployment can be thought of as an extension to continuous integration, which lets us catch defects earlier.

In this article on continuous deployment, we will go through the following topics:

What Is GitOps, Really?

A year ago, we published an introduction to GitOps – Operations by Pull Request. This post described how Weaveworks ran a complete Kubernetes-based SaaS and developed a set of prescriptive best practices for cloud-native deployment, management, and monitoring.

The post was popular. Other people talked about GitOps and published new tools for git push, development, secrets, functions, continuous integration, and more. Our website grew, with many more posts and GitOps use cases. But people still had questions. How is this different from traditional infrastructure as code and continuous delivery? Do I have to use Kubernetes? etc.

Managing Helm Releases the GitOps Way

What is GitOps?

GitOps is a way to do Continuous Delivery. It works by using Git as the source of truth for declarative infrastructure and workloads. For Kubernetes, this means using git push instead of kubectl create/apply or helm install/upgrade.

In a traditional CI/CD pipeline, CD is an implementation extension powered by the continuous integration tooling to promote build artifacts to production. In the GitOps pipeline model, any change to production must be committed to source control (preferably via a pull request) before being applied to the cluster. This way, rollbacks and audit logs are provided by Git. If the entire production state is under version control and described in a single Git repository, then when disaster strikes, the whole infrastructure can be quickly restored from that repository.
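The heart of that model is a reconcile loop: diff the state committed to Git against what is actually running, and apply the difference. A toy version in Python (workload names and versions are invented; this is not any particular operator's implementation):

```python
def reconcile(desired, actual):
    """One step of a GitOps reconcile loop: 'desired' is the state
    committed to Git, 'actual' is what's running; return the
    operations needed to converge, sorted for determinism."""
    ops = []
    for name, spec in desired.items():
        if name not in actual:
            ops.append(("create", name))       # in Git, not running yet
        elif actual[name] != spec:
            ops.append(("update", name))       # running, but drifted
    for name in actual:
        if name not in desired:
            ops.append(("delete", name))       # running, but removed from Git
    return sorted(ops)

# Rolling back is just committing the old state: Git is the source of truth.
desired = {"web": "v2", "worker": "v1"}
actual  = {"web": "v1", "cron": "v1"}
ops = reconcile(desired, actual)
```

Because the loop always converges toward what Git says, a rollback, an audit, or a full disaster recovery are all the same operation: point the loop at the right commit.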

Continuous Integration and Continuous Delivery for Database Changes

Introduction

Over the last two decades, many application development teams have adopted Agile development practices and have immensely benefited from it. Delivering working software frequently, receiving early feedback from customers and having self-organized cross-functional teams led to faster delivery to market and satisfied customers. Another software engineering culture, called DevOps, started evolving towards the middle of this decade and is now popular among many organizations. DevOps aims at unifying software development (Dev) and IT operations (Ops). Both these software engineering practices advocate automation, and two main concepts coming out of them are Continuous Integration (CI) and Continuous Delivery (CD). The purpose of this article is to highlight how Database Change Management, which is an important aspect of the software delivery process, is often the bottleneck in implementing a Continuous Delivery process. The article also recommends some processes that help in overcoming this bottleneck and allow streamlining application code delivery and database changes into a single delivery pipeline.
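To make the single-pipeline idea concrete, here is a minimal sketch of the bookkeeping a database migration runner does so that schema changes can ride the same delivery pipeline as application code. Real tools such as Flyway or Liquibase add checksums, locking, and rollback scripts; the table names here are invented:

```python
def apply_migrations(migrations, applied, execute):
    """Apply pending migrations in version order, recording each one
    so that re-running the pipeline is a no-op for changes already
    applied to the database."""
    for version, sql in sorted(migrations.items()):
        if version not in applied:
            execute(sql)             # run the DDL against the database
            applied.add(version)     # remember it for future runs
    return applied

ran = []
migrations = {
    2: "ALTER TABLE users ADD email TEXT",
    1: "CREATE TABLE users (id INT)",
}
applied = apply_migrations(migrations, set(), ran.append)
```

The "applied" set is what makes database changes repeatable and therefore pipeline-friendly: every environment converges to the same schema no matter how many times the delivery runs.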

Continuous Integration

One of the core principles of an Agile development process is Continuous Integration. Continuous Integration emphasizes making sure that code developed by multiple members of the team is always integrated. It avoids the “integration hell” that was so common in the days when developers worked in silos and waited until everyone was done with their pieces of work before attempting to integrate them. Continuous Integration involves an independent build machine, automated builds, and automated tests. It promotes test-driven development and the practice of making frequent, atomic commits to the baseline, master branch, or trunk of the version control system.
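The CI contract described above can be modeled in a few lines. This sketch (with invented commit labels) shows why trunk stays releasable: a change only lands if the automated build and tests pass on the merged result.

```python
def integrate(trunk, change, build_and_test):
    """Merge a small change into the shared trunk only if the
    automated build and tests pass on the merged result; otherwise
    reject it so trunk stays green."""
    candidate = trunk + [change]
    if build_and_test(candidate):
        return candidate, "merged"
    return trunk, "rejected"        # trunk is left untouched

# A stand-in "build" that fails whenever a change is marked broken.
ok = lambda commits: all(c != "broken-change" for c in commits)
trunk, status1 = integrate(["init"], "feature-a", ok)
trunk, status2 = integrate(trunk, "broken-change", ok)  # rejected
```

Frequent, atomic commits keep each `candidate` small, which is what makes a failed integration cheap to diagnose and back out.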

DevOps at Nike: There Is No Finish Line

The following is an excerpt from a presentation by Ron Forrester and Scott Boecker of Nike, titled “DevOps at Nike: There Is No Finish Line.”

You can watch the video of the presentation, which was originally delivered at the 2017 DevOps Enterprise Summit in San Francisco.

Building a Continuous Delivery Pipeline Using Jenkins

Continuous Delivery is a process where code changes are automatically built, tested, and prepared for release to production. I hope you have enjoyed my previous blogs on Jenkins. Here, I will talk about the following topics:

  • What is Continuous Delivery?
  • Types of Software Testing
  • Difference Between Continuous Integration, Delivery, and Deployment
  • What is the need for Continuous Delivery?
  • Hands-on Using Jenkins and Tomcat

Let us quickly understand how Continuous Delivery works.

Five Can’t-Miss Sessions from DevOps World | Jenkins World 2018

It’s that time of year! DevOps World | Jenkins World is right around the corner and, with a new name and an added location – Nice, France – it’s shaping up to be bigger and better than ever. This year’s event in San Francisco, from September 16-19, will host more than 70 sessions covering a variety of DevOps and Jenkins topics, from security, pipeline automation, and containers to DevOps adoption and continuous delivery best practices.

In addition to keynotes from Kohsuke Kawaguchi, the creator of Jenkins and CTO at CloudBees; Sacha Labourey, CloudBees’ CEO and co-founder; and Dr. Nicole Forsgren, CEO and chief scientist of DORA, there will be a number of sessions conducted by industry practitioners.

Extreme IT Automation

Following the Extreme IT Automation webinar on DevOps.com, I wanted to provide a summary and explore some of the topics discussed. You can watch the full webinar here.

How DevOps Happens

DevOps transformed the software development process. It facilitates continuous delivery — that is to say, faster and more efficient releases without a corresponding increase in operational risk. But DevOps is itself predicated on a number of pillars: culture, lean process design, measurement, sharing, and automation.

The DevOps Toolchain per Lee Atchison, New Relic

Thanks to Anders Wallgren, CTO, and Sam Fell, V.P. of Marketing at Electric Cloud, for hosting and moderating our roundtable discussion on the necessary elements of a healthy DevOps toolchain with Lee Atchison, Senior Director for Cloud Architecture at New Relic; Ian Buchanan, Principal Solutions Engineer for DevOps at Atlassian; Ravi Gadhia, Solutions Engineer at GitHub; Mark Miller, Senior Storyteller and DevOps Evangelist at Sonatype; Prashant Mohan, Product Manager at SmartBear; and yours truly.

You can see the full podcast here.

Following are Lee’s thoughts on the four topics we discussed:

The DevOps Toolchain as a Value Stream

I can’t express enough the value of having a single set of integrated tools that show you a broad picture across the entire spectrum of that process. And you know, there is so much that can be learned from one side of the process by knowing how the other side of the process is working. And you can’t get that if you have a set of tools that aren’t working together. So whatever the tool is you’re using, and there are lots of good tools, the tools should be able to incorporate data from all sorts of different areas and make a consistent whole that everyone has access to.

Not just a few people that might need the ops-view as part of their job, give that data and that view to everybody. Because everyone can look and see the data from a different angle and see a different aspect of it. And that provides value to the organization, understanding how the other parts of the organization are working.

Using Tools to Align People and Teams

That’s a good question. You know, there’s a couple different aspects when you think of monitoring. And certainly, if you look at where monitoring sits in the toolchain it looks like it’s at the end, right? It’s after you deploy, it’s out in production, you monitor to make sure it works. And that’s certainly important to catch hiccups that occur during your deploy and latency problems that occur. But that’s just one aspect of the monitoring that’s important to DevOps and to the teams that are dealing with DevOps.

Monitoring is also about the front-end of the process. It’s about getting data into the pipeline to support planning and improvements and to identify the areas where you need to make improvements to your application. And getting the data that supports decisions on what you should be working on into the front-end.

So monitoring is this piece really that takes what could be a straight line and turns it into the Mobius loop of the DevOps methodology. It’s the thing that makes it a continuous process. And that’s really what’s critical here. But it’s also a tool that helps with the collaboration process throughout all steps. And much like you were talking about the security aspect, understanding how monitoring the application as well as monitoring the toolchains affects everyone in that process, it allows you to help with problem diagnostics in the application and allows you to handle problem diagnostics within the toolchain.

It gives you the visibility you need. And as I was saying earlier, it gives everyone in the organization the visibility they need to understand what’s going on everywhere. And by giving everyone the same level of visibility, it makes decisions more uniform, it makes problems easier and quicker to find, to identify where the problem exists and therefore, makes the resolution of those problems and the cycle time through this whole loop much, much faster.

But I also think of it this way, too: you run into a problem, and you have to fix the problem of the moment, but then you want to start looking at the post-mortem of what’s causing these issues, what is causing these problems to occur. And you may find that there are certain parts of the application that are making significant changes very quickly and perhaps not going through the cycle quite as reliably and efficiently, especially through some of the verification or security steps, as they should be.

So understanding the visibility that monitoring can give you in the toolchain itself can give you the visibility on the teams that need to make some improvements or the parts of the application that are working great or which ones are not and where you really need to focus the most effort.

Is There One Right Tool?

We all work for companies and we all have our own agendas. But really, what’s most important is to make sure you use a tool that’s appropriate for your job and there isn’t one tool that will solve all of the needs of all of the parts of DevOps value chain nor is there one tool that will solve everyone’s needs for everybody that might approach the problem.

So, when we talk about monitoring, what’s the number one most important thing about monitoring? It isn’t that you use New Relic, it’s that you make sure you monitor at all. Make sure you’re monitoring. Whatever tool you use is less important, just make sure you’re using a tool and you’re using it appropriately.

With that said, I will say there is value in having a single tool for a given purpose and then making sure all the tools interconnect. We were talking earlier about the value chain, of allowing everyone access to all the same data, really requires your tools to be a unified set of tools that work well together and can create the unified view with one set of data that gives you the answers to the problems that you need at the same time versus conflicting data from different teams because they’re using different tools for different purposes and all the complexity involved with that. So, while there’s value in allowing teams to make decisions, there’s also value in standardization in a lot of respects.

Adapting to the Changing Environment

APIs help give you the ability to swap tools out while reducing friction. But it’s not just APIs, it’s also integrations. And integrations are not APIs. As a company, I can say, “Hey, I have an API that does everything I need or want anybody to be able to do,” and call it a day. That doesn’t do anything to help if all the other tools have their own APIs and there’s no integration between those APIs. The glue is really what I’m talking about here.

It’s not just building APIs to make sure people can get data out of our application and out of our tools. It’s making sure our tools work with all of the tools that are involved in the toolchain upstream, downstream, side stream from all of us. To make sure that they work together in a way that’s effective for our customers. And so, it’s not just the APIs it’s making sure the APIs are correct, work together, and integrate with the products that our customers care about. And that’s the only way you’re going to make sure a tool can easily slide in and slide out of any particular slot and minimize the stress involved in doing that.

Original Link

The Five Key ”-tions” of Enterprise Scale Continuous Delivery

For a large organization, implementing and successfully adopting continuous delivery (CD) within an app system is a challenging, even daunting, task. The idea of overhauling a preexisting way of working can seem like taking a leap into the dark, which often causes executives to balk at the idea. Yet, the jump is not actually that large. The technical issues are well understood and there are plenty of tools available to get a basic, functional infrastructure ‘working’ with a bit of effort.

However, ‘working’ is not enough for modern enterprises. Large organizations are concerned with higher level capabilities and operational approaches that reflect the value of the actual scale at which they exist. These companies want to maximize and exploit the advantages of their size. Enterprises that cannot take advantage of their scale will struggle as competitors and market disruptors alike seek to alter the status quo.

How does the enterprise attain and deliver on the promise of CD at scale? To answer that question, I’d like to look at an interrelated list of “-tions”:

Delegation

There is no escaping the fact that large enterprises are concerned with who does what. Old-school hierarchies usually think of that in terms of pushing “authority” to do something “downward.” Indeed, this is the way most people think of the word ‘delegation’. In organizations attempting to adopt enterprise scale continuous delivery, there is also the challenge of horizontal delegation: from one team, or silo, to another. Continuous delivery only works when long communication, decision-making, or approval-granting chains are minimized or eliminated. This can be a challenge for enterprises that are used to very bureaucratic approaches to IT. Delegating control to those closest to the work is critical if the organization is to achieve agility.

Collaboration

Collaboration is closely related to delegation. It is an important enabler of trust, which enhances the willingness to delegate. It also incorporates key communications channels and behaviors that let people learn, share knowledge, and generally improve the organization’s ability to execute rapidly. Maintaining a rapid execution cadence is a key goal of CD and ultimately a reason enterprises are interested in it as a practice.

Large organizations, given their disparate teams, technologies, and even geographies, must deliberately invest time and effort into fostering these interactions. Merely saying people are ‘empowered’ is not enough: goals and incentives must be aligned within and, crucially, across organizations in the same value streams. Otherwise, team-centric interests can interfere with delivering new functionality.

Automation

Automation is the capability that makes enterprise scale continuous delivery possible. It is the backbone of a modern software factory.

Automation serves as a source of great efficiency for repetitive tasks, executing them in a consistent, fast and reliable fashion. This shifts staff time from repetitive, low-value tasks to creative efforts that bring higher business value resulting in greater productivity for the business.

Successful automation relies on a clear, cross-functional understanding of the overall process, and its value increases the more broadly it is used. Good collaboration is crucial to achieving the understanding required to establish and extend automation within processes, particularly those that cross team boundaries.

Deduplication

Being a large enterprise enables you to leverage scale and achieve higher levels of efficiency. This is achieved by sharing resources and reducing duplication of effort, which can be a major management challenge for enterprises because, as we have discussed, CD pushes a lot of activities down to teams. This creates pressure in the enterprise, as CD practices can lead to teams duplicating facilities that other teams have created.

However, the knee-jerk reaction to deduplicate efforts may actually yield a worse outcome for the enterprise as a whole. There is mounting research that the traditional pursuit of efficiency will generally yield worse business outcomes when it comes to delivering software. That said, too much duplication will also yield negative outcomes; remember that even a high-speed tech titan like Google enforces a single code repository standard because that is how it manages its intellectual property.

Striking the correct balance requires good automation. By eliminating repetitive tasks, it can empower teams to use a central resource in a self-service manner. Knowing what is critical to maintain centrally, such as intellectual property or compliance, is a key part of intelligent delegation.

Instrumentation

The first four “-tions” are all related to each other and there is a delicate balance required to successfully adopt CD. The fifth “-tion,” Instrumentation, sits at the heart of the CD practice and enables all participants in the enterprise to contribute to its success. Instrumentation provides the trust, transparency, and communication required to maintain and improve the CD practice in a sustainable way that benefits a large enterprise.

The five “-tions” outlined above serve as a framework that can help focus conversation and effort around enterprise scale CD. It can be a tricky balancing act, but the research published in books such as Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, by Nicole Forsgren, Jez Humble, and Gene Kim, shows that companies that successfully adopt modern practices such as CD substantially outperform their rivals.

Enterprises that want the benefits of DevOps practices such as CD must act deliberately. Early successes from individual teams or “grassroots” efforts are good and provide real evidence that such approaches are feasible. However, without a coherent framework such as these five “-tions,” scaling the early successes across the numerous and diverse teams within an enterprise will be impossible.

Original Link

What Is the Importance of DevOps Certification?

While conducting recruitment drives, HR managers look for different characteristics in a prospective employee, with one and only one criterion in mind: how will this particular candidate add value to my enterprise, especially in comparison with the others on the list? One thing that can surely tilt the scales in your favor is a good certification from a recognized institute. Let us talk about the importance of DevOps certification today. We will first look at the important DevOps certifications to pursue, and then at the benefits they provide.

List of DevOps Certifications

There are several DevOps certification courses available, but the following are among the most widely recognized.

DevOps Foundation® Certification

The DevOps Foundation course gives a basic understanding of key DevOps terminology to ensure everybody is speaking the same DevOps language, and highlights the benefits of DevOps in supporting organizational success.

DevOps Leader (DOL)®

The DevOps Leader course is a unique and practical experience for participants who want to take a transformational leadership approach and make a substantial impact inside their enterprise by implementing DevOps practices.

DevSecOps Engineering (DSOE)℠

This course explains how DevOps security differs from other security approaches and provides the instruction needed to understand and apply data and security sciences.

Continuous Delivery Architecture (CDA)℠

This course is intended for participants who are engaged in the design, implementation, and management of DevOps deployment pipelines and toolchains that support Continuous Integration, Continuous Delivery, Continuous Testing, and possibly Continuous Deployment.

DevOps Test Engineering (DTE)®

This comprehensive course covers testing concepts in a DevOps environment, including the active use of test automation, testing earlier in the development cycle, and instilling testing skills in development, QA, security, and operations teams.

Why Is DevOps Certification Important?

DevOps certification comes in handy in many places and you can benefit a lot from it. There are several reasons it is important.

Benefit Your Organization

By acquiring a DevOps certification, you can offer your organization many measurable benefits. The DevOps philosophy promotes increased collaboration and communication between operations and development teams. More code reaches production because of a shorter development cycle: what used to take 3-6 months can now take just a couple of hours with DevOps practices.

Better Job Opportunities

DevOps is a relatively new idea in business, and an ever-increasing number of organizations are adopting its practices. There is a shortage of certified experts who can effectively bring DevOps skills to the organizations they are associated with. A DevOps certification will help you grow as an IT professional, and better openings for your desired job profiles will come your way.

Improved Skills & Knowledge

The DevOps ideology encourages a completely new way of thinking and decision-making. The business and technical benefits of DevOps are many and you can learn how to implement them in your organization. You learn to work in a team consisting of cross-functional team members: QA, developers, operation engineers, and business analysts.

Increased Production & Effectiveness

With a DevOps certification on your resume, your efficiency as an IT expert will increase. In a typical IT environment, a considerable amount of time is wasted waiting on other people and other software. Everybody likes to be productive at work, and time spent waiting is a sure source of frustration. With DevOps, you can eliminate much of this waste and invest the time in adding value to your organization and your staff.

Increased Salary

According to a recent survey, DevOps-certified experts are among the most generously compensated professionals in the IT business. Market demand for them is expanding quickly as adoption grows worldwide, and this pattern does not seem likely to change at any point in the near future.

Rejuvenates the Employees

DevOps certification also helps the organization: employees acquire new technical practices and keep learning by pursuing further certifications. It gives a genuinely needed shake-up to the working lives of employees. With new workplaces (shared spaces rather than separate ones for every division), people interact, learn, and perhaps find a few new approaches to tasks that were being performed in the customary way. DevOps is like a breath of fresh air in the IT business.

Conclusion

DevOps is steadily acquiring a good share of the IT industry. The demand for professionals who are skilled in DevOps is at an all-time high, and this pattern is likely to continue for a while. The career path and future scope look very bright if you attain the necessary DevOps certifications. Without certification, recruiters may not take your skills at face value; the certification acts as a testimonial of your skills, which is why it is important to acquire the necessary certifications.

Original Link

How Secure Is Your CICD Pipeline?

In this blog, I’ll make the case that a CICD pipeline implemented using the GitOps methodology is a more secure way to automate deployment.

Consider the following questions:

  • Do you have direct access to the container image repository?
  • Do you have direct access to the production cluster?
  • How do you know what’s actually running in your cluster?
  • Can you tell when expected vs actual state diverges?
  • Would you have to re-run every CICD pipeline to recover a cluster after a disaster?

What’s a CICD Pipeline?

A brief perusal of the results of a Google image search for “CICD pipeline” produces a vast number of colorful and sometimes bewildering examples.

A CICD pipeline is the combination of Continuous Integration (CI) and Continuous Deployment or Continuous Delivery (CD), with automation such that a commit to a source code repository triggers the build, test, and packaging of an application that’s then deployed to a cluster.

CI is well understood to be a best practice, so I shall not discuss it further. CD has been around as a concept for quite a long time, but only recently has it been adopted as a common practice.

Most Continuous Integration (CI) systems/servers now have a deployment plugin or configuration for a container orchestrator like Kubernetes. This makes it easy to connect the CI system’s output to the target environment for the application.

A simplified pipeline, shown below, starts with code on the developer machine (Dev) being pushed to a code repository (e.g. git), where it’s picked up by a CI system, which runs some tests and then builds an artifact (a container image), which is pushed to the image repository and then deployed to the orchestrator (Kubernetes).
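As a rough illustration of that flow (the stage names below are hypothetical, not tied to any particular CI product), the pipeline can be modeled as an ordered list of stages, each driven by the same commit:

```python
# Illustrative model of the simplified pipeline described above:
# commit -> CI (test, build) -> image repo -> orchestrator.
# Stage names are assumptions for the example only.

PIPELINE_STAGES = ["push", "test", "build", "publish", "deploy"]

def run_pipeline(commit_sha: str) -> list[str]:
    """Run each stage in order, returning an audit log of what ran."""
    return [f"{stage}:{commit_sha}" for stage in PIPELINE_STAGES]

log = run_pipeline("abc123")
print(log[0])   # push:abc123
print(log[-1])  # deploy:abc123
```

In a real pipeline each stage would invoke the corresponding tool; the point is simply that one commit drives every step automatically, all the way to deployment.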

Security by Design

The OWASP project (which describes how to produce secure applications by design) lists ten principles that should be applied when designing secure applications. I’ve highlighted a few to consider in the context of a CICD pipeline.

  • Minimize attack surface area
  • Establish secure defaults
  • Principle of least privilege
  • Principle of defense in depth
  • Fail securely
  • Don’t trust services
  • Separation of duties
  • Avoid security by obscurity
  • Keep security simple
  • Fix security issues correctly

Let’s look at the pipeline with these principles in mind, consider the credentials and access typically assigned, and then what’s actually needed for each step. (RW for Read Write access, and RO for Read Only access.)

Woah! There’s a lot of red ink there. It’s easy to see how a simple pipeline can violate some of the principles listed above. There’s an easy fix for some of that – by removing the developer’s direct access to the image repo and cluster, the attack surface can be reduced, privileged access can be minimized, and duties can be separated.

Here’s an improved pipeline:

Necessary RW access is now marked in blue. The dotted lines indicate where we might consider the logical security boundaries to be if they’re considered as separate duties. Defense in depth is improving as the need to cross those boundaries is reduced.

The CI system is still looking like a pretty interesting target, because it’s got credentials for the source code, the image repo, and the cluster, and it crosses two logical security boundaries. (If the CI system above is also maintaining the current state by updating YAML manifests, there’s another set of credentials here too.)
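One way to make the credential discussion concrete is to tabulate which read-write credentials each component holds in the two pipelines. The sketch below uses illustrative RW/RO grants; the component names and access sets are assumptions for the example, not a prescription:

```python
# Hypothetical credential maps: which credentials each pipeline component
# holds, before and after removing the developer's direct access.
# The RW/RO grants are illustrative only.

simple = {
    "developer": {"code_repo": "RW", "image_repo": "RW", "cluster": "RW"},
    "ci_system": {"code_repo": "RO", "image_repo": "RW", "cluster": "RW"},
}

improved = {
    "developer": {"code_repo": "RW"},  # code access only
    "ci_system": {"code_repo": "RO", "image_repo": "RW", "cluster": "RW"},
}

def rw_count(access_map: dict) -> dict:
    """Count the read-write credentials held by each component."""
    return {who: sum(1 for g in grants.values() if g == "RW")
            for who, grants in access_map.items()}

print(rw_count(simple))    # {'developer': 3, 'ci_system': 2}
print(rw_count(improved))  # {'developer': 1, 'ci_system': 2}
```

Note that even in the improved pipeline the CI system still holds write credentials for both the image repo and the cluster, which is exactly why it remains an attractive target.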

Is the CI system the most well-secured piece of your infrastructure?

A Better Approach

The GitOps way to address this is by running a reconciliation operator in the cluster itself. It operates on a configuration git repo, with separate credentials. The operator reconciles the desired state as expressed in the manifest files, stored in the git repo, against the actual state of the cluster.

There’s no credential leakage across the boundaries, the CI system can operate in a different security “zone” than the target cluster. Each pipeline component only needs a single RW credential. Now, you can “keep your secrets close” because the cluster credentials never leave the cluster itself.

Automating releases by writing them into git and only applying changes when they’ve already happened in git ensures that a record of the desired state of the cluster doesn’t depend on the cluster itself. If the cluster is lost, it can be restored quickly from the independent record in the config git repo, without having to re-run build pipelines for the entire application — thus improving availability. The config repo has its own set of credentials — which adds another layer of defense.

Developer-friendly workflows, reviews and pull requests are enabled on the config repo – which is independent of the cluster itself – meaning there’s a complete audit trail of every tag update and config change, regardless of whether it was made manually or automatically.

Finally – with a separate record of the desired state to compare the actual state with, it’s possible to alert when divergence occurs.
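The reconciliation idea described above can be sketched as a pass that diffs desired state (manifests from the config repo) against actual cluster state, applies the difference, and reports any drift. All names and state shapes here are hypothetical:

```python
# Minimal sketch of a GitOps reconciliation pass: compare the desired
# state (manifests from the config git repo) with the actual cluster
# state, apply the difference, and report what diverged.

def reconcile(desired: dict, actual: dict):
    """Return (new_actual_state, list_of_diverged_resources)."""
    diverged = [name for name, spec in desired.items()
                if actual.get(name) != spec]
    # Apply the desired spec for each diverged resource.
    new_actual = {**actual, **{n: desired[n] for n in diverged}}
    return new_actual, diverged

desired = {"web": {"image": "web:v2"}, "db": {"image": "db:v1"}}
actual  = {"web": {"image": "web:v1"}, "db": {"image": "db:v1"}}

cluster, drifted = reconcile(desired, actual)
print(drifted)             # ['web']
print(cluster == desired)  # True
```

A real operator runs this loop continuously against live cluster objects; the essential property is the same either way: the config repo, not the CI pipeline, is the source of truth, and the divergence list is the hook for alerting.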

We can derive two assertions:

  1. CICD pipelines that require the cluster control API endpoint to be exposed to the internet and place sensitive cluster credentials in external CI systems are an anti-pattern.
  2. CI-driven CD patterns that change the state of the cluster without recording the change are an anti-pattern.

Original Link

Continuous Discussions Podcast, Episode 89: The DevOps Toolchain [Podcast]

I just took part in a great podcast hosted by Electric Cloud in partnership with DZone with a round-table discussion on the importance of a healthy DevOps toolchain.

Participants in the podcast included:

Prashant Mohan, Product Manager for SmartBear’s software testing tools. @Prashz91

Lee Atchison, Senior Director, Strategic Architecture at New Relic. @leeatchison

Ian Buchanan, Developer Advocate at Atlassian. @devpartisan

Mark Miller, DevOps Evangelist at Sonatype. @EUSP | alldaydevops.com

Ravi Gadhia, Senior Solutions Engineer at GitHub.

Our hosts were:

Anders Wallgren, CTO at Electric Cloud. @anders_wallgren

and

Sam Fell, V.P. Marketing at Electric Cloud. @samueldfell

Key takeaways include:

The DevOps Toolchain as a Value Stream

It’s important to look at the entire value stream, examining the lead-time ladder and identifying bottlenecks where automation will add the greatest value.

You need to be able to see upstream and downstream to ensure you are adding value and not duplicating effort. Have a single set of integrated tools giving you vision into how the entire process is working. 

Think about what the code is seeing and how it’s being affected.

Use Tools to Align People and Teams 

Given that the greatest challenge to implementing a DevOps methodology is culture, it’s important for every member of the team to have visibility and access to metrics for every link in the toolchain. 

DevOps is enabling cultural transformation while security overlays the entire process.

Value stream mapping is the biggest hurdle companies need to get over. Once you understand the whole process and the outcomes you are trying to achieve, every member of the team is able to see where they are contributing and where they are hindering the process.

Is There One Right Tool?

No; however, there is a correct set of tools for every organization, and you determine it by choosing the right tool for the problem you are working to solve. Every tool has a purpose, and it must integrate with every other tool to provide a holistic view of the process. There is value in standardization; however, be aware that the right tool might change. As such, it must be possible for clients to swap tools in and out.

The tools have to work together. You cannot expect companies to “rip and replace.” It’s too expensive, time-consuming, and there are a lot of people to be trained on the new tool.

Adapting to the Changing Environment

We need to be able to abstract out the details of the actual tools and focus on the outcome of the process versus the tool that’s being used to achieve the outcome.

Tools must integrate into a platform that’s easy and intuitive for team members to use. The tools need to provide access to data and visibility into the entire pipeline for auditability and traceability.

Think holistically about the entire DevOps toolchain.

Check out the full episode:

You can find Electric Cloud’s write-up of the podcast here, and check out the previous episodes of Continuous Discussions (#c9d9).

Original Link

The Continuous Delivery Challenge in the Enterprise

Continuous delivery (CD) as an engineering practice is taking hold at enterprises everywhere. Most forward-looking app development efforts rely on CD to one extent or another. Typically, that is in the form of a functionally automated pipeline for code promotion and some test execution. Some amount of the delivery work, such as database changes, provisioning or configuration management tickets, and production signoffs, is still done manually. These forward-looking teams, therefore, have a CD pipeline that “works” reasonably well.

There is an old engineering adage that accurately describes the attitude many such teams have towards adopting CD: “First, make it work. Then, make it work well. Finally, make it work quickly and efficiently.” Today, enterprises are getting through the first and second phases of that adage in their CD adoption efforts, but they are going to want to reach the third eventually, and that’s where the difficulty lies. Organizations in this position should start planning for phase three now to avoid the expense and disruption of bringing it under control down the line.

A Pipeline of Pipelines

Enterprises attempting to transform their app delivery approaches typically rely on team-level efforts. As a result, they usually have app delivery pipelines in different areas of the business. Many of those current efforts have a very limited scope, only focusing on the basic functional tasks of the specific technical environments of the specific application system they support. Sometimes the focus may even just be on a subset of those environments. Furthermore, the pipelines are often duplicative of each other across teams, even if the technology stacks are the same. There is nothing but manual effort and spreadsheets coordinating the pipelines.

This is a result of teams’ natural, but narrow, focus on their functional needs. The narrow focus can create architectural problems when it comes to coordinating with adjacent delivery pipelines, especially if the tech stacks among the adjacent teams differ. That is because most team-level tools do not support the variety of tech stacks present in many enterprises. As a result, these function-focused automated pipelines still rely on manual management techniques for higher-level needs, such as cross-app dependency management. These enterprise needs are less technically focused or have other, larger business aspects that are beyond the scope of a team’s responsibility, even if the team’s activities impact those needs.

As development teams build out newer applications that expand to become critical pieces of enterprise software infrastructure, they discover that they now need something to manage their “pipeline of pipelines.” No matter how consistently the involved teams’ individual pipelines work, their narrow focus limits visibility and constrains their ability to effectively manage complexity for the business stakeholders. That blinds business stakeholders to the progress of key features and defects and is exacerbated when there are dependent projects. Existing data reported by team-level tools does not help (and may even hinder) coordination, security, quality, or compliance efforts, because the data resides in so many formats and tools across the various teams that it is impossible to correlate.
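The cross-app dependency problem described above is, at its core, a release-ordering problem: when application B’s pipeline consumes an artifact from A, and C depends on both, someone (today, often a spreadsheet) has to work out a safe order in which to run the pipelines. A minimal sketch of that coordination, with hypothetical application names:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical cross-app dependencies: each pipeline lists the
# pipelines whose artifacts it consumes.
dependencies = {
    "billing-ui":  {"billing-api"},
    "billing-api": {"shared-auth"},
    "reports":     {"billing-api", "shared-auth"},
    "shared-auth": set(),
}

# A "pipeline of pipelines" must run upstream pipelines before the
# downstream pipelines that depend on their artifacts.
release_order = list(TopologicalSorter(dependencies).static_order())
print(release_order)  # shared-auth runs first; billing-ui and reports run last
```

Even this toy version shows why spreadsheets break down: the safe order changes every time a dependency is added, and nothing in the individual team pipelines encodes that knowledge.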

Enterprise-Grade Continuous Delivery Solutions

So, how do enterprises progress from scattered CD pipelines that merely “work” to CD pipelines that “work quickly and efficiently” at the scale they require? They switch to a model-driven approach that enables them to leverage the functional effectiveness of team-level efforts and, without throwing those away, thread them into a coherent and manageable whole. These models — a hallmark of enterprise-grade CD tools — enable enterprises to gain a clear, end-to-end understanding of complex value streams at both the technical “works” level and the management “works quickly and efficiently” level.

A model-driven approach to bringing disparate team efforts together:

  • Eases coordination among dependencies by supporting heterogeneity in tools and team preferences and supporting the breadth of enterprise technology equally (containers, cloud, platforms, legacy distributed, mainframe)
  • Increases visibility by providing a broad framework that collects data (both statistical and business-stakeholder relevant, e.g. feature/defect progress, etc.) for coherent reporting to the business and management
  • Improves security by providing a consistent framework for enterprise security and compliance concerns, managed consistently for all teams with minimal duplication

The emerging awareness that CD requires different or additional tools to manage the complexity in an enterprise context is a natural outcome of the constantly evolving modernization of software delivery practices. It is an example of why CD practitioners and evangelists talk about the adoption of such practices as a “journey.” With that journey come learnings that shift team managers from being excited that something works “well” to feeling an urgency to take the next step: making their pipelines “work quickly and efficiently.”

Original Link
